Deploy systemd services and one-shot jobs to edge devices (without SSH)
With m87, you can deploy systemd services and run one-shot jobs on edge devices asynchronously.
You register what should run, and the device applies it when it has connectivity — even if it’s offline now, behind NAT/CGNAT, or reboots in between.
This is the right workflow for host-level agents, hardware bring-up, OS updates, migrations, and long-running system services.
The problem
A lot of edge workloads don’t belong in Docker:
- device-side agents (telemetry, gateways, robotics runtimes)
- services that must run before Docker
- privileged host access (I2C, GPIO, udev)
- OS-level updates or maintenance tasks
Typical approaches require SSH, VPN, or a CI runner that can reach the device. That breaks down for NATed or intermittently connected devices.
The solution: async jobs on the device
m87 deployments can run arbitrary steps on the device:
- Ship files as part of a job workspace (ephemeral or persistent)
- Execute commands in that workspace
- Copy artifacts from the workspace onto the host filesystem (e.g. `/etc/systemd/system`)
- Enable/restart services with systemd
- Define stop/undo behavior
Two common patterns:
- systemd service deployment: build/install files + copy to host + enable + start
- one-shot job: run once (update/migrate/configure) and finish
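In spec form, both patterns share the same skeleton. A minimal sketch (field names follow the worked examples later in this guide; the job id and `hello.sh` are placeholders):

```yaml
jobs:
  - id: hello_job          # placeholder id
    type: service
    enabled: true
    workdir:
      mode: ephemeral      # or: persistent
    files:
      hello.sh: |
        #!/usr/bin/env bash
        echo "hello from the device"
    steps:
      - name: run-hello
        run: bash hello.sh   # steps run with the workspace as cwd
        timeout: 1m
```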
Quick start
1) Create and activate a deployment
m87 <device> deployment new --active
2) Deploy a run spec
m87 <device> deploy my-systemd-service.yml
3) Monitor execution
m87 <device> deployment status --logs
How job files work
In m87, files are written into a job workspace. You choose whether the workspace is:
- ephemeral: recreated each run (good default)
- persistent: preserved across runs (useful for caches/state)
Steps run with the workspace as the working directory, so you can reference files by name (e.g. telemetry-agent.sh).
If you want files to live on the host filesystem (e.g. systemd unit files), you copy them from the workspace during a step.
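As a sketch of that copy step in isolation (the paths here are temporary stand-ins for the real host locations, so the snippet can run anywhere): `install` copies the file and sets its mode in one command, and rerunning it is safe, which is why it is a good fit for steps that may execute more than once.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative stand-ins for the real paths (job workspace, /usr/local/bin).
workspace=$(mktemp -d)   # steps run with the workspace as cwd
host_bin=$(mktemp -d)

# A file shipped via the job's files: section would already be in the workspace.
printf '#!/usr/bin/env bash\necho ok\n' > "$workspace/telemetry-agent.sh"

# install = copy + set mode in one step.
install -m 0755 "$workspace/telemetry-agent.sh" "$host_bin/telemetry-agent.sh"

# Rerunning is idempotent: same result, no error.
install -m 0755 "$workspace/telemetry-agent.sh" "$host_bin/telemetry-agent.sh"

stat -c '%a' "$host_bin/telemetry-agent.sh"   # → 755
```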
Systemd service deployment (example)
This example writes a script and a unit file into the job workspace, then copies them onto the host and enables the service.
```yaml
jobs:
  - id: telemetry_agent
    type: service
    enabled: true
    workdir:
      mode: ephemeral
    files:
      telemetry-agent.sh: |
        #!/usr/bin/env bash
        set -euo pipefail
        while true; do
          echo "telemetry agent alive: $(date -Is)"
          sleep 5
        done
      telemetry-agent.service: |
        [Unit]
        Description=Telemetry Agent
        After=network-online.target
        Wants=network-online.target

        [Service]
        Type=simple
        ExecStart=/usr/local/bin/telemetry-agent.sh
        Restart=always
        RestartSec=2
        User=root

        [Install]
        WantedBy=multi-user.target
    steps:
      - name: install-files
        run: |
          install -m 0755 telemetry-agent.sh /usr/local/bin/telemetry-agent.sh
          install -m 0644 telemetry-agent.service /etc/systemd/system/telemetry-agent.service
        timeout: 2m
      - name: daemon-reload
        run: systemctl daemon-reload
        timeout: 1m
      - name: enable
        run: systemctl enable telemetry-agent.service
        timeout: 1m
      - name: restart
        run: systemctl restart telemetry-agent.service
        timeout: 1m
      - name: status
        run: systemctl --no-pager --full status telemetry-agent.service
        timeout: 1m
    stop:
      steps:
        - name: stop
          run: systemctl stop telemetry-agent.service
          timeout: 1m
        - name: disable
          run: systemctl disable telemetry-agent.service
          timeout: 1m
        - name: cleanup
          run: |
            rm -f /etc/systemd/system/telemetry-agent.service
            rm -f /usr/local/bin/telemetry-agent.sh
            systemctl daemon-reload
          timeout: 2m
```
Notes:
- The unit references `/usr/local/bin/telemetry-agent.sh` because the job installs it there.
- Steps are idempotent (`install`, `restart`), so reruns are safe.
- If you want to avoid writing to `/usr/local/bin`, install under your own namespace (e.g. `/opt/m87/...`).
Updating a systemd service
Edit your YAML (script, unit file, config), then redeploy:
m87 <device> deploy my-systemd-service.yml
The device applies the updated spec when it has connectivity.
One-shot jobs (example)
One-shot jobs are ideal for:
- OS updates (`apt upgrade`, `dnf update`)
- migrations (DB schema, file formats)
- provisioning (create users, write config, udev rules)
- device bring-up (enable kernel modules, set sysctls)
This example updates packages and reboots if needed.
```yaml
jobs:
  - id: os_maintenance
    type: service
    enabled: true
    workdir:
      mode: ephemeral
    steps:
      - name: update
        run: |
          set -e
          apt-get update
          DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
        timeout: 45m
        retry:
          attempts: 2
          backoff: 30s
      - name: reboot-if-needed
        run: |
          if [ -f /var/run/reboot-required ]; then
            echo "reboot required"
            reboot
          else
            echo "no reboot required"
          fi
        timeout: 2m
    on_failure:
      undo: executed_steps
      continue_on_failure: false
```
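The reboot check relies on `/var/run/reboot-required`, the flag file that Debian/Ubuntu package hooks create when an upgrade needs a restart. A minimal sketch of that logic, with the flag path parameterized so it can be exercised off-device (the function name is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mirrors the job's reboot-if-needed step. On Debian/Ubuntu, package hooks
# create /var/run/reboot-required when an upgraded package needs a restart.
check_reboot() {
  local flag="${1:-/var/run/reboot-required}"
  if [ -f "$flag" ]; then
    echo "reboot required"   # a real step would invoke `reboot` here
  else
    echo "no reboot required"
  fi
}

check_reboot /nonexistent-flag   # → no reboot required
```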
Notes:
- Even though this is “one-shot”, it can be modeled as a job with steps that complete.
- If a reboot happens mid-run, the device will reconnect and continue reconciliation after startup.
Removing jobs
Remove a job from the active deployment:
m87 <device> undeploy telemetry_agent
Any defined stop steps will run.
Monitoring and diagnostics
- View device status and observation summaries:
m87 <device> status
- See deployment execution and step logs:
m87 <device> deployment status --logs
For continuous health monitoring, add observe checks (liveness/health/record/report).
Common patterns
Install a unit file that references a packaged binary
- Use steps to install packages first (`apt install …`)
- Put the unit file in `files`
- Copy it onto the host in a step
- `daemon-reload`, `enable`, `restart`
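Sketched as a run spec, following the same fields as the service example above (the package name, unit contents, and paths are placeholders, not a prescription):

```yaml
jobs:
  - id: node_exporter          # placeholder job id
    type: service
    enabled: true
    workdir:
      mode: ephemeral
    files:
      node-exporter.service: |
        [Unit]
        Description=Node Exporter

        [Service]
        ExecStart=/usr/bin/prometheus-node-exporter
        Restart=always

        [Install]
        WantedBy=multi-user.target
    steps:
      - name: install-package
        run: apt-get update && apt-get install -y prometheus-node-exporter
        timeout: 10m
      - name: install-unit
        run: install -m 0644 node-exporter.service /etc/systemd/system/node-exporter.service
        timeout: 1m
      - name: activate
        run: systemctl daemon-reload && systemctl enable --now node-exporter.service
        timeout: 1m
```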
Deploy a templated configuration
- Put config files into `files`
- Copy them into place in a step
- Restart the service
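One way to render such a template inside a step (a sketch using `sed`; `envsubst` or any templating tool works equally well, and the destination directory here is a stand-in for a real host path so the snippet can run anywhere):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Template as it would arrive via the job's files: section; __TOKENS__ are placeholders.
dest=$(mktemp -d)   # stands in for a host path like /etc/telemetry
cat > agent.conf.tmpl <<'EOF'
endpoint = __ENDPOINT__
interval = __INTERVAL__
EOF

# Render the template, then install it into place; rerunning is safe.
# A real job would follow this with a systemctl restart step.
sed -e 's|__ENDPOINT__|https://example.com/ingest|' \
    -e 's|__INTERVAL__|5s|' agent.conf.tmpl > agent.conf
install -m 0644 agent.conf "$dest/agent.conf"

cat "$dest/agent.conf"
# → endpoint = https://example.com/ingest
#   interval = 5s
```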
Run privileged host setup
- sysctl changes
- kernel modules
- udev rules + `udevadm control --reload-rules && udevadm trigger`
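Sketched as a one-shot spec (the sysctl key and udev rule contents are placeholder examples; substitute your own host setup):

```yaml
jobs:
  - id: host_setup
    type: service
    enabled: true
    workdir:
      mode: ephemeral
    files:
      99-custom.rules: |
        # placeholder udev rule
        SUBSYSTEM=="tty", GROUP="dialout", MODE="0660"
    steps:
      - name: sysctl
        run: |
          echo "net.core.rmem_max=8388608" > /etc/sysctl.d/99-edge.conf
          sysctl --system
        timeout: 1m
      - name: udev-rules
        run: |
          install -m 0644 99-custom.rules /etc/udev/rules.d/99-custom.rules
          udevadm control --reload-rules && udevadm trigger
        timeout: 2m
```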
FAQ
Do I need SSH?
No. Devices pull deployments over their outbound connection.
Does this work behind CGNAT?
Yes. No inbound connectivity is required.
Can a job run only once?
Yes. Use a steps-only job that completes. For recurring checks, use observe jobs.
What about secrets?
Do not hardcode secrets into specs. Use your platform’s secret distribution mechanism (if configured) or retrieve secrets at runtime.