# 🛡️ Backup Monitor
A self-hosted backup monitoring dashboard with MongoDB backend, designed for Borgmatic (but works with any tool that can send HTTP requests).
## Features
- Dashboard – Real-time overview of all backup hosts with status cards
- Host Management – Add, edit, disable, delete hosts via Web UI (no config files)
- History – 90-day retention with per-day calendar heatmap and size charts
- Detailed Stats – Duration, original/deduplicated/compressed size, file counts
- Uptime Kuma Integration – Automatic push per host after each backup
- Stale Detection – Configurable threshold (default: 26h) marks missed backups
- Auto-Refresh – Dashboard updates every 30 seconds
- Dark Theme – Clean, modern UI with status-colored indicators
- Zero Config – Hosts auto-register on first push, or add manually via UI
## Quick Start

```bash
# Clone
git clone https://github.com/feldjaeger/backup-monitor.git
cd backup-monitor

# Start
docker compose up -d

# Open
open http://localhost:9999
```
## Docker Compose

```yaml
services:
  backup-monitor:
    build: .
    container_name: backup-monitor
    restart: always
    ports:
      - "9999:9999"
    environment:
      - MONGO_URI=mongodb://mongo:27017
      - STALE_HOURS=26  # Hours before a host is marked "stale"
    depends_on:
      - mongo

  mongo:
    image: mongo:4.4  # Use 7+ if your CPU supports AVX
    container_name: backup-mongo
    restart: always
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:
```
## Push API

After each backup, send a POST request:

```bash
# Minimal push (just hostname + status)
curl -X POST "http://localhost:9999/api/push?host=myserver&status=ok"

# Full push with stats (JSON)
curl -X POST -H "Content-Type: application/json" \
  -d '{
    "host": "myserver",
    "status": "ok",
    "duration_sec": 342,
    "original_size": 5368709120,
    "deduplicated_size": 104857600,
    "compressed_size": 83886080,
    "nfiles_new": 47,
    "nfiles_changed": 12,
    "message": "Backup completed successfully"
  }' \
  http://localhost:9999/api/push
```
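If you'd rather push from a script than from curl, the same request can be made with the Python standard library alone. This is a minimal sketch; `build_push_payload` and `send_push` are illustrative helper names, and the field names simply mirror the JSON example above:

```python
import json
import urllib.request


def build_push_payload(host, status, **stats):
    """Build the JSON body for /api/push (fields as in the curl example)."""
    payload = {"host": host, "status": status}
    payload.update(stats)  # e.g. duration_sec, original_size, nfiles_new, ...
    return payload


def send_push(base_url, payload, timeout=10):
    """POST the payload to <base_url>/api/push and return the HTTP status code."""
    req = urllib.request.Request(
        f"{base_url}/api/push",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status


payload = build_push_payload("myserver", "ok", duration_sec=342, original_size=5368709120)
# send_push("http://localhost:9999", payload)  # requires a running instance
```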
## Borgmatic Integration

Add to your `borgmatic.yml` (a literal block scalar `|` is used for the `after_backup` hook so that the embedded multi-line Python script keeps its newlines):

```yaml
after_backup:
  - |
    bash -c '
    STATS=$(borgmatic info --archive latest --json 2>/dev/null | python3 -c "
    import sys,json
    d=json.load(sys.stdin)[0][\"archives\"][-1]
    s=d.get(\"stats\",{})
    print(json.dumps({
    \"host\":\"$(hostname)\",
    \"status\":\"ok\",
    \"duration_sec\":int(s.get(\"duration\",0)),
    \"original_size\":s.get(\"original_size\",0),
    \"deduplicated_size\":s.get(\"deduplicated_size\",0),
    \"compressed_size\":s.get(\"compressed_size\",0),
    \"nfiles_new\":s.get(\"nfiles\",0)
    }))" 2>/dev/null || echo "{\"host\":\"$(hostname)\",\"status\":\"ok\"}");
    curl -fsS -m 10 -X POST -H "Content-Type: application/json" -d "$STATS" "http://YOUR_SERVER:9999/api/push" || true
    '
on_error:
  - >-
    curl -fsS -m 10 -X POST -H "Content-Type: application/json"
    -d '{"host":"'$(hostname)'","status":"error","message":"Backup failed"}'
    "http://YOUR_SERVER:9999/api/push" || true
```
## API Reference

| Method | Endpoint | Description |
|---|---|---|
| GET | `/` | Web UI |
| GET/POST | `/api/push` | Push backup status (query params or JSON) |
| GET | `/api/hosts` | List all hosts with current status |
| POST | `/api/hosts` | Add a host: `{"name": "...", "kuma_push_url": "..."}` |
| PUT | `/api/hosts/<name>` | Update host: `{"enabled": bool, "kuma_push_url": "..."}` |
| DELETE | `/api/hosts/<name>` | Delete host and all history |
| GET | `/api/history/<host>?days=30` | Backup history for a host |
| GET | `/api/calendar/<host>?days=30` | Calendar heatmap data (aggregated by day) |
| GET | `/api/summary` | Dashboard summary (counts, today stats) |
## Uptime Kuma Integration

- Create a Push monitor in Uptime Kuma for each host
- Copy the push URL (e.g. `https://status.example.com/api/push/borg-myserver?status=up&msg=OK`)
- In Backup Monitor: click a host → Edit → paste the Kuma Push URL
- After each backup push, Backup Monitor automatically forwards the status to Uptime Kuma
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `MONGO_URI` | `mongodb://mongo:27017` | MongoDB connection string |
| `STALE_HOURS` | `26` | Hours without a backup before a host is marked stale |
| `WEBHOOK_URLS` | (empty) | Comma-separated webhook URLs for notifications |
| `WEBHOOK_EVENTS` | `error,stale` | Events that trigger webhooks |
## Prometheus Integration

The `/metrics` endpoint exposes backup metrics in Prometheus format:

```text
backup_hosts_total 21
backup_host_status{host="myserver"} 1            # 1=ok, 0=error, -1=stale
backup_host_last_seconds{host="myserver"} 3600   # seconds since last backup
backup_host_duration_seconds{host="myserver"} 342
backup_host_size_bytes{host="myserver"} 5368709120
backup_host_dedup_bytes{host="myserver"} 104857600
backup_host_files_new{host="myserver"} 47
backup_today_total 22
backup_today_bytes 47280909120
```

Add to your `prometheus.yml`:

```yaml
scrape_configs:
  - job_name: 'backup-monitor'
    static_configs:
      - targets: ['backup-monitor:9999']
    scrape_interval: 60s
```
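With these metrics in Prometheus, alerting on failed or stale backups is straightforward. A possible rule file, as a sketch: the metric names are taken from the sample output above, while the rule names, thresholds, and labels are illustrative:

```yaml
groups:
  - name: backup-monitor
    rules:
      # Fires when a host reports error (0) or is marked stale (-1)
      - alert: BackupFailedOrStale
        expr: backup_host_status != 1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Backup problem on {{ $labels.host }}"
      # Belt-and-braces check on the age of the last backup (26h, matching STALE_HOURS)
      - alert: BackupTooOld
        expr: backup_host_last_seconds > 26 * 3600
        labels:
          severity: warning
        annotations:
          summary: "No backup on {{ $labels.host }} for over 26 hours"
```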
## Webhook Notifications

Set `WEBHOOK_URLS` to receive notifications on backup errors or stale hosts:

```yaml
environment:
  - WEBHOOK_URLS=https://n8n.example.com/webhook/backup-alert,https://other.webhook/endpoint
  - WEBHOOK_EVENTS=error,stale  # which events trigger webhooks
```

Webhook payload:

```json
{
  "event": "error",
  "host": "myserver",
  "message": "Backup failed",
  "timestamp": "2026-04-05T06:00:00Z"
}
```
Events:

- `error` – Fired immediately when a backup reports status `"error"`
- `stale` – Fired when a host exceeds `STALE_HOURS` without a backup (once per host, resets on the next successful backup)
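A webhook receiver does not need a framework. The sketch below is a hypothetical stdlib-only endpoint that accepts the payload shown above and prints a one-line alert; `format_alert`, the handler, and the port are all illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def format_alert(payload):
    """Turn a webhook payload (fields as shown above) into a one-line alert."""
    return "[{event}] {host}: {message} ({timestamp})".format(
        event=payload.get("event", "?"),
        host=payload.get("host", "?"),
        message=payload.get("message", ""),
        timestamp=payload.get("timestamp", ""),
    )


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print(format_alert(payload))  # replace with a mail/chat notification
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```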
## Data Retention
- History entries are automatically deleted after 90 days (MongoDB TTL index)
- Hosts are never auto-deleted – remove them manually via UI or API
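For reference, the 90-day retention described above corresponds to a standard MongoDB TTL index. The snippet below is purely illustrative; the actual collection and field names used by this project may differ:

```javascript
// Illustrative only: a TTL index that expires documents 90 days after
// their "timestamp" value.
db.history.createIndex(
  { timestamp: 1 },
  { expireAfterSeconds: 90 * 24 * 3600 }
)
```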
## Screenshots

### Dashboard

Dark-themed overview with summary cards, host grid with status badges, and 14-day minibar charts per host.

### Host Detail

Slide-out drawer with 30-day calendar heatmap, data volume chart, and detailed backup history table.
## Tech Stack
- Backend: Python 3.12, Flask, Gunicorn
- Database: MongoDB 4.4+
- Frontend: Vanilla JS, CSS (no framework dependencies)
## License
MIT