Why I Set This Up
I run several servers—two Proxmox nodes, a Synology NAS, and a few VMs scattered across different networks. Each one runs scheduled tasks: backups, container cleanups, DNS updates, log rotations. For a long time, I only knew these jobs failed when something downstream broke. A backup didn’t run for three days? I’d find out when I needed to restore something.
I needed a way to know immediately when something went wrong, without checking logs manually or relying on email that I might not see for hours. I wanted push notifications on my phone, but I didn’t want to use a third-party service that would route my server alerts through someone else’s infrastructure.

That’s when I set up Gotify—a self-hosted notification server that I could point all my cron jobs at.
What Gotify Actually Does
Gotify is a notification server you run yourself. It has a simple API that accepts messages via HTTP, and it pushes those messages to clients—either a mobile app or a web interface. When a script or cron job sends a message to Gotify, my phone vibrates and shows the notification.
It’s not fancy. It doesn’t integrate with a thousand services or offer dashboards. It just receives messages and delivers them. That simplicity is exactly what I needed.
My Setup
I run Gotify in a Docker container on one of my Proxmox VMs. I chose Docker because it’s fast to deploy and easy to update. The container uses a persistent volume for data, so messages and application tokens survive restarts.
Here’s the Docker Compose configuration I use:
version: "3"
services:
  gotify:
    image: gotify/server
    container_name: gotify
    volumes:
      - /mnt/docker-data/gotify/data:/app/data
    ports:
      - "9080:80"
    restart: unless-stopped
I expose Gotify on port 9080 internally. I don’t expose it directly to the internet—it sits behind Traefik, which handles SSL and routing.
Reverse Proxy and SSL
I use Traefik as my reverse proxy. Gotify needs to be accessible from multiple servers, so I gave it a subdomain and configured Let’s Encrypt for SSL. Without SSL, the Android app throws warnings, and I didn’t want to deal with certificate exceptions.
I added these labels to the Docker Compose file:
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.gotify.rule=Host(`gotify.internal.vipinpg.com`)"
  - "traefik.http.routers.gotify.entrypoints=websecure"
  - "traefik.http.routers.gotify.tls.certresolver=letsencrypt"
  - "traefik.http.services.gotify.loadbalancer.server.port=80"
After deploying, I accessed the web UI at https://gotify.internal.vipinpg.com. The default login is admin/admin, which I changed immediately.
Creating Applications and Tokens
In Gotify, each “application” represents a stream of notifications. I created separate applications for different purposes:
- Backups – for backup script notifications
- Cron Failures – for any cron job that exits with an error
- DNS Updates – for dynamic DNS update scripts
- System Alerts – for disk space warnings and other system issues
Each application gets its own token, which acts as the authentication key for sending messages. I keep these tokens in environment variables on each server, so I don’t hardcode them in scripts.
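As an example, a single line in /etc/environment makes the token available server-wide without touching any script (the value shown here is a placeholder, not a real token):

```shell
# /etc/environment -- one token per server; the value below is a placeholder
GOTIFY_TOKEN=AXyz123ExampleToken
```

Rotating a token then means editing one file per server instead of every script.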
Sending Notifications from Scripts
The simplest way to send a notification is with a curl command. Here’s the basic structure:
curl -X POST "https://gotify.internal.vipinpg.com/message?token=YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"title": "Backup Failed",
"message": "Daily backup script exited with code 1",
"priority": 9
}'
Priority values range from 0 to 10. I use 9 or 10 for failures that need immediate attention, and 5 or lower for routine confirmations.
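To keep those choices consistent across scripts, a small helper can map event types to priorities. This is my own convention, not anything built into Gotify; the function name and tiers are illustrative:

```shell
#!/bin/bash
# Illustrative helper: map an event type to the Gotify priority I use for it.
# The tiers mirror the scheme described above; the names are my own convention.
priority_for() {
    case "$1" in
        failure) echo 10 ;;  # needs immediate attention
        warning) echo 7 ;;   # worth seeing soon, not urgent
        success) echo 3 ;;   # routine confirmation, safe to ignore
        *)       echo 5 ;;   # Gotify's default priority
    esac
}
```

A script can then pass `$(priority_for failure)` wherever a priority value is expected, and retuning the tiers later only touches this one function.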
Reusable Notification Function
Writing the full curl command in every script got repetitive, so I created a reusable function. I saved this in /usr/local/bin/notify_gotify on each server:
#!/bin/bash
# Send a notification to my Gotify server.
# Usage: notify_gotify <title> <message> [priority]
GOTIFY_URL="https://gotify.internal.vipinpg.com"
GOTIFY_TOKEN="${GOTIFY_TOKEN:-}"
if [ -z "$GOTIFY_TOKEN" ]; then
echo "Error: GOTIFY_TOKEN environment variable not set" >&2
exit 1
fi
TITLE="$1"
MESSAGE="$2"
PRIORITY="${3:-5}"  # Gotify's default priority is 5
# Note: the title and message are interpolated straight into the JSON body,
# so they must not contain unescaped double quotes or raw newlines.
curl -s -X POST "${GOTIFY_URL}/message?token=${GOTIFY_TOKEN}" \
-H "Content-Type: application/json" \
-d "{
\"title\": \"${TITLE}\",
\"message\": \"${MESSAGE}\",
\"priority\": ${PRIORITY}
}"
I made it executable with chmod +x /usr/local/bin/notify_gotify. Now I can send notifications with:
notify_gotify "Backup Failed" "Restic backup exited with code 1" 9
Monitoring Cron Job Failures
For cron jobs, I wanted automatic failure notifications without modifying every script. I created a wrapper script that runs the job, checks the exit code, and sends a notification if it fails.
I saved this as /usr/local/bin/cron_wrapper:
#!/bin/bash
# Run a command; send a Gotify alert if it exits non-zero.
# Usage: cron_wrapper <job name> <command> [args...]
JOB_NAME="$1"
shift
# Run the command directly with its remaining arguments. Using "$@" instead
# of eval keeps quoting safe for paths and arguments with spaces.
OUTPUT=$("$@" 2>&1)
EXIT_CODE=$?
if [ $EXIT_CODE -ne 0 ]; then
# Escape double quotes and real newlines so the captured output stays
# valid inside the JSON body that notify_gotify builds
OUTPUT=${OUTPUT//\"/\\\"}
OUTPUT=${OUTPUT//$'\n'/\\n}
notify_gotify "Cron Failure: ${JOB_NAME}" \
"Command failed with exit code ${EXIT_CODE}\n\nOutput:\n${OUTPUT}" \
9
fi
exit $EXIT_CODE
In my crontab, I wrap critical jobs with this script:
0 2 * * * /usr/local/bin/cron_wrapper "Daily Backup" "/usr/local/bin/backup.sh"
If the backup script fails, I get a notification with the job name, exit code, and any output it produced.
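One caveat: cron runs jobs with a minimal environment, and whether it picks up /etc/environment varies by distribution. Setting PATH and the token at the top of the crontab sidesteps that; the token value below is a placeholder:

```shell
# Top of crontab: set PATH and the token explicitly, since cron's
# environment is minimal (the token value is a placeholder)
GOTIFY_TOKEN=AXyz123ExampleToken
PATH=/usr/local/bin:/usr/bin:/bin

0 2 * * * cron_wrapper "Daily Backup" /usr/local/bin/backup.sh
```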
Mobile App Setup
I installed the Gotify Android app from F-Droid. After opening it, I entered my Gotify server URL and logged in with my user credentials.
The app maintains a WebSocket connection to the server for real-time notifications. One issue I ran into: Android’s battery optimization would kill the app during sleep, so I stopped receiving notifications. I fixed this by going to Settings → Battery → Battery Optimization, finding Gotify, and setting it to “Not optimised.”
After that change, notifications arrived instantly.
What Didn’t Work
First Attempt: Email Notifications
Before Gotify, I tried setting up email notifications for cron failures. I configured msmtp on each server to send alerts to my email. It worked, but I didn't check that inbox often enough, and the alerts got buried under spam and other messages.
Initial Token Management
At first, I hardcoded the Gotify token directly in scripts. This became a problem when I rotated tokens—I had to update every script across every server. Moving tokens to environment variables in /etc/environment or a dedicated config file solved this.
Trailing Slashes in URLs
When I first configured Traefik, I used a URL with a trailing slash in my scripts. This caused 404 errors when sending notifications. Gotify expects the URL without a trailing slash, so I removed it from the GOTIFY_URL variable.
What I Learned
Gotify is intentionally simple, which makes it reliable. It does one thing—accept messages and deliver them—and does it well. I don’t need to rely on external services, and I don’t have to worry about rate limits or API changes.
The wrapper script approach works better than modifying every cron job individually. I can add or remove jobs without touching notification logic.
Priority levels matter. I initially sent everything at priority 10, which meant my phone was buzzing constantly for routine confirmations. Now I use priority 9-10 only for failures, and 3-5 for success messages. I can ignore low-priority notifications until I have time to check them.
Running Gotify behind a reverse proxy with SSL is essential if you’re accessing it from multiple networks. The Android app will work without SSL, but it throws persistent warnings, and I didn’t want to expose an unencrypted notification service.
How I Use It Now
Every server I manage has the notify_gotify and cron_wrapper scripts installed. I set the GOTIFY_TOKEN environment variable on each server, pointing to the appropriate application in Gotify.
When a cron job fails, I see the notification within seconds. I can check the error output directly from my phone and decide if I need to investigate immediately or wait until I’m at my desk.
I also use Gotify for non-cron alerts. My disk space monitoring script sends a notification when usage crosses 90%. My DNS update script notifies me when my public IP changes. My container monitoring setup sends alerts when a container stops unexpectedly.
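As a sketch, the disk space check is only a few lines on top of notify_gotify. The 90% threshold matches what I described above; the exact message text and the choice of priority 8 are illustrative:

```shell
#!/bin/bash
# Sketch of the disk space alert. Assumes notify_gotify (above) is on PATH.
THRESHOLD=90

over_threshold() {
    # True when usage (first arg) meets or exceeds the threshold (second arg)
    [ "$1" -ge "$2" ]
}

# Percentage used on the root filesystem, digits only (e.g. "42")
USAGE=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')

if over_threshold "$USAGE" "$THRESHOLD"; then
    # Priority 8: high enough to notice, just below my failure tier
    notify_gotify "Disk Space Warning" "Root filesystem at ${USAGE}% used" 8
fi
```

Run from cron every few minutes, it stays silent until the threshold is crossed, which keeps the notification stream quiet by default.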
Gotify doesn’t replace proper logging or monitoring, but it fills the gap between “something went wrong” and “I noticed something went wrong.” It’s the difference between discovering a failed backup three days later and knowing about it three seconds after it happens.