
Borgitory – A Modern Web Interface for BorgBackup Management

Managing BorgBackup repositories through command line tools works great for simple setups, but when you’re dealing with multiple repositories, scheduled backups, and want to browse archive contents, the terminal quickly becomes limiting. Commercial backup solutions either cost too much or lack the flexibility of Borg’s deduplication and compression. I set out to build a comprehensive web interface that
would give me the control I wanted without sacrificing Borg’s powerful features.

The result is Borgitory – a self-hosted web application that brings modern UI/UX to BorgBackup operations while maintaining all the security and efficiency that makes Borg great.

Repository Management Made Simple

Setting up repositories through Borgitory eliminates the guesswork of command-line Borg initialization. The interface validates connections in real-time and stores encrypted passphrases securely using
Fernet encryption. I can add local repositories, remote SSH locations, or any path accessible to the Docker container.
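
For a sense of what the form replaces, the manual equivalents look roughly like this; the repository paths, hostname, and encryption mode here are placeholders, not the app's internals:

# initialize a local repository with repokey encryption
borg init --encryption=repokey /repos/documents

# initialize a repository on a remote host over SSH
borg init --encryption=repokey ssh://backupuser@nas.example.com/./borg/documents

# confirm the repository is reachable and list its archives
borg list /repos/documents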

There's also a repository overview page that shows backup status, archive counts, and storage usage at a glance. No more running borg list commands to figure out what's where.

Interactive Archive Browsing

One feature I’m particularly proud of is the archive browser. Using FUSE mounting, Borgitory mounts archives as filesystems and provides a web interface for navigating directories just like a file manager.
This was challenging to implement in a Docker environment – requiring --cap-add SYS_ADMIN and --device /dev/fuse – but the result is seamless file exploration without extracting entire archives.
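
Conceptually it's the same cycle as mounting an archive by hand; a minimal sketch, with an illustrative archive name and paths:

mkdir -p /tmp/archive-view
borg mount /repos/documents::docs-2024-01-15 /tmp/archive-view
ls /tmp/archive-view                       # browse like a normal directory
cp /tmp/archive-view/home/user/notes.txt . # grab one file, no full extract
borg umount /tmp/archive-view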

Files can be downloaded directly from mounted archives, streaming efficiently even for large files. Multiple downloads work simultaneously, and there’s no temporary storage overhead since files stream
directly from the FUSE-mounted filesystem.

Automated Scheduling with Composite Jobs

Rather than managing separate cron jobs for backup, pruning, and cloud sync operations, Borgitory introduces “composite jobs” that chain multiple tasks together. A single scheduled job can backup data,
clean up old archives according to retention policies, sync to cloud storage, and send push notifications when complete.

The scheduling interface uses cron expressions but provides common presets for daily, weekly, and monthly backups. Each schedule can be enabled/disabled independently and shows the next execution time.
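
For reference, the daily/weekly/monthly presets boil down to ordinary cron expressions along these lines (the times are just examples, not Borgitory's defaults):

0 2 * * *    # daily at 02:00
0 3 * * 0    # weekly, on Sundays at 03:00
0 4 1 * *    # monthly, on the 1st at 04:00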

Real-time Job Monitoring

All backup operations run in isolated Docker containers and stream progress updates via Server-Sent Events. The job history shows detailed logs for each task within composite jobs, with expandable output
sections and progress indicators.

Failed jobs highlight errors prominently, and completed jobs show statistics like data processed, compression ratios, and execution time. The job manager handles queuing and prevents resource conflicts
when multiple operations are scheduled simultaneously.

Flexible Cleanup Policies

Archive cleanup goes beyond simple “keep N days” rules. Advanced retention strategies can keep different numbers of daily, weekly, monthly, and yearly archives – just like Borg’s native pruning but
configured through an intuitive interface.

Cleanup operations can run independently or as part of scheduled composite jobs. The interface shows exactly which archives will be deleted and estimates space savings before execution.
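
Under the hood this maps onto Borg's own prune options; a rough equivalent of such a policy, with illustrative retention counts and repository path, looks like:

# preview which archives the policy would remove
borg prune --list --dry-run \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --keep-yearly 2 \
  /repos/documents

# drop --dry-run to actually delete the selected archives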

Cloud Storage Integration

Using Rclone under the hood, Borgitory syncs repositories to S3-compatible storage after backups complete. The cloud sync configuration supports multiple providers and includes connection testing to validate credentials.

Sync operations track transfer progress and can run manually or automatically after successful backups. This provides offsite storage without requiring separate backup tools.
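
At the rclone level, a sync like this amounts to something along these lines, assuming an S3-compatible remote named s3backup has already been configured (remote, bucket, and paths are placeholders):

rclone lsd s3backup:    # quick connection and credentials check
rclone sync /repos/documents s3backup:my-bucket/borg/documents --progress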

Push Notifications

Pushover integration sends notifications to mobile devices when jobs complete, fail, or require attention. Rather than checking logs manually, I get immediate alerts about backup status with configurable
triggers for success, failure, or both.
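
Pushover itself only needs a single HTTPS POST, so the notification step amounts to something like this (the token, user key, and message are placeholders):

curl -s \
  --form-string "token=APP_TOKEN" \
  --form-string "user=USER_KEY" \
  --form-string "message=Borgitory: backup job completed successfully" \
  https://api.pushover.net/1/messages.json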

Repository Statistics

Basic statistics about a repository can be viewed through the UI, including how archive counts and storage usage change over time as archives are created.

Technical Implementation

The backend uses FastAPI with SQLite for job history and configuration storage. The frontend leverages HTMX for dynamic updates without JavaScript frameworks, creating a responsive interface that works
well on mobile devices. All sensitive data like passphrases and API keys use Fernet encryption at rest.

Docker deployment includes pre-built images on Docker Hub, eliminating the need to clone and build from source. The container requires specific capabilities for FUSE mounting but otherwise runs as a
standard web service.

Getting Started

Installation is straightforward with Docker:

docker run -d \
  -p 8000:8000 \
  -v ./data:/app/data \
  -v /path/to/backup/sources:/backup/sources:ro \
  -v /path/to/borg/repos:/repos \
  --cap-add SYS_ADMIN \
  --device /dev/fuse \
  --name borgitory \
  mlapaglia/borgitory:latest

The interface is available at http://localhost:8000 and creates the first admin account on initial setup. From there, repository management, scheduling, and monitoring are all handled through the web
interface.

The project is open source and available at https://github.com/mlapaglia/Borgitory with comprehensive API documentation at /docs. Whether you’re backing up a single machine or managing multiple
repositories across different systems, Borgitory brings the power of BorgBackup to a modern web interface without sacrificing the flexibility that makes Borg great.

Self Hosted Email Server With Unraid and Poste.IO

I use NginxProxyManager as the main entry point for web traffic onto my local server. I use it exclusively as a proxy host for multiple subdomains, each of which gets proxied to a Docker container. Adding a new service is mostly a matter of setting up the Docker container, using NPM to create an SSL certificate with Let's Encrypt, and directing traffic from that subdomain to the container.

A few weeks ago I randomly discovered Poste.io through the community apps page of unraid. It’s an all-in-one dockerized email server. Since Unraid has a template for it, installing was a breeze:

Community Applications plugin has a template ready to go

Email requires more ports than just 80/443, but following their documentation I was able to get it up and running. Thanks to my local ISP I'm able to have port 25 and all the rest unblocked; most ISPs block these ports by default.
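
For a rough idea of what that means in docker terms, here's a sketch of the run command with the usual mail ports forwarded: 25/465/587 for SMTP delivery and submission, 143/993 for IMAP, 110/995 for POP3, plus 80/443 for the web UI and certificate validation. The Unraid template and Poste.io's documentation are the authoritative list, and the appdata path is just an example:

docker run -d --name poste \
  -p 25:25 -p 465:465 -p 587:587 \
  -p 143:143 -p 993:993 \
  -p 110:110 -p 995:995 \
  -p 80:80 -p 443:443 \
  -v /mnt/user/appdata/poste:/data \
  analogic/poste.io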

slick looking admin console

The docker container comes complete with an admin site for managing users, domains, and server settings. The mail app isn't half bad either.

Once I tried to connect Thunderbird, though, I got SSL verification errors. I soon found out that even though NPM handles the SSL offloading for port 443 (HTTPS web traffic), it doesn't do the same for the mail ports (143, 993, etc.). Thunderbird was getting a generic mail.poste.io SSL certificate instead of one for mail.mattlapaglia.com!

Poste.io does support Let's Encrypt, but trying to run LE validation behind another LE instance (NPM) is problematic. When Poste.io's LE client tries to validate domain ownership, the validation request ends up hitting NPM instead of Poste.io, which says "uh, what, 404 for me I guess".

LE servers not able to validate that I own the domain name 🙁

I tried getting the LE functionality in Poste.io to work with the NPM LE but couldn’t come up with a solution that would result in automatic SSL certificate renewals in the future. Then I thought to myself, “NPM LE stores the certificates in the AppData folder of Unraid, I could use that!”

I went back to the Poste.io docker configuration page and added 4 paths to map:

  • ca.crt
  • server.key
  • server.crt
  • server-combined.crt

These files from the NPM appdata folder:

  • /mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-33/chain.pem
  • /mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-33/privkey.pem
  • /mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-33/cert.pem
  • /mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-33/fullchain.pem

were mapped to these paths inside the Poste.io container (the equivalent docker flags are sketched after the lists):

  • /etc/ssl/ca.crt
  • /etc/ssl/server.key
  • /etc/ssl/server.crt
  • /etc/ssl/server-combined.crt
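
Expressed as raw docker volume flags (what the Unraid path fields translate to), the additions to the Poste.io run command look roughly like this; npm-33 is just the certificate ID NPM happened to assign on my system:

  -v /mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-33/chain.pem:/etc/ssl/ca.crt \
  -v /mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-33/privkey.pem:/etc/ssl/server.key \
  -v /mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-33/cert.pem:/etc/ssl/server.crt \
  -v /mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-33/fullchain.pem:/etc/ssl/server-combined.crt \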

After this I was able to successfully connect to the email server from my computer and other devices! Now when NPM automatically renews the certificate for mail.mattlapaglia.com, Poste.io references the new files directly, with no manual intervention needed.