Borg and Duplicacy are both popular choices for backup software. I have historically used both, but am currently using Borg. I periodically reassess my tooling to determine whether it is still the right choice for me.

In this post, I compare only the speed of backup and restore (restore speed, in particular, can be important) for Borg 1.2.6 and Duplicacy CLI 3.2.0.

Initial Backup

I backed up the same repository of data (524MB of mixed-size images and videos) over SSH/SFTP to my rsync.net account. The repositories were freshly initialised, containing no prior data. The backup source lives on a 3-disk RAIDZ1 pool of spinning HDDs, and the internet connection is 40Mbps upstream.
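For reference, a minimal sketch of how the repositories were initialised (the encryption mode and the Duplicacy snapshot ID are assumptions on my part; the storage URLs match those used in the runs below):

# Borg: create an encrypted repository on rsync.net
borg init --encryption=repokey ssh://[email protected]/./data

# Duplicacy: initialise /dataset against the SFTP storage
# ("data" is an illustrative snapshot ID)
cd /dataset
duplicacy init data sftp://[email protected]/data-dup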

Borg

root@Borg:/dataset> borg create ssh://[email protected]/./data::data Folder --stats --progress
Enter passphrase for key ssh://[email protected]/./data::data:

------------------------------------------------------------------------------
Repository: ssh://[email protected]/./data::data
Archive name: data
Archive fingerprint: 7dd72b9117e3eb9431331ad6ba944e83d476b0eea8fc7ee1ecf425f214418e65
Time (start): Tue, 2023-09-12 11:21:43
Time (end):   Tue, 2023-09-12 11:25:19
Duration: 3 minutes 36.44 seconds
Number of files: 537
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:              524.62 MB            522.53 MB            517.97 MB
All archives:              524.61 MB            522.53 MB            518.03 MB

                       Unique chunks         Total chunks
Chunk index:                     659                  665
------------------------------------------------------------------------------

Duplicacy

root@Duplicacy:/dataset> ~/duplicacy backup -stats
Backup for /dataset at revision 1 completed
Files: 537 total, 512,319K bytes; 537 new, 512,319K bytes
File chunks: 95 total, 512,319K bytes; 95 new, 512,319K bytes, 511,618K bytes uploaded
Metadata chunks: 3 total, 76K bytes; 3 new, 76K bytes, 58K bytes uploaded
All chunks: 98 total, 512,395K bytes; 98 new, 512,395K bytes, 511,676K bytes uploaded
Total running time: 00:10:29
Borg      Duplicacy
3:36      10:29
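As a sanity check (not part of the timings), each tool can list what now exists in storage:

borg list ssh://[email protected]/./data    # lists archives in the repository
cd /dataset && duplicacy list              # lists backup revisions for this repository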

Subsequent Backup

I re-ran the backup with no changed data, to see how quickly each tool iterates over the existing files.

Borg

root@Borg:/dataset> borg create ssh://[email protected]/./data::data2 Folder --stats --progress
Enter passphrase for key ssh://[email protected]/./data::data:

------------------------------------------------------------------------------
Repository: ssh://[email protected]/./data::data
Archive name: data2
Archive fingerprint: 6cc817e924504b9d2a6bd8c225c9809942ffaa323d7d41cd2f02ddd3ad9efea2
Time (start): Tue, 2023-09-12 11:54:15
Time (end):   Tue, 2023-09-12 11:54:16
Duration: 1.13 seconds
Number of files: 543
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:              524.69 MB            522.59 MB             64.54 kB
All archives:                1.05 GB              1.05 GB            518.15 MB

                       Unique chunks         Total chunks
Chunk index:                     668                 1336
------------------------------------------------------------------------------

Duplicacy

root@Duplicacy:/dataset> ~/duplicacy backup -stats
Storage set to sftp://[email protected]/data-dup
Enter the path of the private key file:/root/.ssh/rsyncnet
Enter storage password:************
Last backup at revision 1 found
Indexing /dataset
Parsing filter file /dataset/.duplicacy/filters
Loaded 0 include/exclude pattern(s)
Backup for /dataset at revision 2 completed
Files: 537 total, 512,319K bytes; 0 new, 0 bytes
File chunks: 95 total, 512,319K bytes; 0 new, 0 bytes, 0 bytes uploaded
Metadata chunks: 3 total, 76K bytes; 0 new, 0 bytes, 0 bytes uploaded
All chunks: 98 total, 512,395K bytes; 0 new, 0 bytes, 0 bytes uploaded
Total running time: 00:00:03
Borg      Duplicacy
0:01.13   0:03
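Borg's near-instant second pass is largely its local files cache at work: by default it skips any file whose ctime, size and inode match the previous run, so unchanged files are never re-read. This behaviour is tunable via --files-cache; a hypothetical example (archive name illustrative) if ctime churn were forcing unnecessary re-reads:

# compare on mtime rather than ctime when deciding whether a file has changed
borg create --files-cache=mtime,size,inode ssh://[email protected]/./data::data3 Folder --stats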

Restore

I then completely removed and restored the same dataset. There was not much interesting output to view in the terminal, so I have simply tabulated the results here for comparison (the commands used are sketched after the table):

Borg      Duplicacy
2:07      6:08
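The restores were performed with commands along these lines (a sketch; the working directories and revision/archive names are illustrative):

# Borg: extract the archive into the current directory
cd /dataset && borg extract ssh://[email protected]/./data::data

# Duplicacy: restore revision 1 in place (run from the repository root)
cd /dataset && duplicacy restore -r 1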

Conclusion

This is a brief, non-exhaustive test intended only to give a rough speed comparison between the two. However, in this test Borg was (surprisingly) the clear winner. Historically I've seen Duplicacy be much faster, but that no longer appears to be the case (at least in this use case/scenario).

Edit: There seems to be a trend of slow restores, as per https://github.com/deajan/backup-bench