Duplicity is a backup tool that allows for encrypted, incremental backups. It uses GPG for encryption and can store backups on various remote or local backends, such as FTP, SSH, S3, Google Drive, and local directories.
Duplicity uses GnuPG (GPG) for encryption. It encrypts backup data before transmission, so even if a third party intercepts the backup, they cannot read it without the GPG key. You can specify which GPG key to use for encryption.
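A typical encrypted backup command looks like the following sketch; the key ID, user, server, and paths are placeholders:

```shell
# Encrypted backup of a local directory to a remote server over SCP.
# YOUR_GPG_KEY, user, remote.server, and the paths are placeholders.
duplicity --encrypt-key YOUR_GPG_KEY \
    /path/to/source scp://user@remote.server//path/to/backup
```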
You can restore your backup using the duplicity restore command:
duplicity restore scp://user@remote.server//path/to/backup /path/to/restore
This command decrypts and restores the backup to the specified directory. Make sure the private GPG key used during the backup is available, since it is required for decryption.
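To restore a single file or subdirectory rather than the whole backup, duplicity provides the --file-to-restore option; the file path, host, and target below are placeholders:

```shell
# Restore only etc/nginx/nginx.conf from the backup set.
# The path is given relative to the backup root.
duplicity restore --file-to-restore etc/nginx/nginx.conf \
    scp://user@remote.server//path/to/backup /tmp/nginx.conf
```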
Duplicity supports a wide range of backends, including local directories, FTP, SSH/SCP, Amazon S3, and Google Drive. You specify the backend in the backup command using the appropriate URL syntax (e.g., scp://, ftp://, file://).
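A few common backend URL forms, for illustration (hosts, buckets, and paths are placeholders):

```shell
# Common backend URL forms:
#   file:///path/to/local/backup      local directory
#   scp://user@host//absolute/path    SSH/SCP (double slash marks an absolute path)
#   ftp://user@host/path              FTP
#   s3://bucket_name/path/to/backup   Amazon S3
duplicity /path/to/source file:///path/to/local/backup
```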
An incremental backup means only the files that have changed since the last backup (whether it was full or incremental) are backed up. This reduces the amount of data transferred and stored, saving bandwidth and storage space.
When you run a backup, Duplicity compares the current source directory against the previous backup and transfers only the differences. If no previous backup exists, it performs a full backup.
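You can also force the backup type explicitly; duplicity accepts full and incremental as the first argument (paths and host are placeholders):

```shell
# Force a full backup even if a previous chain exists.
duplicity full /path/to/source scp://user@server//path/to/destination

# Force an incremental backup (this fails if no full backup exists yet).
duplicity incremental /path/to/source scp://user@server//path/to/destination
```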
You can schedule automatic backups using cron jobs. Here’s an example of scheduling a daily backup:
Open your crontab editor:
crontab -e
Add a line like this to run the backup at midnight daily:
0 0 * * * duplicity --encrypt-key YOUR_GPG_KEY /path/to/source scp://user@server//path/to/destination
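In cron there is no terminal to prompt for the GPG passphrase, so duplicity reads it from the PASSPHRASE environment variable. A small wrapper script called from cron is a common pattern; the key ID, passphrase file, and paths here are placeholders:

```shell
#!/bin/sh
# backup.sh - called from cron; duplicity reads the passphrase from the
# PASSPHRASE environment variable when no terminal is available.
# YOUR_GPG_KEY, the passphrase file, and the paths are placeholders.
export PASSPHRASE="$(cat /root/.duplicity-passphrase)"
duplicity --encrypt-key YOUR_GPG_KEY \
    /path/to/source scp://user@server//path/to/destination
unset PASSPHRASE
```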
To exclude specific files or directories from a backup, use the --exclude option:
duplicity --exclude /path/to/exclude --encrypt-key YOUR_GPG_KEY /path/to/source scp://user@server//path/to/destination
You can also exclude multiple files or directories by passing the --exclude option more than once.
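For example, several --exclude options can be combined, or the patterns can be collected in a file with --exclude-filelist (paths and host are placeholders):

```shell
# Multiple --exclude options are evaluated in order.
duplicity \
    --exclude /path/to/source/cache \
    --exclude /path/to/source/tmp \
    --encrypt-key YOUR_GPG_KEY \
    /path/to/source scp://user@server//path/to/destination

# Alternatively, keep the patterns in a file, one per line.
duplicity --exclude-filelist /etc/duplicity-excludes.txt \
    --encrypt-key YOUR_GPG_KEY \
    /path/to/source scp://user@server//path/to/destination
```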
To ensure that your backup is complete and valid, you can use the verify command:
duplicity verify scp://user@server//path/to/backup /path/to/source
This checks if the current state of the source directory matches the backup.
You can list the files stored in a backup with:
duplicity list-current-files scp://user@server//path/to/backup
Duplicity can resume backups automatically if an interruption occurs, as it tracks the state of each backup. Just run the same backup command again, and it will pick up where it left off.
You can check the size of your backup using:
duplicity collection-status scp://user@server//path/to/backup
This will display information about your backup chains and their sizes.
You can specify retention policies to delete older backups. For example, to remove backups older than 6 months:
duplicity remove-older-than 6M --force scp://user@server//path/to/backup
This helps manage storage space by automatically deleting old backups.
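Another retention style duplicity supports is keeping a fixed number of full backup chains with remove-all-but-n-full; the host and path are placeholders:

```shell
# Keep the 3 most recent full chains (and their increments); delete the rest.
duplicity remove-all-but-n-full 3 --force scp://user@server//path/to/backup
```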
By default, Duplicity compresses backups using gzip. You can disable compression entirely with the --no-compression option. When encryption is enabled, compression is handled by GPG, so a different algorithm can be selected by passing GPG options, for example --gpg-options '--compress-algo=bzip2'.
You can run multiple backup commands, each pointing to a different destination. For example, one backup could go to a local hard drive and another to Amazon S3:
duplicity /path/to/source file:///path/to/local/backup
duplicity /path/to/source s3://bucket_name/path/to/backup
If you’re backing up a large dataset, you can split the backup into volumes using the --volsize option. This creates smaller backup chunks that are easier to transfer and manage:
duplicity --volsize 200 /path/to/source scp://user@server//path/to/destination
This creates 200MB volumes.
Duplicity can handle certain levels of corruption, depending on the damage. If an incremental backup gets corrupted, it will affect all future increments from that point on. Regular verification and maintaining multiple backup chains can help mitigate this risk.
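One way to keep backup chains short is to start a new full backup periodically with --full-if-older-than; the interval, key ID, host, and paths are placeholders:

```shell
# Do an incremental backup, but start a fresh full chain
# if the last full backup is more than one month old.
duplicity --full-if-older-than 1M --encrypt-key YOUR_GPG_KEY \
    /path/to/source scp://user@server//path/to/destination
```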