
Compression and Backup
How to compress backup files to < 10 GB
10 GB backup
There is probably only one way to shrink the backup size from around 120 GB to 10 GB: select the most important data for a separate backup.
There are several backup tools that let you select the directories and files you want to include in the backup. Some of them provide compression. You can start searching via this link:
help.ubuntu.com/community/BackupYourSystem
You can run this backup regularly.
Full backup
You can back up your complete system once in a great while, for example after major system upgrades or once per year, and keep this backup on an external drive that you store away from the computer (in case of fire or theft).
Status of the compression feature
Here is an example.
I have a Dropbox account and folder /home/Dropbox. I encrypt the whole /home directory and back up to Dropbox.
Dropbox plants a file, let’s say a copy of a Windows image, into my Dropbox directory. Dropbox then measures the size of the new blobs that are added on their servers. If the repository size has barely changed, Dropbox infers that a copy of that Windows image already exists on my computer; in effect, Dropbox has recovered one of my files. Repeat for other files, images, texts, sentences, messages, blobs, etc. The attacker only needs to be able to add or drop plaintext (or otherwise have some control over plaintext) and to measure the size of the ciphertext.
It’s client-side scanning like the one proposed by Apple, with the crypto prepared by restic!
In this case, dedup works somewhat similarly. But I suppose deduplication does not replace compression, which is why compression actually further reduces repository size.
Now, this is from some random Joe. Imagine what sophisticated attacks the NSA could mount.
Once you face a cunning adversary, the features you include in your software can become opportunities for that adversary.
SQL Backup 9
SQL Backup Pro offers four compression levels, described below. Generally, the smaller the resulting backup file, the slower the backup process.
Smaller backups save you valuable disk space. For example, if you achieve an average compression rate of 80%, you can store the backup for a 42.5 gigabyte (GB) database on an 8.5 GB dual-layer DVD-R disc. Smaller files can also be transferred more quickly over the network, which is particularly useful, for example, when you want to store backups off-site.
The compression level used to create a backup does not noticeably affect the time necessary to restore the backup.
The compression you can achieve depends upon the type of data stored in the database; if the database contains a lot of highly-compressible data, such as text and uncompressed images, you can achieve higher compression. For full backups, you can use the Compression Analyzer to perform a test on the databases to check which compression level will produce the best result for your requirements.
Compression level 4
Compression level 4 uses the LZMA compression algorithm. This compression level generates the smallest backup files in most cases, but it uses the most CPU cycles and takes the longest to complete.
Compression level 3
Compression level 3 uses the zlib compression algorithm.
On average, the backup process is 25% to 30% faster than when compression level 4 is used, and 27% to 35% fewer CPU cycles are used. Backup files are usually 5% to 7% larger.
Compression level 2
This compression level uses the zlib compression algorithm, and is a variation of compression level 3.
On average, the backup process is 15% to 25% faster than when compression level 3 is used, and 12% to 14% fewer CPU cycles are used. Backup files are usually 4% to 6% larger.
Compression level 1
This is the default compression level. It is the fastest compression, but results in larger backup files.
On average, the backup process is 10% to 20% faster than when compression level 2 is used, and 20% to 33% fewer CPU cycles are used. Backup files are usually 5% to 9% larger than those produced by compression level 2.
However, if a database contains frequently repeated values, compression level 1 can produce backup files that are smaller than those produced by compression level 2 or 3. For example, this may occur for a database that contains the results of SQL Server Profiler trace sessions.
Compression level 0
If you do not want to compress your backups, specify compression level 0 from the command line or extended stored procedure; in the graphical user interface, clear the Compress backup check box in the wizard. For example, you may want to do this if you require only encryption and you do not want to compress your backups.
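For illustration, here is a minimal sketch of how a level can be passed from T-SQL through SQL Backup Pro's extended stored procedure; the database name and destination path are placeholders, so check the SQL Backup documentation for the exact syntax your version expects:

-- Hypothetical example: full backup with a chosen compression level.
-- COMPRESSION = 0 disables compression; 1 to 4 select the levels described above.
EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [MyDatabase] TO DISK = [D:\Backups\MyDatabase.sqb] WITH COMPRESSION = 2"';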
Compression percentage
SQL Backup Pro calculates the percentage compression of a backup by comparing the size of the SQL Backup backup with the total database size.
For example, if a database comprises a 10 GB data file and a 3 GB transaction log file, and SQL Backup Pro generates a full backup of the database to create a backup file that is 3 GB, the compression for this backup is calculated as (1 − 3/13) × 100 ≈ 77%.
The compression percentage is displayed in the Activity History.
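As a sanity check of that arithmetic, the same calculation can be written as a few lines of T-SQL; the sizes are the GB figures from the example above:

-- Worked example: compression percentage = (1 - backup size / total database size) * 100
DECLARE @backup_gb decimal(9,2) = 3;   -- size of the compressed backup file
DECLARE @data_gb   decimal(9,2) = 10;  -- data file
DECLARE @log_gb    decimal(9,2) = 3;   -- transaction log file
SELECT CAST((1 - @backup_gb / (@data_gb + @log_gb)) * 100 AS decimal(5,1)) AS compression_percentage;
-- returns 76.9, which is displayed as 77%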
Compressing game backups would be great functionality. I'm right now doing backups of ten games, and each backed-up game is the same size as its installation folder.
I think it would be fine to choose a compression level, like: none, low, medium, maximum.
I know that compression would increase the time it takes to do a backup, but I put my backups on an external hard drive and use them seldom.
Backup compression is a feature that Microsoft introduced in SQL Server 2008, but many people still don’t understand or regularly use it. The power of this feature is to both speed up the backup process and save disk space. The speed benefit is a result of reduced disk activity, since you stream the compressed backup file directly to disk: you use available CPU cycles to compress the data on the fly, and because a smaller file is written, you reduce any potential delay in writing the backup file to disk. The other obvious benefit is that the resulting backup file can be much smaller. In my experience, I’ve seen compression of 20 to 50 percent, but you will need to test your backups to determine your real space savings, based on the contents of the database and how well your data compresses.
Perform a test backup without compression and see how long it takes and how large the resulting BAK file is for your sample database. Then back up the same database using compression to see whether it is faster and how much smaller the BAK file is when it is complete.
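As a sketch of how to compare the two runs afterwards, SQL Server records both the original and the compressed size of each backup in msdb; the database name below is a placeholder:

-- Compare recent backups of one database; compressed_backup_size equals
-- backup_size when the backup was not compressed.
SELECT TOP (5)
       backup_finish_date,
       backup_size / 1048576.0 AS original_mb,
       compressed_backup_size / 1048576.0 AS compressed_mb,
       (1 - compressed_backup_size / backup_size) * 100 AS space_saved_pct
FROM msdb.dbo.backupset
WHERE database_name = 'MyDatabase'
ORDER BY backup_finish_date DESC;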
To create compressed database backups, all you need to do is add the COMPRESSION option to the BACKUP command as shown below:
BACKUP DATABASE MyDatabase TO DISK = 'H:\BACKUPS\MyDatabase.BAK' WITH FORMAT, COMPRESSION;
Compression has been a feature available with SQL Server 2017 in both the Standard and Enterprise editions:
Feature | Enterprise | Standard | Web | Express with Advanced Services | Express |
---|---|---|---|---|---|
Server core support 1 | Yes | Yes | Yes | Yes | Yes |
Log shipping | Yes | Yes | Yes | No | No |
Database mirroring | Yes | Yes (Full safety only) | Witness only | Witness only | Witness only |
Backup compression | Yes | Yes | No | No | No |
Database snapshot | Yes | Yes | Yes | Yes | Yes |
Always On failover cluster instances 2 | Yes | Yes | No | No | No |
Always On availability groups 3 | Yes | No | No | No | No |
Basic availability groups 4 | No | Yes | No | No | No |
Online page and file restore | Yes | No | No | No | No |
Online indexing | Yes | No | No | No | No |
Resumable online index rebuilds | Yes | No | No | No | No |
Online schema change | Yes | No | No | No | No |
Fast recovery | Yes | No | No | No | No |
Mirrored backups | Yes | No | No | No | No |
Hot add memory and CPU | Yes | No | No | No | No |
Database recovery advisor | Yes | Yes | Yes | Yes | Yes |
Encrypted backup | Yes | Yes | No | No | No |
Hybrid backup to Windows Azure (backup to URL) | Yes | Yes | No | No | No |
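To see which column of the matrix applies to your own instance, a quick check is to ask the server for its edition; SERVERPROPERTY is a standard built-in, and the interpretation of the values is up to the matrix above:

-- Check the installed edition before relying on edition-specific features.
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('Edition') AS edition,              -- e.g. 'Standard Edition (64-bit)'
       SERVERPROPERTY('EngineEdition') AS engine_edition;  -- 2 = Standard, 3 = Enterprise, 4 = Express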
You can read more about compression here.
How to compress postgres database backup using barman
First things first:
Barman is indeed a really good tool, but for your use case I doubt it will turn out to be a productive one.
Second, you need to rework some of your backup strategy. Looking at the backups you are producing, and not knowing your retention (I'm assuming it won't be long, given the size of the backups), here are my 2 cents:
Compressing backups may save time and space, but it adds overhead and time when you want to restore (again, this matters less for smaller databases).
Taking daily full backups of a database that is terabytes in size (and might keep growing) is not a good option compared with taking incremental backups and keeping logs on top of them.
Take a base backup every week, a differential backup daily, and then keep logs on top of that. You can always fine-tune this along with the retention.
I'm not sure if your environment supports snapshots (at the machine level, storage level, etc.), but they are another way out and really speed up backups and restores. In essence: take the snapshot, coordinate pg_start_backup and pg_stop_backup with the snapshot timing, and then apply the archived logs (this is assuming you archive your logs).
The above would help speed up the process and also give you back some space.
Now, coming back to the tool: pgBackRest is an excellent option for performing all of the above (except the snapshot approach). I see it working better for your use case than Barman if you do not want to redesign your backup strategy (which I would recommend doing irrespective of which tool you use).
I would recommend against taking a backup and then zipping it just to save space: it adds more time after the backup is already done, and restorability takes a hit. Going forward, this approach will not scale well either.