Compression and Backup

A step-by-step demo of the powerful VM backup compression feature of NAKIVO Backup & Replication shows how to save hundreds on storage space expansion. In zip-based backup scripts for SQL Server, a cleanuphrs parameter cleans up backup zip files older than the specified number of hours. Compression is used for backups because it decreases disk space usage, but it also has an inverse impact on backup speed.
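
Sketching that cleanup step in T-SQL requires a loud caveat: xp_delete_file is the undocumented extended procedure that maintenance plans use to age out backup files, the parameter order below (file type 0 = backup, folder, extension, cutoff date) is an assumption based on common usage, and it only removes files it recognizes as native backups, so this is an analogue of the zip cleanup rather than a drop-in replacement. The folder path and the 48-hour default are hypothetical.

    -- Hedged sketch: delete native .bak files older than @cleanuphrs hours
    DECLARE @cleanuphrs int = 48;  -- hypothetical default, like the script's cleanuphrs parameter
    DECLARE @cutoff datetime = DATEADD(HOUR, -@cleanuphrs, GETDATE());
    EXECUTE master.dbo.xp_delete_file 0, N'H:\BACKUPS\', N'bak', @cutoff;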


How to compress backup files to < 10 GB


There is probably only one way to shrink the backup size from around 120 GB to 10 GB: select the most important data for a separate backup.

There are several backup tools that let you select the directories and files you want to include in the backup. Some of them provide compression. You can start searching via this link:

help.ubuntu.com/community/BackupYourSystem

You can run this backup regularly.

Full backup

You can back up your complete system once in a great while, for example after major system upgrades or once per year. And you can keep this backup on an external drive that you store far from the computer (in case of fire or theft).

answered Mar 21 '18 at 5:30 by sudodus

Source: https://askubuntu.com/questions/1017798/how-to-compress-back-up-files-to-10gb

Status of the compression feature

Here is an example.

I have a Dropbox account and folder /home/Dropbox. I encrypt the whole /home directory and back up to Dropbox.

Dropbox puts a file, let's say a copy of the Windows image, into my Dropbox directory. Dropbox then measures the size of the new blobs that are added on their servers. If the size has not changed, Dropbox infers that a copy of Windows already exists on my computer; in other words, Dropbox has recovered information about one of my files. Repeat for other files, images, texts, sentences, messages, blobs, etc. The attacker only needs to be able to add or drop plaintext (or otherwise have some control over plaintext) and measure the size of the ciphertext.

It’s client-side scanning like the one proposed by Apple, with crypto prepared by restic!

In this case, dedup works somewhat similarly. But I suppose deduplication does not replace compression, which is why compression actually further reduces repository size.

Now this is from some random Joe; imagine what sophisticated attacks the NSA could do.

Once you interact with a cunning adversary, the features you include in your software can be opportunities for the adversary.

Source: https://forum.restic.net/t/status-of-the-compression-feature/1908

SQL Backup 9

SQL Backup Pro offers four compression levels, described below. Generally, the smaller the resulting backup file, the slower the backup process.

Smaller backups save you valuable disk space. For example, if you achieve an average compression rate of 80%, you can store the backup for a 42.5 gigabyte (GB) database on an 8.5 GB DVD-R dual layer disc. Smaller files can also be transferred more quickly over the network, which is particularly useful, for example, when you want to store backups off-site.

To set the compression level, use the wizard in the graphical user interface, the command line, or the extended stored procedure.
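
For example, via the extended stored procedure (a hedged sketch following the pattern of Red Gate's SQL Backup command-line examples; the database name and path are placeholders, and <AUTO> is SQL Backup's automatic file-naming token):

    EXECUTE master..sqlbackup
        '-SQL "BACKUP DATABASE [MyDatabase] TO DISK = [D:\Backups\<AUTO>.sqb] WITH COMPRESSION = 3"';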

The compression level used to create a backup does not noticeably affect the time necessary to restore the backup.

The compression you can achieve depends upon the type of data stored in the database; if the database contains a lot of highly-compressible data, such as text and uncompressed images, you can achieve higher compression. For full backups, you can use the Compression Analyzer to perform a test on the databases to check which compression level will produce the best result for your requirements.

Compression level 4

Compression level 4 uses the LZMA compression algorithm. This compression level generates the smallest backup files in most cases, but it uses the most CPU cycles and takes the longest to complete.

Compression level 3

Compression level 3 uses the zlib compression algorithm.

On average, the backup process is 25% to 30% faster than when compression level 4 is used, and 27% to 35% fewer CPU cycles are used. Backup files are usually 5% to 7% larger.

Compression level 2

This compression level uses the zlib compression algorithm, and is a variation of compression level 3.

On average, the backup process is 15% to 25% faster than when compression level 3 is used, and 12% to 14% fewer CPU cycles are used. Backup files are usually 4% to 6% larger.

Compression level 1

This is the default compression level. It is the fastest compression, but results in larger backup files.

On average, the backup process is 10% to 20% faster than when compression level 2 is used, and 20% to 33% fewer CPU cycles are used. Backup files are usually 5% to 9% larger than those produced by compression level 2.

However, if a database contains frequently repeated values, compression level 1 can produce backup files that are smaller than if you used compression level 2 or 3. For example, this may occur for a database that contains the results of Microsoft SQL Profiler trace sessions.

Compression level 0

If you do not want to compress your backups, specify compression level 0 from the command line or extended stored procedure; in the graphical user interface, clear the Compress backup check box in the wizard. For example, you may want to do this if you require only encryption and you do not want to compress your backups.

Compression percentage

SQL Backup Pro calculates the percentage compression of a backup by comparing the size of the SQL Backup backup file with the total database size.

For example, if a database comprises a 10 GB data file and a 3 GB transaction log file, and SQL Backup Pro generates a full backup of the database to create a backup file that is 3 GB, the compression for this backup is calculated as [1 − (3/13)] × 100 ≈ 77%.

The compression percentage is displayed in the Activity History.

Source: https://documentation.red-gate.com/sbu9/settings-and-options/compression-levels
STEAM
Compression of game backups
Hi,

It would be great functionality to compress game backups. I'm right now doing backups of ten games, and each backed-up game is the same size as its installation folder.
I think it would be fine to choose a compression level like: none, low, medium, maximum.
I know that compression would increase the time to do a backup, but I put my backups on an external hard drive and use them seldom.
Source: https://steamcommunity.com/discussions/forum/10/627457521139993970/

Compression - @SeniorDBA

Backup compression is a feature that Microsoft introduced in SQL Server 2008, but many people still don't understand or regularly use it. The power of this feature is that it both speeds up the backup process and saves disk space. The speed benefit is a result of reduced disk activity: you can use your available CPU cycles to compress the data as you stream the backup straight to disk, and since a smaller file is written, you will probably reduce any potential delay as the backup file is written out. The other obvious benefit is that the resulting backup file can be much smaller. In my experience, I've seen compression between 20 and 50 percent, but you will need to test your backup to determine your real space savings based on the contents of the database and how well your data can be compressed.

Perform a test backup without compression and see how long it takes and how large the resulting BAK file is for your sample database. Then back up the same database using compression to see if it is faster and how much smaller the BAK file is when it is complete.

To create compressed database backups, all you need to do is add the COMPRESSION option to the BACKUP command as shown below:

BACKUP DATABASE MyDatabase TO DISK = 'H:\BACKUPS\MyDatabase.BAK' WITH FORMAT, COMPRESSION;
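
To quantify the difference, msdb records both the logical size and the compressed size of every native backup, so a query along these lines (using the MyDatabase placeholder from the example above) shows the compression actually achieved:

    -- Compare uncompressed vs. compressed sizes for recent backups of MyDatabase
    SELECT TOP (5)
        database_name,
        backup_finish_date,
        backup_size / 1048576.0            AS uncompressed_mb,
        compressed_backup_size / 1048576.0 AS compressed_mb,
        100.0 * (1 - compressed_backup_size / backup_size) AS compression_pct
    FROM msdb.dbo.backupset
    WHERE database_name = 'MyDatabase'
    ORDER BY backup_finish_date DESC;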

As of SQL Server 2017, backup compression is available in both the Standard and Enterprise editions:

Feature | Enterprise | Standard | Web | Express with Advanced Services | Express
------- | ---------- | -------- | --- | ------------------------------ | -------
Server core support 1 | Yes | Yes | Yes | Yes | Yes
Log shipping | Yes | Yes | Yes | No | No
Database mirroring | Yes | Yes (full safety only) | Witness only | Witness only | Witness only
Backup compression | Yes | Yes | No | No | No
Database snapshot | Yes | Yes | Yes | Yes | Yes
Always On failover cluster instances 2 | Yes | Yes | No | No | No
Always On availability groups 3 | Yes | No | No | No | No
Basic availability groups 4 | No | Yes | No | No | No
Online page and file restore | Yes | No | No | No | No
Online indexing | Yes | No | No | No | No
Resumable online index rebuilds | Yes | No | No | No | No
Online schema change | Yes | No | No | No | No
Fast recovery | Yes | No | No | No | No
Mirrored backups | Yes | No | No | No | No
Hot add memory and CPU | Yes | No | No | No | No
Database recovery advisor | Yes | Yes | Yes | Yes | Yes
Encrypted backup | Yes | Yes | No | No | No
Hybrid backup to Windows Azure (backup to URL) | Yes | Yes | No | No | No

You can read more about compression here.


Source: https://seniordba.wordpress.com/2019/04/15/faster-backups-with-sql-server-backup-compression/


How to compress a Postgres database backup using Barman

First things first:

Barman is indeed a really good tool, but for your use case I doubt it will turn out to be a productive one.

Second, you need to redo some of your backup strategy work. Looking at the backups you are consuming, and not knowing what your retention is (I assume it won't be high, given the size of the backups), here are my 2 cents:

  1. Compressing backups may save time and space, but it adds overhead and time when you want to restore (again, this doesn't apply to smaller DBs).

  2. Taking daily full backups of a DB that is TBs in size (and might grow as well) is not a good option when you compare it with having incrementals and logs on top of a base.

  3. Have a base backup every week, differentials daily, and then logs on top of that; see the T-SQL sketch after this list for the equivalent tiering. You can always fine-tune this along with the retention.

  4. I'm not sure if your environment supports snapshots, but they are another way out that really speeds up restores and backups (at the machine level, storage level, etc.). In essence, take the snapshot, tune your pg_start_backup and pg_stop_backup calls to line up with the snapshot timings, and then put the archival logs on top (this is if someone archives their logs).
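
The tiering in point 3 is engine-agnostic. Expressed purely for illustration in the T-SQL used elsewhere on this page (database name and paths are placeholders; Barman and pgBackRest implement the same idea for Postgres), it looks like this:

    -- Weekly base (full) backup
    BACKUP DATABASE MyDatabase TO DISK = 'H:\BACKUPS\MyDatabase_full.bak'
        WITH COMPRESSION, INIT;

    -- Daily differential: only what changed since the last full backup
    BACKUP DATABASE MyDatabase TO DISK = 'H:\BACKUPS\MyDatabase_diff.bak'
        WITH DIFFERENTIAL, COMPRESSION, INIT;

    -- Frequent log backups on top, enabling point-in-time restore
    BACKUP LOG MyDatabase TO DISK = 'H:\BACKUPS\MyDatabase_log.trn'
        WITH COMPRESSION, INIT;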

The above would help in speeding up the process and also give you back space.

Now, coming back to the tool: pgBackRest is an excellent option to perform all of the above (except 4). I see it working better than Barman for your use case if you do not want to redesign the backup strategy (which I would recommend doing irrespective of which tool you use).

I would recommend against taking a backup and then zipping it just to save space: it costs more time even after the backup is done, and restorability takes a hit. This approach will not scale well in the future either.

answered Jul 8 at 4:31 by Raj Verma

Source: https://stackoverflow.com/questions/68292699/how-to-compress-postgres-database-backup-using-barman


To save drive space, many users choose to compress their backup data. Yet some users still wonder whether that is the right choice. Hence, in this post, we will look at this issue and lay out its pros and cons in detail.

More and more users have understood the significance of data backup, in that backup data makes future data recovery much easier. For instance, if you have backed up your Outlook data file, even if the original file gets corrupted, you can still recover your Outlook data from the backup file. Therefore, making regular data backups is vitally important.

Yet, as time goes on, backup files accumulate, and you will discover that backups have taken up a lot of disk space. Under this circumstance, you have two alternatives. One is to delete the older backups. The other is to compress the backup data. Many users prefer the latter, but at the same time they fear that compression will cause trouble. Therefore, in what follows we will lay out the advantages and disadvantages of compressing backup data, helping you make your own decision.

Is It Right to Compress Your Backup Data?

Advantages

Compression is actually a mathematical process that takes data and makes it smaller by removing redundancy and repeated patterns. In other words, it reduces the file size by encoding the data in a much more efficient way.

So the greatest advantage of compressing your backup data is apparent: it makes your backup data smaller, thereby saving a lot of space on the backup storage device. Therefore, if your device tends to run out of space, compressing the backup data is a good option.

Disadvantages

Nevertheless, compressing backup data can cause some trouble as well, as described below.

  1. First off, compression is a CPU-intensive activity. If your PC is too old or slow, it is inadvisable to run compression during backup. Otherwise, your computer may be prone to getting stuck or even crashing.
  2. In addition, compressing backups takes time, which makes the backup last much longer. Similarly, restoring from a compressed backup takes more time as well.
  3. If the entire backup is compressed, then when a bad sector appears in the backup it is more difficult to recover. That is to say, if a single bit goes wrong, all the rest of the compressed data can become compromised too. In a nutshell, a single error can damage the whole backup in a moment.

Author Introduction:

Shirley Zhang is a data recovery expert at DataNumen, Inc., which is the world leader in data recovery technologies, including sql recovery and outlook repair software products. For more information visit www.datanumen.com

Source: https://www.datanumen.com/blogs/right-compress-backup-data/

Is it safe to compress backups for databases with TDE enabled?

I've been reading about this for a long time, and it seems it's not safe to compress a backup when the database has TDE enabled.

You asked about safety, so replying to that: using backup compression with TDE is safe, and I have been using it quite a lot. What doesn't work well for SQL Server 2014 and below is the compression mileage (the amount of compression you get): the backups are only slightly compressed compared with backup compression on a non-TDE-enabled database. From the docs which Andrew Sayer shared (Backup compression with TDE):

Starting from SQL Server 2016

Starting with SQL Server 2016 (13.x), setting MAXTRANSFERSIZE larger than 65536 (64 KB) enables an optimized compression algorithm for Transparent Data Encryption (TDE) encrypted databases that first decrypts a page, compresses it, and then encrypts it again. If MAXTRANSFERSIZE is not specified, or if MAXTRANSFERSIZE = 65536 (64 KB) is used, backup compression with TDE encrypted databases directly compresses the encrypted pages, and may not yield good compression ratios.
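
In practice, enabling the optimized path on SQL Server 2016 through 2019 pre-CU5 just means adding MAXTRANSFERSIZE to the backup command (a sketch; the database name and path are placeholders, and any multiple of 64 KB above 65536 works):

    -- MAXTRANSFERSIZE > 65536 triggers decrypt-compress-re-encrypt for TDE databases
    BACKUP DATABASE MyTdeDatabase TO DISK = 'H:\BACKUPS\MyTdeDatabase.bak'
        WITH COMPRESSION, MAXTRANSFERSIZE = 1048576;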

Starting from SQL Server 2019

Starting with SQL Server 2019 (15.x) CU5, setting MAXTRANSFERSIZE is no longer required to enable this optimized compression algorithm with TDE. If the BACKUP command is specified WITH COMPRESSION, or the backup compression default server configuration is set to 1, MAXTRANSFERSIZE will automatically be increased to 128 KB to enable the optimized algorithm.

Is anyone experiencing errors during restore with compressed backups of TDE databases?

Show me the error; my best guess is that you are restoring a TDE-enabled database without the certificates.

answered by Shanky