MySQL backup to file, gzip and load in one step
Fri, 2012-11-30 00:26 — Shinguz
When a MySQL Slave is set up with mysqldump you have 2 possibilities:

- You dump into a file and then load the data into the Slave with the mysql client utility.
- You dump directly into the mysql client utility.
The first possibility has the advantage that you can restart the load if it fails. You can also look into the file (and make some changes if needed).
The second possibility has the advantage that you need no intermediate disk space and that it is possibly faster. But when the load fails you have to start again from the very beginning.
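As a minimal sketch, the two variants look like this; the host, credentials and file name are placeholders, and the mysqldump options are reduced to a bare minimum:

  # Variant 1: dump into a file, then load the file into the Slave.
  mysqldump --user=root --all-databases --master-data=1 > /tmp/full_backup.sql
  mysql --user=root --host=192.168.1.60 --port=3306 < /tmp/full_backup.sql

  # Variant 2: dump directly into the mysql client utility.
  mysqldump --user=root --all-databases --master-data=1 \
  | mysql --user=root --host=192.168.1.60 --port=3306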
What I was looking for is a way to combine everything in one step: dump to a compressed file and, in the same step, load the database into a slave. This is what I found to meet these requirements:
  mysqldump --user=root --all-databases --flush-privileges --single-transaction --master-data=1 --quick \
    --flush-logs --triggers --routines --events \
  | tee >(gzip > /tmp/full_backup.sql.gz) \
  | mysql --user=root --host=192.168.1.60 --port=3306
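Note that the >( ... ) process substitution is a bash feature, so the command has to run under bash (it does not work in plain sh). And because the dump is kept on disk, a failed load can be restarted from the compressed file without dumping again, for example (a sketch with the same placeholder host):

  # Restart the load from the compressed dump without re-running mysqldump.
  zcat /tmp/full_backup.sql.gz | mysql --user=root --host=192.168.1.60 --port=3306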
With this command you can even spread the load over several CPUs of the system:
    PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
  24747 mysql  20   0  534m  56m 5504 S 36.1  0.7  4:12.35 mysqld
   4967 mysql  20   0  402m  33m 5236 S  7.0  0.4  0:02.06 mysqld
   4982 mysql  20   0 23348 2112 1216 S  6.6  0.0  0:01.64 mysqldump
   4984 mysql  20   0 28608 3856 1372 S  5.6  0.0  0:01.58 mysql
   4986 mysql  20   0  4296  688  304 S  5.3  0.0  0:02.10 gzip
   4983 mysql  20   0 98.5m  628  544 S  0.7  0.0  0:00.13 tee
If gzip becomes the bottleneck you can try pigz instead.
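As a sketch, the same pipeline with pigz (parallel gzip); pigz has to be installed on the system, and the thread count here is an assumption to be tuned to your hardware:

  # pigz compresses with several threads; -p sets how many to use.
  mysqldump --user=root --all-databases --flush-privileges --single-transaction --master-data=1 --quick \
    --flush-logs --triggers --routines --events \
  | tee >(pigz -p 4 > /tmp/full_backup.sql.gz) \
  | mysql --user=root --host=192.168.1.60 --port=3306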