Abdel-Mawla Gharieb


Replication Troubleshooting - Classic VS GTID

Fri, 2014-07-04 15:05

In previous posts I explained how to set up MySQL replication using both the classic method (based on binary log information) and transaction-based replication (based on GTID). In this article I'll summarize how to troubleshoot the most common MySQL replication issues we might face, with a simple comparison of how they are solved in each replication method (Classic vs. GTID).

There are two main operations we might need to do in a replication setup:

  • Skip or ignore a statement that causes replication to stop.
  • Re-initialize a slave when replication is broken and cannot be started anymore.
Skip or Ignore statement

Basically, the slave should always be synchronized with its master and hold an identical copy of the data. For several reasons, however, the two can become inconsistent (an unsafe statement in SBR, a slave that is not read_only and was modified outside of the replicated queries, etc.), which causes errors and stops replication, e.g. the master inserted a record which already existed on the slave (duplicate entry), or updated/deleted a row which did not exist on the slave.

To solve this issue, we either have to reverse what was done on the slave (e.g. delete the inserted rows) if that change was made by mistake and is known, or we can skip executing those statements on the slave and resume replication. I'll focus on skipping a statement in this post, as it requires different steps in Classic and GTID replication.

Sample error messages (from SHOW SLAVE STATUS output):

Last_SQL_Error: Could not execute Write_rows event on table test.t1; Duplicate entry '4' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000304, end_log_pos 285
Last_SQL_Error: Could not execute Update_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 492
Last_SQL_Error: Could not execute Delete_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 688

How to solve that issue?
CLASSIC REPLICATION

Solving this problem is a straightforward process in a classic replication setup; all we need to do is issue the following SQL commands on the slave:

SQL> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
SQL> START SLAVE;
GTID REPLICATION

Solving this problem is not as straightforward in GTID replication as it is in classic replication, and the variable SQL_SLAVE_SKIP_COUNTER is not useful here anymore.

To get this problem solved in a GTID replication we will need to inject an empty transaction as follows:

  • Check which transaction is causing the problem:
    SQL> SHOW SLAVE STATUS\G
    .
    .
    Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-7
    Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
                       b9b4712a-df64-11e3-b391-60672090eb04:1-6
    Auto_Position: 1

    Retrieved_Gtid_Set lists the GTIDs retrieved from the master.

    Executed_Gtid_Set lists the GTIDs already executed on the slave.

    According to the above output, the slave retrieved transactions 1 through 7 from the master (b9b4712a-df64-11e3-b391-60672090eb04:1-7) but executed only 1 through 6 (b9b4712a-df64-11e3-b391-60672090eb04:1-6), so the problem is in transaction number 7 (see the sketch after this list for a way to compute this directly).

  • Inject an empty transaction:
    SQL> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
    SQL> BEGIN; COMMIT;
    SQL> SET GTID_NEXT='AUTOMATIC';
    SQL> START SLAVE;

    BE CAUTIOUS: The first part of Executed_Gtid_Set (4f6d62ed-df65-11e3-b395-60672090eb04:1) contains the locally executed GTIDs (not received from the master), while the second part (b9b4712a-df64-11e3-b391-60672090eb04:1-6) contains the executed GTIDs which were retrieved from the master (identify the master's UUID either from the UUID in "Retrieved_Gtid_Set", which is by definition the master's, or by issuing SHOW GLOBAL VARIABLES LIKE 'server_uuid'; on the master server). So make sure to use the master's UUID when injecting the empty transaction, otherwise the problem will remain and the slave won't start.
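Instead of reading the missing transaction off by eye, it can also be computed directly. The following is a minimal sketch using MySQL's built-in GTID_SUBTRACT() function with the example GTID sets shown above (only the master's UUID part of Executed_Gtid_Set is used):

SQL> SELECT GTID_SUBTRACT(
    ->   'b9b4712a-df64-11e3-b391-60672090eb04:1-7',    -- Retrieved_Gtid_Set
    ->   'b9b4712a-df64-11e3-b391-60672090eb04:1-6'     -- Executed_Gtid_Set (master's UUID part)
    -> ) AS missing_transactions;

This returns b9b4712a-df64-11e3-b391-60672090eb04:7, i.e. the transaction for which the empty transaction has to be injected.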

Note:

After starting the slave successfully, in either classic or GTID replication, we might need to use a combination of the Percona tools pt-table-checksum and pt-table-sync to fix any remaining inconsistency.
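As a rough sketch of how those two tools are typically combined (host, user and password values below are placeholders, not taken from the original post):

# On the master: compute per-chunk checksums and replicate them to the slaves.
shell> pt-table-checksum --replicate=percona.checksums h=master_ip,u=root,p=s3cret

# Against the slave: print the statements needed to bring it back in sync with its master
# (replace --print with --execute to actually apply them).
shell> pt-table-sync --replicate percona.checksums --sync-to-master h=slave_ip,u=root,p=s3cret --print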

Re-initialize/ re-build a slave

For many reasons, we might end up having to re-build a slave to get replication working again, e.g. if we stopped a slave for a while and the master purged a binary log file which that slave still needs, or if there are so many duplicate entry errors that pt-table-checksum and pt-table-sync cannot reasonably be used. In that case we have to re-initialize the slave from the beginning by taking a fresh backup from the master server and restoring it on the slave. Let's check how we can do that in both replication methods.

How to solve that issue?
CLASSIC REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

Fix steps:

  • Backup the master server by the following command: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Restore the backup file on the slave: shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Get the binary logs information when the backup was taken:
    shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep "CHANGE MASTER TO"
    CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
  • Issue the "CHANGE MASTER TO" command using the new information: SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
  • Start the slave: SQL> START SLAVE;

NOTE:

The Xtrabackup tool could be used instead of mysqldump, especially if the database size is big. Check out this link for more information.
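As a hedged sketch of how such a backup and restore could look with Percona XtraBackup (paths, credentials and the TIMESTAMP directory are placeholders; the original post only links to a separate article for the details):

# On the master: take and prepare a full backup.
shell> innobackupex --user=root --password=s3cret /path/to/backupdir/
shell> innobackupex --apply-log /path/to/backupdir/TIMESTAMP/

# Copy the prepared backup to the slave and restore it into the (empty) datadir.
shell> innobackupex --copy-back /path/to/backupdir/TIMESTAMP/
shell> chown -R mysql:mysql /var/lib/mysql

# The binary log file and position for the CHANGE MASTER TO command are stored in:
shell> cat /path/to/backupdir/TIMESTAMP/xtrabackup_binlog_info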

GTID REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'

Fix steps:

  • Backup the master server by the following command: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Check the GTID value when the backup was taken:
    shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep PURGED
    SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';
  • Reset the GTID_EXECUTED and GTID_PURGED values on the slave: SQL> RESET MASTER;
  • Restore the backup file on the slave: shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Make sure that the values of GTID_EXECUTED and GTID_PURGED are the correct ones:
    SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_executed';
    +---------------+------------------------------------------+
    | Variable_name | Value                                    |
    +---------------+------------------------------------------+
    | gtid_executed | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
    +---------------+------------------------------------------+
    1 row in set (0.00 sec)

    SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_purged';
    +---------------+------------------------------------------+
    | Variable_name | Value                                    |
    +---------------+------------------------------------------+
    | gtid_purged   | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
    +---------------+------------------------------------------+
    1 row in set (0.01 sec)
  • Start the slave: SQL> START SLAVE;

NOTES:

  • If we don't reset the GTID_EXECUTED and GTID_PURGED values on the slave before restoring the backup file, the following error will appear:
    shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
    ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.

    The above error indicates that the statement at the beginning of the backup file - "SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';" - failed because GTID_PURGED cannot be set unless GTID_EXECUTED is empty. Since GTID_EXECUTED is a read-only variable, the only way to empty it is to issue "RESET MASTER" on the slave server before restoring the backup file.

  • The Xtrabackup tool could be used here as well instead of mysqldump, and without the need to reset the GTID_EXECUTED and GTID_PURGED values. Check out this link for more information.
Conclusion

While GTID provides many benefits over classic replication, it comes with different troubleshooting and repair strategies, which must be understood before deploying GTID in production systems.

GTID In Action

Thu, 2014-06-12 14:09

In a previous post I explained how to set up MySQL replication using the classic method (based on binary log information). In this article I'll go through the implementation of transaction-based replication using GTID in different scenarios.

What is the concept of GTID protocol?

GTID is a Global Transaction IDentifier introduced in MySQL 5.6.5. It is not only unique on the server where it originated, it is unique across all servers in a replication setup.
GTID also guarantees consistency: once a transaction is committed on a server, any other transaction having the same GTID will be ignored, i.e. a transaction committed on the master will be applied only once on the slaves.

A GTID consists of two parts separated by a colon: {source_id:transaction_id}.

Where:

  • source_id: normally the UUID of the server on which the transaction originated, e.g. "b9b4712a-df64-11e3-b391-60672090eb04".
  • transaction_id: a sequence number determined by the order in which the transaction was committed.

The following is the GTID for the third transaction on a server having the uuid "b9b4712a-df64-11e3-b391-60672090eb04":
b9b4712a-df64-11e3-b391-60672090eb04:3

As a new protocol in MySQL, GTID comes with a set of new related variables; the following are the most important ones (IMHO), with a quick way to inspect them shown after the list:

  • gtid-mode: ON|OFF to enable or disable GTID. This is not a Boolean variable (0 and 1 are not accepted).
  • enforce-gtid-consistency: prevents executing statements that are not transactionally safe with GTID, like:
    • CREATE TABLE .. SELECT.
    • CREATE TEMPORARY TABLE (inside a transaction).
    • Statements that update nontransactional tables inside a transaction.
  • gtid_purged: The set of transactions that have been purged from the binary logs.
  • gtid_executed: The set of transactions which already executed on that server.
  • gtid_next: The GTID which will be used for the next transaction.
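To see how these look on a running MySQL 5.6 server, here is a minimal sketch for inspecting them (the configuration options appear in SHOW VARIABLES, while the GTID sets can also be selected directly):

SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
SQL> SHOW GLOBAL VARIABLES LIKE 'enforce_gtid_consistency';
SQL> SELECT @@GLOBAL.gtid_executed, @@GLOBAL.gtid_purged;
SQL> SELECT @@SESSION.gtid_next;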
GTID Replication Implementation

Fresh Installations

A fresh installation means that there is no data yet on the master; in other words, we are building a replication setup from scratch.

The implementation process is divided into two parts:

MASTER'S SIDE CONFIGURATION:
  • Add the following variables to the MySQL configuration file (my.cnf):
    [mysqld]
    server-id=1
    log-bin=mysql-bin
    binlog_format=ROW
    gtid-mode=on
    enforce-gtid-consistency
    log-slave-updates
  • Restart MySQL so that configuration changes take place: shell> service mysql restart
  • Create a MySQL user to be used by the slave: SQL> GRANT REPLICATION SLAVE ON *.* TO 'slave_user_name'@'slave_ip' IDENTIFIED BY 's3cret';

SLAVE'S SIDE CONFIGURATION:
  • Add the following variables to the my.cnf file:
    [mysqld]
    server-id=2
    log-bin=mysql-bin
    binlog_format=ROW
    relay_log=relay-log
    skip-slave-start
    gtid-mode=on
    enforce-gtid-consistency
    log-slave-updates
  • Restart MySQL so that configuration changes take place: shell> service mysql restart
  • Set the master information on the slave:

    Unlike the classic method, we don't need the master's binary log information; all we need is to specify MASTER_AUTO_POSITION=1 instead:

    SQL> CHANGE MASTER TO
      -> MASTER_HOST='master_ip',
      -> MASTER_PORT=3306,
      -> MASTER_USER='slave_user_name',
      -> MASTER_PASSWORD='s3cret',
      -> MASTER_AUTO_POSITION=1;
  • Start replication: SQL> START SLAVE;
  • Check the replication status:
    SQL> SHOW SLAVE STATUS\G
    Slave_IO_State: Waiting for master to send event
    Master_Host: 127.0.0.1
    Master_User: gtid_repl
    Master_Port: 3320
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000007
    Read_Master_Log_Pos: 191
    Relay_Log_File: relay-log.000004
    Relay_Log_Pos: 401
    Relay_Master_Log_File: mysql-bin.000007
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Replicate_Do_DB:
    .
    .
    .
    .
    Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-2
    Executed_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-2
    Auto_Position: 1
Adding New Slave

It's a very simple process to add a new slave to a running replication setup (or to set up replication with existing data) when GTID is being used:

  • Backup the master server shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
  • On the new slave, use the same MySQL configuration as described above (except the server id which should be unique) and restart it.
  • Restore the backup file taken from the master.
  • Issue CHANGE MASTER TO with MASTER_AUTO_POSITION=1 (as shown above).
  • Start the slave.

Is it really that simple?! How did the slave know the backup position? What about transactions executed on the master after that backup was taken?

Actually, when GTID is enabled, mysqldump records the GTID state at the time the backup was taken:

--
-- GTID state at the beginning of the backup
--
SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-7';

After restoring the backup, the variable GTID_EXECUTED will be equal to GTID_PURGED (the value above). When the slave starts, it first sends the range of GTIDs it has executed (GTID_EXECUTED) to the master, so that the master can send back every missing transaction which has not been applied on the slave yet.
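This catch-up set can also be checked manually with the built-in GTID functions. A hedged sketch (the :1-9 value is a hypothetical state of the master after the backup, not taken from the original post):

-- Is everything the slave has executed also known to the master? (1 = yes)
SQL> SELECT GTID_SUBSET('b9b4712a-df64-11e3-b391-60672090eb04:1-7',
    ->                  'b9b4712a-df64-11e3-b391-60672090eb04:1-9') AS slave_set_is_subset_of_master;

-- Which transactions will the master send back once the slave connects?
SQL> SELECT GTID_SUBTRACT('b9b4712a-df64-11e3-b391-60672090eb04:1-9',
    ->                    'b9b4712a-df64-11e3-b391-60672090eb04:1-7') AS missing_on_slave;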


NOTE:

If the database size is big (100GB or so), then using the Xtrabackup tool instead of mysqldump is a very good idea. Check out this link for more information on how to use Xtrabackup to restore a slave server having GTID enabled.

Migration from classic replication to GTID replication

How to perform the migration?

To migrate an already running replication using the classical method to GTID replication, the following steps should be done:

  • Ensure that all servers (master and slaves) are at the same point by setting the master server to read only (SET GLOBAL read_only=ON;) and waiting until all slaves have caught up with the master's data.
  • Shut down MySQL on all servers and add the GTID variables to their configuration files.
  • Besides the GTID variables, add read-only to the master's configuration and skip-slave-start to the slaves' configurations.
  • Start the MySQL service on all servers.
  • Issue the CHANGE MASTER command with MASTER_AUTO_POSITION=1 on all slaves and then start them.
  • Make the master writable again with SET GLOBAL read_only=OFF; (don't forget to remove or comment it out in the master's my.cnf file as well). The sketch after this list gathers the configuration changes in one place.
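A rough sketch of the configuration additions during the downtime, following the example values used earlier in this post (which lines go on which server is noted in the comments):

[mysqld]
# On ALL servers (master and slaves):
gtid-mode=on
enforce-gtid-consistency
log-slave-updates
log-bin=mysql-bin

# Additionally on the master only (temporary, remove after the migration):
read-only

# Additionally on the slaves only:
skip-slave-start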
Is online migration from classic to GTID replication available?

At the time of writing this article, online migration is not possible. As you can see from the above steps, we have to shut down ALL servers at the same time, and that is because of two reasons:

  • GTID can NOT be enabled online because GTID_MODE is a read-only variable (making this variable dynamic is already on Oracle's roadmap).
  • Replication can NOT be established between two or more servers having different values for GTID_MODE, i.e. GTID is either enabled on ALL servers or disabled on ALL servers.
Workaround?

There is a feature request (by the MySQL developers team at Booking.com) for an extra GTID mode (ANONYMOUS_IN-GTID_OUT) which would allow a slave to receive anonymous transactions (transactions from a master having GTID_MODE = OFF, which therefore have no GTIDs) and assign GTIDs to them. Such a slave could then be used as an intermediate server between a master having GTID disabled and slaves having GTID enabled (it would be a slave to the master and a master to the other slaves).

The online migration steps would be:

  • Restart a slave (let's name it slaveA) with GTID_MODE = ANONYMOUS_IN-GTID_OUT.
  • Perform a rolling restart of the other slaves with the normal GTID_MODE=ON, pointing them to slaveA as their new master.
  • Point the application to write to slaveA instead of the old master.
  • Restart the old master with GTID_MODE=ON, having slaveA as its master.

Note: this is not yet available in the Oracle binaries.

More information on this can be found here.

GTID Benefits
  • Simplifies the setup of MySQL replication, as the master's binary log information (file name and position) is not needed anymore.
  • Consistency is guaranteed between master and slave as the committed transaction on the master will be applied only once on the slave.
  • Simple to determine whether masters and slaves are consistent or not.
  • The fail-over process is much easier. When the master fails, there is no need to calculate a slave's binary log coordinates before promoting it to be the new master; MASTER_AUTO_POSITION=1 does the job, since a given transaction carries the same GTID on all servers in the replication setup (a sketch of repointing a slave follows this list).
  • Automatic fail-over scripts are now much easier to implement.
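As a minimal sketch of that simplification (host names are placeholders and this is not a complete fail-over procedure), repointing a remaining slave to the promoted master takes no binary log coordinates at all:

SQL> STOP SLAVE;
SQL> CHANGE MASTER TO
  ->   MASTER_HOST='new_master_ip',
  ->   MASTER_PORT=3306,
  ->   MASTER_USER='slave_user_name',
  ->   MASTER_PASSWORD='s3cret',
  ->   MASTER_AUTO_POSITION=1;
SQL> START SLAVE;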

To know how to troubleshoot GTID replication, check out the Replication Troubleshooting - Classic VS GTID blog.


How to Setup MySQL Master/Slave Replication?

Thu, 2014-04-24 15:53

It's not easy to find a simple source on how to set up MySQL replication, so I thought it might be useful, at least for beginners, to write a direct and simple how-to blog on setting up Master/Slave replication in MySQL using the classic method (binary log information). Check out my post GTID In Action for information about transaction-based replication using GTID.

Before going through the replication setup steps, I think it's better to explain first how replication works in MySQL.

MySQL replication mainly consists of a three-part process:

  • The master server records all data changes to its binary logs (binary log events) and sends them to the slave, once the slave connects to the master, using a thread called the Binlog dump thread.
  • The slave copies the binary log events sent by the master's binlog dump thread to its relay logs using a thread called the Slave I/O thread.
  • The slave applies these changes from the relay logs to its data by replaying (executing) all events using a thread called the Slave SQL thread (a sketch of how to observe these threads follows this list).
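These three threads can be observed on a running setup. A minimal sketch, with the output heavily truncated and only illustrative:

-- On the master: one "Binlog Dump" thread per connected slave.
SQL> SHOW PROCESSLIST\G
     ...
     Command: Binlog Dump
       State: Master has sent all binlog to slave; waiting for binlog to be updated
     ...

-- On the slave: the I/O and SQL threads are reported by SHOW SLAVE STATUS.
SQL> SHOW SLAVE STATUS\G
     ...
     Slave_IO_Running: Yes
     Slave_SQL_Running: Yes
     ...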

Now, let's go through the setup process, which is divided into 3 main sections (assuming you have already installed MySQL on the master and slave servers):

Master's side configuration:
  • Add the following variables to the MySQL configuration file (my.cnf):
    [mysqld]
    server-id=1        ## must be unique
    log-bin=mysql-bin
    binlog_format=ROW
  • Restart MySQL so that configuration changes take place: shell> /etc/init.d/mysql restart
  • Create a MySQL user to be used by the slave: SQL> GRANT REPLICATION SLAVE ON *.* TO 'slave_user_name'@'slave_ip' IDENTIFIED BY 's3cret';
  • Take a full snapshot from the master's databases: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql

    Note:

    If you have MyISAM tables, you should omit the option --single-transaction, while --master-data=2 will automatically turn on the --lock-all-tables option, which is required for MyISAM tables.
  • After preparing the backup file, transfer it to the slave server.
  • If you are setting up fresh M/S servers (no data on the master yet), you don't have to back up the master databases; all you need is to execute the following command on the master server to get the master's coordinates, which will be used later on the slave:
    SQL> SHOW MASTER STATUS\G
    *************************** 1. row ***************************
                 File: mysql-bin.000275
             Position: 120
         Binlog_Do_DB:
     Binlog_Ignore_DB:
    Executed_Gtid_Set:

And that is it on the master's side; let's do the slave's work.

Slave's side configuration:
  • Add the following variables to the my.cnf file:
    [mysqld]
    server-id=2        ## must be unique
    relay_log=relay-log
    skip-slave-start   ## useful to make any checks before starting the slave (this way, the slave must be started manually after each MySQL restart)
  • Restart MySQL so that configuration changes take place: shell> /etc/init.d/mysql restart
  • Restore the full backup taken from the master (if performed): shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Get the master's coordinates information from the backup file: shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep "CHANGE MASTER TO"

    OR from the output of "SHOW MASTER STATUS;" in case of no backups taken (check the final step in the "Master's side configuration").

  • Set the master information on the slave:
    SQL> CHANGE MASTER TO
      -> MASTER_HOST='master_ip',
      -> MASTER_PORT=3306,
      -> MASTER_USER='slave_user_name',      ## the user which was created on the master
      -> MASTER_PASSWORD='s3cret',
      -> MASTER_LOG_FILE='mysql-bin.000275', ## taken from the above step
      -> MASTER_LOG_POS=120;                 ## taken from the above step
  • Start replication: SQL> START SLAVE;
Replication checking and troubleshooting:
  • Once the slave is started, check the replication status on the slave with the following command:
    SQL> SHOW SLAVE STATUS\G
    *************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
    Master_Host: 127.0.0.1
    Master_User: slave_user_name
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000275
    Read_Master_Log_Pos: 120
    Relay_Log_File: mysql-relay.000001
    Relay_Log_Pos: 283
    Relay_Master_Log_File: mysql-bin.000275
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    .
    .
    .
    Last_SQL_Error:
    .
    .
    .
  • If Slave_IO_State = "Connecting ...", make sure that the slave user information is set correctly and that there are no firewall restrictions between the two servers (master and slave); this can be checked by connecting to the master's MySQL from the slave server using the replication user (in this example, slave_user_name).
  • If both Slave_IO_Running and Slave_SQL_Running are Yes, then the replication has been set up correctly.
  • If Slave_SQL_Running = No, check the value of Last_SQL_Error for more details about the SQL error.
  • If you know that error and want to ignore it, you can execute "SET GLOBAL sql_slave_skip_counter = 1;" on the slave and then start the slave again with "START SLAVE;".
  • To restrict all normal users from changing data on the slave - which might break the replication - the option "read-only" should be added to the slave's my.cnf file.
  • The server option "server-id" must be unique among all servers inside the replication setup (masters and slaves).
  • If your database size is big (100GB or so), the Xtrabackup tool could be used instead of mysqldump when preparing the master snapshot, for faster backup and restore operations. For more information on how to use Xtrabackup, check out this link.
  • For more information on how to setup MySQL replication, check out the manual documentation.
  • For more information on how to troubleshoot MySQL replication, check out the Replication Troubleshooting - Classic VS GTID blog.

Setting the right GCache size in Galera Cluster

Thu, 2014-04-17 13:52

One of our customers had a question related to the right value of Galera Cache size (gcache.size) in Galera Cluster for MySQL which I would like to share with you.

The question was: My maintenance window takes 4 hours for my 5TB DB. How can I avoid an SST ?!

Basically, a too small GCache size will lead to an SST (Snapshot State Transfer) instead of an IST (Incremental State Transfer), so we can avoid the SST by setting the GCache to an appropriate value.

To check the current value of the GCache size:

mysql> SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G
Variable_name: wsrep_provider_options
        Value: base_host = 192.168.1.12; . . .
               gcache.page_size = 128M;
               gcache.size = 128M;
               gcs.fc_debug = 0; . . .

The value of the GCache size can be changed by adding the following line to the my.cnf file and restarting the node (it can NOT be changed online):

# my.cnf
[mysqld]
wsrep_provider_options="gcache.size=256M"

But the question is: how can we calculate the right GCache size so that it covers the maintenance window, and at the same time is not larger than needed?

To answer that question, we should first find out how much a given GCache can handle, which can be calculated by the following formula:

Hold time = GCache size / Replication Rate.

Where:

  • Replication Rate = Amount of replicated data / time.
  • Amount of replicated data = (wsrep_replicated_bytes + wsrep_received_bytes) after the maintenance window - (wsrep_replicated_bytes + wsrep_received_bytes) before the maintenance window.

The amount of replicated data for the customer's case = 7200MB.

Now we can find out how long the default GCache (128MB) can hold the replication traffic in the customer's case:

Hold time = 128MB / (7200MB / 4h) = 128MB / 0.5 MB/s = 256s.

Then we can calculate the right GCache size to cover the maintenance window using the following formula:
GCache size = Maintenance window * Replication Rate = 14400s * 0.5 MB/s.
GCache size = 7200MB.

In other words, the right GCache size should be equivalent to (or not less than) the amount of replicated data.
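A minimal sketch of how this can be measured in practice (the queries are the same ones used later in this post; the arithmetic is done outside of MySQL):

-- Run on the node right before the maintenance window starts, and again right after it ends:
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes';
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_received_bytes';

-- Then:
--   amount of replicated data = (replicated + received) after - (replicated + received) before
--   replication rate          = amount of replicated data / window length in seconds
--   gcache.size               should be >= window length in seconds * replication rate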

A shorter way using the binary logs size

Is there any relation between (wsrep_replicated_bytes + wsrep_received_bytes) and the binary log traffic? Let's check that with the following test scenario:

  • Start a Galera Cluster; the newest binary log file is still practically empty (120 bytes), and the status variables are near zero as well:
    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes';
    +------------------------+-------+
    | Variable_name          | Value |
    +------------------------+-------+
    | wsrep_replicated_bytes | 0     |
    +------------------------+-------+
    1 row in set (0.00 sec)

    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_received_bytes';
    +----------------------+-------+
    | Variable_name        | Value |
    +----------------------+-------+
    | wsrep_received_bytes | 368   |
    +----------------------+-------+
    1 row in set (0.00 sec)
  • Execute some DML statements on the current node (to increase wsrep_replicated_bytes) and on another node (to increase wsrep_received_bytes), then check the status values:
    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes';
    +------------------------+----------+
    | Variable_name          | Value    |
    +------------------------+----------+
    | wsrep_replicated_bytes | 80125192 |
    +------------------------+----------+
    1 row in set (0.00 sec)

    mysql> SHOW GLOBAL STATUS LIKE 'wsrep_received_bytes';
    +------------------------+----------+
    | Variable_name          | Value    |
    +------------------------+----------+
    | wsrep_received_bytes   | 40062948 |
    +------------------------+----------+
    1 row in set (0.00 sec)

    The Amount of replicated data = (80125192 + 40062948) - (0 + 368) = 120187772 Bytes.

  • Checking the increase of the binary log file:

    shell> ll /var/lib/mysql/mysql-bin.000243
    -rw-rw---- 1 mysql mysql 113769454 Mar 20 13:42 mysql-bin.000243
  • Notes:

    • The variable log_slave_updates MUST be enabled on ALL nodes, otherwise the writesets counted in wsrep_received_bytes will not be reflected in the binary logs, which will lead to WRONG calculations!!
    • Since the cluster was freshly started, MySQL began writing to a new binary log file. In an already running cluster we can force MySQL to start a new binary log file by issuing the SQL command "FLUSH BINARY LOGS;", so that we can use the total size of the binary logs generated after that point during the maintenance window (see the sketch after this list).
    Conclusion:

    Although the binary log traffic will always be somewhat less than the amount of replicated data, the two values are close enough to give a rough estimate of the right GCache size, hence the formula can be written as follows:

    • GCache size = Maintenance window * Replication Rate.
    • GCache size = (wsrep_replicated_bytes + wsrep_received_bytes) after the maintenance window - (wsrep_replicated_bytes + wsrep_received_bytes) before the maintenance window.
    • GCache size = binary log traffic which occurs during the maintenance window.
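
A minimal sketch of that shorter measurement, assuming binary logging (with log_slave_updates) is enabled on the node as noted above:

-- Just before the maintenance window, rotate to a fresh binary log file:
FLUSH BINARY LOGS;

-- After the maintenance window, list the binary logs and add up the File_size
-- values (in bytes) of the files created since the FLUSH; that sum is roughly
-- the amount of replicated data, i.e. the minimum gcache.size:
SHOW BINARY LOGS;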

Impact of General Query Log on MySQL Performance

Tue, 2014-04-08 11:19

Sometimes, it is required to enable the general query log (which is disabled by default). When the general query log is enabled, the server writes to this log information about client connections and disconnections as well as each SQL statement received from clients.

The question is: does enabling the general query log affect MySQL performance?
Also, the output of this log can be recorded in either a file or a table in the mysql database (mysql.general_log); what is the performance impact of each one?

Let's run some simple benchmarks for those scenarios to measure the actual impact on MySQL performance.

System Information: HW configurations: Software configurations:
  • OS: Ubuntu 12.04
  • MySQL Server: 5.6.17
  • Sysbench: 0.4.12
Test Information:
  • Sysbench command: sysbench --num-threads=1 --max-requests=1000 --db-driver=mysql --test=oltp --mysql-table-engine=InnoDB --mysql-user=root run .
  • Table structure which was used by sysbench tests:

    mysql> show create table sbtest.sbtest\G
    CREATE TABLE `sbtest` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `k` int(10) unsigned NOT NULL DEFAULT '0',
      `c` char(120) NOT NULL DEFAULT '',
      `pad` char(60) NOT NULL DEFAULT '',
      PRIMARY KEY (`id`),
      KEY `k` (`k`)
    ) ENGINE=InnoDB AUTO_INCREMENT=8574 DEFAULT CHARSET=latin1

Note:

The tests were run against 1, 2, 4, 8, 16 and 32 threads; each throughput/response time value for each thread count and test case is the average of ten (10) executions.

General log Disabled:

To make sure that the general query log is disabled:

mysql> show global variables like 'general_log';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| general_log   | OFF   |
+---------------+-------+

Testing results:

General log Enabled:

The general query log is controlled by a dynamic variable, which means it can be enabled or disabled ONLINE without restarting MySQL (since MySQL 5.1):

mysql> SET GLOBAL general_log=ON;

We can choose the output for this log to be either in a log file (by default) or in a MySQL table (mysql.general_log).

What benefits might we get from storing the log output in a table rather than in a file?

  • We can use the normal SQL statements to access the log contents to get information about specific criteria (e.g. using WHERE condition) and this is a little harder when dealing with files.
  • The log contents could be accessed remotely if someone can connect to the MySQL server.
  • Standard format for the log entries.
  • If the CSV engine is used for the log table, it will be easy to import the CSV file into spreadsheets.
  • It is easy to expire the logs by simply truncating the log table with TRUNCATE TABLE.
  • Log rotation is possible by using the RENAME TABLE statement (see the sketch after this list).
  • Log entries are not replicated to the slave because they are not written to the binary logs.
  • mysqldump does not include the log table contents (general_log or slow_log) in the backup even if the --all-databases backup option is used.
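
The following is a minimal sketch of the querying, expiry and rotation points above; the table name general_log_backup and the example WHERE conditions are only illustrative:

-- Query the log with normal SQL, e.g. all queries received today that mention table t1:
SELECT event_time, user_host, argument
  FROM mysql.general_log
 WHERE command_type = 'Query'
   AND argument LIKE '%t1%'
   AND event_time >= CURDATE();

-- Expire the log contents:
TRUNCATE TABLE mysql.general_log;

-- Rotate the log table by swapping in an empty copy:
USE mysql;
CREATE TABLE general_log_new LIKE general_log;
RENAME TABLE general_log TO general_log_backup, general_log_new TO general_log;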

So, let's check the performance impact of each log output destination.

Output is FILE:

To check the output destination of the general log, the following command should be used:

mysql> show global variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+

Testing results:

Output is TABLE (CSV table):

To change the output destination of the general log from file to table (CSV by default), the following command should be used:

mysql> SET GLOBAL log_output='TABLE';
mysql> show global variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | TABLE |
+---------------+-------+

Testing results:

Output is TABLE (MyISAM table):

The performance issues we faced in the previous case may be due to the nature of the CSV storage engine. Is it possible to change the table engine of the general_log table?

The answer is yes, but unfortunately we are restricted to the MyISAM storage engine; no engines other than CSV or MyISAM are allowed. Check this link for more information.

To alter the log table, you must first disable the logging:

mysql> alter table mysql.general_log engine=MYISAM;
ERROR 1580 (HY000): You cannot 'ALTER' a log table if logging is enabled
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log engine=MYISAM;
mysql> SET GLOBAL general_log=ON;

Testing results:

Output is TABLE (MyISAM table with some structures changes):

In general, to make any SQL query work faster, we might need to optimize the table structure, add indexes, re-write the query, .. etc.

The following is the general log table structure:

mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'

Let's check what we can do to optimize the general_log table structure (other suggestions are welcome):

  • Can we create partitions on that table which might boost our search?
    Although this is a normal MyISAM table, partitioning IS NOT allowed for the log tables (by the way, partitioning is not permitted for CSV tables either).
  • Can we change the datatype of the `user_host` column from mediumtext to e.g. varchar(100)? (The max length of this column's data on my machine didn't exceed 50 characters)
    While this is accepted syntax-wise, no logs will be stored in the table afterwards and the following errors will be printed in the error log file:

    2014-03-06 18:44:21 6987 [ERROR] Failed to write to mysql.general_log:
    2014-03-06 18:44:23 6987 [ERROR] Incorrect definition of table mysql.general_log: expected column 'user_host' at position 1 to have type mediumtext, found type varchar(100).
  • What about creating indexes on the columns we will use for most of our searches (`event_time` and `argument`)? Let's try that!
    • Add index on `event_time` column:
      mysql> SET GLOBAL general_log=OFF;
      mysql> alter table mysql.general_log add index ev_tm_idx(`event_time`);
      mysql> show create table mysql.general_log\G
      CREATE TABLE `general_log` (
        `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
        `user_host` mediumtext NOT NULL,
        `thread_id` bigint(21) unsigned NOT NULL,
        `server_id` int(10) unsigned NOT NULL,
        `command_type` varchar(64) NOT NULL,
        `argument` mediumtext NOT NULL,
        KEY `ev_tm_idx` (`event_time`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
      mysql> SET GLOBAL general_log=ON;

      Testing results:

    • Add FULLTEXT index on `argument` column:
      mysql> SET GLOBAL general_log=OFF;
      mysql> alter table mysql.general_log add fulltext index (`argument`);
      mysql> show create table mysql.general_log\G
      CREATE TABLE `general_log` (
        `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
        `user_host` mediumtext NOT NULL,
        `thread_id` bigint(21) unsigned NOT NULL,
        `server_id` int(10) unsigned NOT NULL,
        `command_type` varchar(64) NOT NULL,
        `argument` mediumtext NOT NULL,
        KEY `ev_tm_idx` (`event_time`),
        FULLTEXT KEY `argument` (`argument`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
      mysql> SET GLOBAL general_log=ON;

      Testing results:


To make it clearer, the following combines all results in one graph, followed by a response time comparison:





The raw results in Transactions / Sec might be useful:


Threads                                               1         2         4         8        16        32
General log disabled                                383.996   814.759  1421.288  1674.733  1414.985  1071.189
General log enabled (File)                          281.642   521.39   1230.743  1406.127  1095.896   923.986
General log enabled (CSV Table)                     231.659   447.173   787.578   507.846   426.324   439.992
General log enabled (MyISAM Table)                  249.47    536.379   933.304   532.912   476.454   454.015
General log enabled (MyISAM Table + index)          238.508   430.05    875.209   465.464   465.464   395.063
General log enabled (MyISAM Table + Fulltext index) 157.436   236.156   210.968   212.273   218.617   220.701
Conclusion:
  • The best MySQL performance - among all the above test cases - is achieved by disabling the general query log. E.g. if we compare the above results for the case of 4 concurrent threads (the highest value for most scenarios), we find that:
    • Using the general log enabled (general_log = ON) and the log destination is file (log_output = FILE) decreased the throughput by 13.4% and increased the response time by 17.5%.
    • Using the general log enabled and the log destination is CSV table decreased the throughput by 44.6% and increased the response time by 90%.
    • Using the general log enabled and the log destination is MyISAM table decreased the throughput by 34.3% and increased the response time by 59%.
    • Using the general log enabled and the log destination is MyISAM having an index added on column `event_time` decreased the throughput by 38.4% and increased the response time by 73%.
    • Using the general log enabled and the log destination is MyISAM having an index added on column `event_time` and FULLTEXT index added on column `argument` decreased the throughput by 85% and increased the response time by 542%.
  • Although using a table as the log output destination has many benefits - as described above - it has a greater negative impact on MySQL performance than a log file.
  • Increasing the number of concurrently running threads - in the case of log_output=TABLE - increases contention on the general_log table, which is governed by the table-level locking of the MyISAM or CSV engines.
  • As with any other MySQL table, the more rows inserted into the log table, the greater the negative performance impact.
  • Although mysqldump does not include the log table contents in the backup, this is not the case for a full physical backup taken with Xtrabackup or any other physical backup based tool.
  • Finally, it is preferable to enable the general query log only when it is really needed, and it is not recommended to enable it on a production system. It can be enabled (dynamically) for a while and should be disabled again once we have found what we are looking for.

Online DDL vs pt-online-schema-change

Wed, 2014-02-12 18:14

One of the most expensive database operations is performing Data Definition Language (DDL, e.g. CREATE, DROP, ALTER, etc.) statements, especially ALTER statements, because MySQL blocks the entire table for both reads and writes while modifying the table.

For huge tables, this might take hours, which affects the application; therefore, good planning is required for such operations in order to avoid making these changes during peak times. For those who run 24/7 services or have a limited maintenance window, DDL on huge tables is a real nightmare.

Percona developed a very good tool called pt-online-schema-change (version 2.2.6 at the time of writing this article) to perform such operations online without blocking/affecting the application, so read/write operations on the table being changed remain available.
Also MySQL made some enhancements for DDL statements and introduced the Online DDL feature in MySQL 5.6.

In this article, I will give an overview of both approaches (Online DDL & pt-online-schema-change) along with an example, and discuss which one should be used in different scenarios.

pt-online-schema-change Overview

This tool is developed by Percona to alter tables without locking them during the ALTER operation.
Simply put, this tool creates a new empty table with the same structure as the original table plus the needed change, copies the data from the original table in small chunks to the new table, drops the original table and then renames the new table to the original name. During the copy process, all new changes to the original table are applied to the new one, because triggers created on the original table ensure that every new change is also applied to the new table.

For more information about pt-online-schema-change tool, check out the manual documentation.
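
For illustration only, the copy/trigger mechanism looks conceptually like the following sketch. This is NOT the exact DDL the tool generates (pt-online-schema-change builds its own statements and chunk boundaries), and the table test.t(id PRIMARY KEY, name) is just an assumed example:

-- New table with the desired change applied:
CREATE TABLE test._t_new LIKE test.t;
ALTER TABLE test._t_new ADD INDEX name_idx (name);

-- Triggers keep the new table in sync while existing rows are copied:
CREATE TRIGGER t_ins AFTER INSERT ON test.t FOR EACH ROW
  REPLACE INTO test._t_new (id, name) VALUES (NEW.id, NEW.name);
CREATE TRIGGER t_upd AFTER UPDATE ON test.t FOR EACH ROW
  REPLACE INTO test._t_new (id, name) VALUES (NEW.id, NEW.name);
CREATE TRIGGER t_del AFTER DELETE ON test.t FOR EACH ROW
  DELETE IGNORE FROM test._t_new WHERE id = OLD.id;

-- Existing rows are copied in small chunks, e.g.:
INSERT LOW_PRIORITY IGNORE INTO test._t_new (id, name)
  SELECT id, name FROM test.t WHERE id BETWEEN 1 AND 1000;

-- Finally the tables are swapped and the old table (with its triggers) is dropped:
RENAME TABLE test.t TO test._t_old, test._t_new TO test.t;
DROP TABLE test._t_old;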

Example

Altering a table called "test.test1" by adding an index (name_idx) on column "name":

[root@gcservera ~]# pt-online-schema-change --execute --alter "add index name_idx (name)" D=test,t=test1,h=localhost
Operation, tries, wait:
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Altering `test`.`test1`...
Creating new table...
Created new table test._test1_new OK.
Altering new table...
Altered `test`.`_test1_new` OK.
2014-02-09T15:33:27 Creating triggers...
2014-02-09T15:33:27 Created triggers OK.
2014-02-09T15:33:27 Copying approximately 1 rows...
2014-02-09T15:33:27 Copied rows OK.
2014-02-09T15:33:27 Swapping tables...
2014-02-09T15:33:27 Swapped original and new tables OK.
2014-02-09T15:33:27 Dropping old table...
2014-02-09T15:33:27 Dropped old table `test`.`_test1_old` OK.
2014-02-09T15:33:27 Dropping triggers...
2014-02-09T15:33:27 Dropped triggers OK.
Successfully altered `test`.`test1`.

Note:

The output describes nicely all the steps that the tool performs in the background.

Limitations of pt-online-schema-change
  • A PRIMARY KEY or a unique index should be defined for the table before using this tool because it is required for the DELETE trigger.
  • Not supported if the table already has triggers defined.
  • The tool becomes a little more complicated if the table has foreign key constraints; in that case the additional option --alter-foreign-keys-method should be used.
  • Also, because of the foreign keys, object names might be changed (index names, etc.).
  • In a Galera Cluster environment, altering MyISAM tables is not supported and the system variable "wsrep_OSU_method" must be set to "TOI" (total order isolation).
Online DDL Overview

In MySQL 5.5, and in 5.1 with the InnoDB plugin, a new feature known as Fast Index Creation was introduced to avoid copying the table's data - when adding or removing secondary indexes - using the optimized CREATE INDEX and DROP INDEX statements.

In MySQL 5.6, the Online DDL method was introduced to allow more changes to be made on the table while accessing and writing to the table being changed is available.

The Online DDL syntax is exactly the same as a normal ALTER statement, with two additional parameters:

ALGORITHM:
  • INPLACE: the table change will be made in-place without rebuilding the entire table (in most cases, no copying data to temporary table is required).
  • COPY: copying data to a temporary table, rebuilding the table and reconstructing the secondary indexes will be made (equivalent to the traditional method).
LOCK:
  • NONE: Read and write operations are allowed during the altering process.
  • SHARED: Only read operations are allowed during the altering operations (DML is not allowed).
  • EXCLUSIVE: The entire table will be locked for both reading and writing (neither select nor DML are allowed).

The Online DDL is perfectly explained in the online manual documentation, you can check it out here for more information.

Example

Altering a table called "test.test2" by adding an index (name_idx) on column "name":

mysql> alter table test2
    ->   add index name_idx (name), algorithm=inplace, lock=none;
Query OK, 0 rows affected (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 0

Limitations of Online DDL
  • Works only with InnoDB (syntax-wise it can be used with other storage engines "like MyISAM", but only "algorithm=copy" is allowed, which is equivalent to the traditional method).
  • Regardless of the locking used (none, shared or exclusive), an exclusive lock on the table is required for a brief period at the beginning and at the end of the process.
  • foreign_key_checks should be disabled when adding/dropping foreign keys to avoid the table copying behaviour.
  • Some ALTER operations still require table copying or table locking in order to make the change (the old behaviour). For more details on which table changes require table copying or table locking, check out this manual page.
  • LOCK=NONE is not allowed in the ALTER TABLE statement if there are ON...CASCADE or ON...SET NULL constraints on the table.
  • While Online DDL is replicated on the slaves the same way as on the master (with LOCK=NONE no table locking takes place on the slaves during the ALTER execution), the replication stream itself is blocked, because the ALTER is replayed in a single thread on the replicas, which causes slave lag.
Comparison results

The following is a comparison between Online DDL and pt-online-schema-change for some ALTER operations applied to a table containing 1,078,880 rows:


                                         Online DDL                                    pt-online-schema-change
Change Operation                         Row(s) affected  Is table locked?  Time (sec)  Row(s) affected  Is table locked?  Time (sec)
Add Index                                0                No                3.76        All rows         No                38.12
Drop Index                               0                No                0.34        All rows         No                36.04
Add Column                               0                No                27.61       All rows         No                37.21
Rename Column                            0                No                0.06        All rows         No                34.16
Rename Column + change its data type     All rows         Yes               30.21       All rows         No                34.23
Drop Column                              0                No                22.41       All rows         No                31.57
Change table ENGINE                      All rows         Yes               25.30       All rows         No                35.54

Which method should be used?

Now the question is, which method should we use to perform alter table statements?

While pt-online-schema-change allows read and write operations on the table being altered, it still copies the table's data to a temporary table in the background, which adds overhead to the MySQL server. So basically, we should use pt-online-schema-change when Online DDL will not work efficiently. In other words, if Online DDL would require copying data to a temporary table (algorithm=copy) and would block the table for a long time (lock=exclusive), or when altering huge tables in a replication environment, then we should use the pt-online-schema-change tool.
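
One practical way to decide is to let the server answer: when ALGORITHM and LOCK are specified explicitly, MySQL 5.6 refuses the statement instead of silently falling back to a copying or locking ALTER. The following sketch reuses the example table and index names from above:

-- Request a fully online change; if it cannot be done in-place and without
-- locking, MySQL returns an error (e.g. ER_ALTER_OPERATION_NOT_SUPPORTED)
-- rather than silently using a copying/locking algorithm.
ALTER TABLE test.test2
  ADD INDEX name_idx (name),
  ALGORITHM=INPLACE, LOCK=NONE;

-- If the statement is rejected, that is the signal to run the change through
-- pt-online-schema-change instead.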

Online DDL vs pt-online-schema-change

Wed, 2014-02-12 18:14

One of the most expensive database operations is performing Data Definition Language (DDL, e.g. CREATE, DROP, ALTER, etc.) statements, specially, the ALTER statements because MySQL blocks the entire table for both reads and writes while modifying the table.

For the huge tables, this might take hours to get the table changed which affects the application, so that, a good planning is required for such operations in order to avoid doing these changes during the peak times. For those people who have 24/7 services or limited maintenance window, DDL on huge tables is a really nightmare.

Percona developed a very good tool called pt-online-schema-change (version 2.2.6 at the time of writing this article) to perform such operations online without blocking/affecting the application and read/write operations to the table being changed is available.
Also MySQL made some enhancements for DDL statements and introduced the Online DDL feature in MySQL 5.6.

In this article, I will talk about an overview of both ways (Online DDL & pt-online-schema-change) alongside with an example and which one of them should be used in different scenarios.

pt-online-schema-change Overview

This tool is developed by Percona to alter tables without locking them during the ALTER operation.
Simply, this tool creates a new empty table like the original table with the needed structure change, copy the data from the original table in small chunks to the new table, drop the original table and then rename the new table to the original name. During the copy process all new changes to the original table are being applied to the new one because a trigger is created on the original table which ensure that all new changes will be applied on the new table.

For more information about pt-online-schema-change tool, check out the manual documentation.

Example

Altering a table called "test.test1" by adding an index (name_idx) on column "name":

[root@gcservera ~]# pt-online-schema-change --execute --alter "add index name_idx (name)" D=test,t=test1,h=localhost Operation, tries, wait: copy_rows, 10, 0.25 create_triggers, 10, 1 drop_triggers, 10, 1 swap_tables, 10, 1 update_foreign_keys, 10, 1 Altering `test`.`test1`... Creating new table... Created new table test._test1_new OK. Altering new table... Altered `test`.`_test1_new` OK. 2014-02-09T15:33:27 Creating triggers... 2014-02-09T15:33:27 Created triggers OK. 2014-02-09T15:33:27 Copying approximately 1 rows... 2014-02-09T15:33:27 Copied rows OK. 2014-02-09T15:33:27 Swapping tables... 2014-02-09T15:33:27 Swapped original and new tables OK. 2014-02-09T15:33:27 Dropping old table... 2014-02-09T15:33:27 Dropped old table `test`.`_test1_old` OK. 2014-02-09T15:33:27 Dropping triggers... 2014-02-09T15:33:27 Dropped triggers OK. Successfully altered `test`.`test1`.

Note:

The output is perfectly describing all steps that the tool is doing in the background.

Limitations of pt-online-schema-change
  • A PRIMARY KEY or a unique index should be defined for the table before using this tool because it is required for the DELETE trigger.
  • Not supported if the table has already triggers defined.
  • The tool become complicate a little if the table has a foreign key constraint and an additional option --alter-foreign-keys-method should be used.
  • Also because of the foreign keys, the object names might be changed (indexes names , .. etc).
  • In Galera Cluster environment, altering MyISAM tables is not supported and the system variable "wsrep_OSU_method" must be set to "TOI" (total order isolation).
Online DDL Overview

In MySQL 5.5 and 5.1 with the InnoDB plugin, a new feature known as Fast Index Creation was introduced to avoid copying the tables data - when adding or removing secondary indexes - using the optimized CREATE INDEX and DROP INDEX statements.

In MySQL 5.6, the Online DDL method was introduced to allow more changes to be made on the table while accessing and writing to the table being changed is available.

The Online DDL syntax is exactly the same like the normal alter statement after specifying two parameters:

ALGORITHM:
  • INPLACE: the table change will be made in-place without rebuilding the entire table (in most cases, no copying data to temporary table is required).
  • COPY: copying data to a temporary table, rebuilding the table and reconstructing the secondary indexes will be made (equivalent to the traditional method).
LOCK:
  • NONE: Read and write operations are allowed during the altering process.
  • SHARED: Only read operations are allowed during the altering operations (DML is not allowed).
  • EXCLUSIVE: The entire table will be locked for both reading and writing (neither select nor DML are allowed).

The Online DDL is perfectly explained in the online manual documentation, you can check it out here for more information.

Example

Altering a table called "test.test2" by adding an index (name_idx) on column "name":

mysql> alter table test2 -> add index name_idx (name),algorithm=inplace, lock=none; Query OK, 0 rows affected (0.03 sec) Records: 0 Duplicates: 0 Warnings: 0 Limitations of Online DDL
  • Works only with InnoDB (syntax wise it could be used with other storage engines "like MyISAM" but only "algorithm=copy" is allowed which is equivalent to the traditional method).
  • Regardless of the locking used (none,shared or exclusive) a brief period at the beginning and at the end of the process is requiring an exclusive lock on the table.
  • foreign_key_checks should be disabled when adding/dropping foreign keys to avoid table copying behavior.
  • Still some alter operations require table copying or table locking in order to make the change (the old behavior). For more details on which table change require table-copying or table locking, check out this manual page.
  • LOCK=NONE is not allowed in the alter table statement if there are ON...CASCADE or ON...SET NULL constraints on the table.
  • While the Online DDL will be replicated on the slaves the same like the master (if LOCK=NONE no table-locking will take place on the slaves during the alter execution) but the replication itself will be blocked as the replay process executes in a single thread on the replicas which will cause slave lagging problem.
Comparison results

The following is a comparison results between Online DDL and pt-online-schema-change for some alter operations applied on a table contains 1,078,880 rows:


Online DDLpt-online-schema-changeChange OperationRow(s) affectedIs table locked?Time (sec)Row(s) affectedIs table locked?Time (sec)Add Index0No3.76All rowsNo38.12Drop Index0No0.34All rowsNo36.04Add Column0No27.61All rowsNo37.21Rename Column0No0.06All rowsNo34.16Rename Column + change its data typeAll rowsYes30.21All rowsNo34.23Drop Column0No22.41All rowsNo31.57Change table ENGINEAll rowsYes25.30All rowsNo35.54Which method should be used?

Now the question is: which method should we use to perform ALTER TABLE statements?

While pt-online-schema-change allows read and write operations on the table being altered, it still copies the table's data to a temporary table in the background, which adds overhead to the MySQL server. So basically, we should use pt-online-schema-change only when Online DDL will not work efficiently. In other words, when Online DDL would require copying data to a temporary table (ALGORITHM=COPY) and the table would be blocked for a long time (LOCK=EXCLUSIVE), or when altering huge tables in a replication environment, we should use the pt-online-schema-change tool.

Workbench starting/stopping multiple instance set-ups with myenv

Sat, 2013-11-23 16:03
Introduction

MySQL Workbench is a very good, free GUI tool provided by Oracle to manage MySQL administration and development tasks. Opening many MySQL connections (to the same or different instances, remote or local MySQL servers) at the same time is one of its main features. Performing SQL statements on the different connections of multiple instances works fine, but some people ask: can MySQL Workbench also be used to start and stop multiple MySQL instances? If yes, how should it be configured for such a task, and does that conflict with the MyEnv tool, if it is installed?
Yes, MySQL Workbench can be configured to start and stop multiple MySQL instances (local or remote), and it does not conflict with the MyEnv tool.

In this article, I will describe how to configure MySQL Workbench to start and stop multiple MySQL instances, benefiting from the MyEnv scripts for this purpose.

Prerequisites

System information and installed packages:

  • Operating System: Ubuntu 12.04 (64 bit).
  • MySQL Server: Any MySQL version (I used MySQL 5.5 tarballs).
  • Number of MySQL Instances: Two instances are installed (mysql1 & mysql2).
  • MySQL Workbench: Version 6.0.
  • MyEnv: Version 1.0.1.
What is MyEnv?

MyEnv is a set of scripts to comfortably run multiple MySQL, Percona Server or MariaDB database instances on the same server. You can even run multiple database instances with different binary versions. If you have MySQL multiple-instance setups, you really should try out MyEnv.

I will not go into MyEnv's features and benefits here; rather, I'd like to mention that if you're using MyEnv and want to use MySQL Workbench at the same time, you will not face any conflict between them, and you can manage your MySQL instances with either MyEnv or MySQL Workbench. Moreover, you can use the MyEnv scripts to configure MySQL Workbench for starting/stopping multiple instances in an easy way!

For more information about the MyEnv tool, check out the myenv page on our website.


MySQL Workbench configuration
Add MySQL connections to MySQL Workbench
  • Choose a connection name for the 1st instance "mysql1" and specify the connection string:

    If you didn't adjust "Configure Server Management" in this step - at the bottom left of the previous screen - you can open MySQL connections and perform SQL queries normally on this instance, but you can neither edit the instance configuration parameters nor start/stop it.
    By the way, you can adjust it at any time later, which is what I did in this example.

  • Add another connection for the 2nd instance "mysql2", in the same way as for "mysql1".
Start/Stop instance configurations

To configure MySQL Workbench to start/stop an instance, we need the relevant start and stop commands, because Workbench simply executes them as they would be executed in the system shell. In this case, we can benefit from the MyEnv scripts for that purpose, using the following command:

$MYENV_BASE/bin/database.php $MYENV_DATABASE start|stop
Where $MYENV_BASE is the MyEnv base directory ("/opt/myenv" in this example) and $MYENV_DATABASE is the instance name in MyEnv ("mysqld1" and "mysqld2" for mysql1 and mysql2, respectively, in this example).
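For example, based on the paths and instance names used in this article, the three commands for the "mysql1" instance could look like the following sketch (the sudo rule that allows running the MyEnv script as the mysql user is an assumption and has to exist on your system):

# Start command
sudo -u mysql /opt/myenv/bin/database.php mysqld1 start
# Stop command
sudo -u mysql /opt/myenv/bin/database.php mysqld1 stop
# Status command (Workbench checks its output to decide whether the instance is running)
cat /opt/mysql1/data/*.pid 2>/dev/null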

Now, we can use the following window to modify the system type, the configuration file path, and the start, stop and status commands to match each instance's configuration:



Now MySQL Workbench should be able to start and stop the configured MySQL instances.


Notes:

  • MyEnv doesn't allow any user other than the mysql user to start a MySQL instance (not even the root user), so the mysql OS user should be used to execute those commands, and you might need to grant it the required sudo permissions.
  • The Start/Stop Server button in MySQL Workbench depends on the output of the configured status command ("cat /opt/mysql1/data/*.pid 2>/dev/null" to check the mysql1 instance status). The button label changes so that the appropriate command is executed (e.g. if the instance is not running, the label changes to "Start Server" and the start command is executed when the button is clicked, and vice versa). So you should make sure that this command returns the expected results, otherwise Workbench won't be able to manage the instance.
  • It doesn't matter how MySQL was installed (RPM, tarballs or from source), the concept is the same: you just need to provide the MyEnv start and stop commands along with the status command, and MySQL Workbench will work.
  • If MyEnv is not installed, you can also configure MySQL Workbench to start/stop multiple instances by providing normal start/stop instance commands in the "Manage Server Connections" window, but those need to be prepared first.


Xtrabackup in a nutshell

Wed, 2013-11-06 14:23
Introduction

No one can deny that one of the most important daily tasks for DBAs is performing backup and restore operations. We don't need them only when we want to add a new replication slave, implement disaster recovery procedures or prepare a testing or staging server for the running production system; even when we are going to change the database schema to improve performance, it is recommended to have a fresh backup copy before making any live changes. So if backup and restore operations cannot be handled smoothly, we are going to face many troubles in our daily work. And when we talk about backup and restore operations, the Xtrabackup tool comes up immediately.

Xtrabackup is a free, open source tool developed by Percona to perform physical backup and restore operations, which is much faster than logical backup and restore using the MySQL utilities (mysqldump and mysql), and it has many other advantages.

The Xtrabackup tool has many useful options and features, but in this article I'll only go through how to use it to perform simple full, incremental and partial backups and restores, the advantages and disadvantages of each method, and some important tips.

For more information about the Xtrabackup tool, you can browse the manual documentation.

Prerequisites
Full Backup

If you want a full backup of your entire database system with the shortest/fastest backup and restore time, this method will be very useful for you. Compared to a full logical database backup using the mysqldump and mysql utilities (a very long time to back up and more than double that time to restore), taking a full physical backup using the Xtrabackup tool will make your life much easier.

Below are the steps needed to make a full physical database backup using the XtraBackup tool:

Create Backup

A simple Xtrabackup command to back up all databases looks something like this:

[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp /backup/dir/path/full-backup
. . .
innobackupex: completed OK!

A timestamped folder (e.g. "2013-11-06_00-00-00") would have been created to contain all backup files if we had not used the option "--no-timestamp" in the above command (I didn't use the timestamped folders here just to keep the names simple for the reader; otherwise, they are very useful when using automated backup scripts).

The Xtrabackup tool has now created the backup files under the folder "full-backup", plus some extra files like the "xtrabackup_checkpoints" file, which contains some information (useful for incremental backups) like:

  • backup_type = full-backuped : which indicates the backup type "full backup".
  • from_lsn = 0 : which indicates the log sequence number where the backup process started from (0 means from the beginning).
  • to_lsn = 3768762 : which indicates the log sequence number where the backup process ended at.

Another important file is "xtrabackup_binlog_info" which is very useful in replication setups:

[root@ ~]# cat xtrabackup_binlog_info
mysql-bin.000027    191

WHERE:

  • mysql-bin.000027: the binary log file name of the master when the backup was created.
  • 191: the binary log position of the backup.
Prepare Full Backup

The backed-up files are not ready to be restored yet; we must prepare them first as follows:

[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log /backup/dir/path/full-backup
. . .
innobackupex: completed OK!

Now, the full backup is ready to be restored ...

Restore Full Backup

To get the full backup restored, the MySQL instance should be stopped first and then one of the following two procedures should be done:

  • Using the copy-back option:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --copy-back /backup/dir/path/full-backup
    . . .
    innobackupex: completed OK!

    In this method, the Xtrabackup tool will copy all files under the backup folder (full-backup) to the MySQL datadir, which must be specified in the my.cnf file; otherwise, it wouldn't know where the data should be placed.

  • Using the operating system copy or move commands:

    If you don't want to keep the backup files on your local system (e.g. you have another copy on an external server), the move command is a very fast way to get your backup restored:

    [root@ ~]# mv /backup/dir/path/full-backup /var/lib/mysql
  • As the user who moves/copies the files into the MySQL datadir is not the mysql user, you should make sure that the mysql user gets the right permissions on its datadir. Also, the path "/var/lib/mysql" should be replaced with the actual MySQL datadir if it is set to a different path.

[root@ ~]# chown -R mysql:mysql /var/lib/mysql

After moving/copying the backup files into MySQL datadir, you are free to start the MySQL instance again.
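Putting these steps together, a full restore could look like the following sketch (the service commands and the datadir path are assumptions based on the examples in this article):

[root@ ~]# service mysql stop
[root@ ~]# rm -rf /var/lib/mysql/*                               # the datadir must be empty before the restore
[root@ ~]# innobackupex --copy-back /backup/dir/path/full-backup
[root@ ~]# chown -R mysql:mysql /var/lib/mysql
[root@ ~]# service mysql start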

Prepare slave from full backup

Preparing a slave using Xtrabackup is a pretty easy and straightforward process:

  • Restore the full backup as explained above.
  • Check the binary logs information of the backup:
    [root@ ~]# cat xtrabackup_binlog_info
    mysql-bin.000027    191
  • Execute the CHANGE MASTER TO command using the above info and start the slave:
    SQL> CHANGE MASTER TO
      -> MASTER_HOST='master_ip',
      -> MASTER_PORT=master_port,
      -> MASTER_USER='slave_user_name',
      -> MASTER_PASSWORD='slave_password',
      -> MASTER_LOG_FILE='mysql-bin.000027',   ## taken from xtrabackup_binlog_info
      -> MASTER_LOG_POS=191;                   ## taken from xtrabackup_binlog_info
    SQL> START SLAVE;

For more information on how to set up MySQL Replication, check out this nice manual link.

Prepare GTID slave from full backup

GTID is supported in Xtrabackup starting from version 2.1.0. To restore a GTID slave server, GTID_MODE should be enabled on the master server before creating its backup; otherwise, no GTID values will be included in the "xtrabackup_binlog_info" file under the backup directory. Then, the following steps should be done:

  • Restore the full backup normally as explained above.
  • Check the GTID value of the backup:
    [root@ ~]# cat xtrabackup_binlog_info
    mysql-bin.000027    191    b9b4712a-df64-11e3-b391-60672090eb04:1-12
  • Set the variable GTID_PURGED with the GTID value of the backup:
    SQL> SET GLOBAL GTID_PURGED="b9b4712a-df64-11e3-b391-60672090eb04:1-12";
  • Execute the auto position CHANGE MASTER TO command and start the slave:
    SQL> CHANGE MASTER TO
      -> MASTER_HOST='master_ip',
      -> MASTER_PORT=master_port,
      -> MASTER_USER='slave_user_name',
      -> MASTER_PASSWORD='slave_password',
      -> MASTER_AUTO_POSITION = 1;
    SQL> START SLAVE;

For more information on how to set up Transaction-based Replication in MySQL, check out this link.

Advantages / Disadvantages
  • Advantages:
    • Fast, simple and easy way to get your full database backed up and restored.
    • All Xtrabackup tool features (like streaming: move the backed up files directly to a remote server) are supported in the full backup method.
    • Simple way to introduce a new slave to the master.
  • Disadvantages:
    • We have to replace the entire MySQL datadir with the new one (In other words, the datadir folder should be empty/removed before the restore process).
    • We can't extract a single database or a single table from the whole backup (unless it's a MyISAM table), which means that you have to take it all or leave it all.
Important Hints
  • The message "innobackupex: completed OK!" should be printed at the end of every Xtrabackup command; otherwise, the command (backup, prepare or restore) did not complete successfully.
  • The ib_logfile files must have the same size on both the source and the destination server. If not, either remove them from the backup folder (which will be restored) before starting the MySQL instance - MySQL will then create new ones for you - or change the log file size in the destination server's configuration file to match the size in the backup before starting the MySQL instance.
  • The MySQL user used by the Xtrabackup tool should have at least the following privileges: RELOAD, LOCK TABLES and REPLICATION CLIENT.
  • To prepare a new slave from another slave, just add the two options ("--slave-info" and "--safe-slave-backup") to the backup command, and use the information in the file "xtrabackup_slave_info" under the backup folder to issue the CHANGE MASTER TO command on the new slave after finishing the restore.
  • To accelerate the preparation process of your backup, just add the option "--use-memory" to the prepare command in order to allocate more memory (Xtrabackup will use the specified amount as an internal innodb_buffer_pool_size for the prepare process), for example:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --use-memory=512M /backup/dir/path/full-backup
    . . .
    innobackupex: completed OK!
  • The preparation process consists of two steps: replaying the committed transactions and rolling back the uncommitted transactions. Using the --apply-log option alone in the preparation command will do both steps for you.
  • The backup folder "/backup/dir/path/full-backup" SHOULD NOT be created before executing the backup command, because Xtrabackup will create that folder for you, and it will fail to continue if that folder already exists.
Incremental Backup

When you have a very large database system, you need enough storage to store your database backups, and if you want to perform a daily backup then your mission becomes even more difficult. In such cases, the incremental database backup method is very useful: it stores only the database changes (the delta) made after the physical full backup, with the minimum storage space required and in a fast way. Hence, you can perform daily backup operations on your database system without needing a large amount of storage.

The following steps describe a simple way to perform your physical incremental database backup using XtraBackup tool:

Create Incremental Backup

To perform an incremental backup, we first perform a full backup - the same way we did in the previous section - to serve as the base backup for the upcoming incremental backups.

Creating the full backup (Base Backup):

[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp /backup/dir/path/full-backup
. . .
innobackupex: completed OK!

The "xtrabackup_checkpoints" file contents will be something like:

  • backup_type = full-backuped : which indicates the backup type "full backup".
  • from_lsn = 0 : which indicates the log sequence number where the backup process started from (0 means from the beginning).
  • to_lsn = 3768762 : which indicates the log sequence number where the backup process ended at.

Creating the first incremental backup:

[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --incremental /backup/dir/path/inc1 --incremental-basedir=/backup/dir/path/full-backup
. . .
innobackupex: completed OK!

We told the Xtrabackup tool to perform an incremental backup by adding the option "--incremental", and by specifying the full-backup path as the basedir, we told it from which backup it should start tracking the database changes.

The "xtrabackup_checkpoints" file contents will be something like:

  • backup_type = incremental : which indicates the backup type "incremental backup".
  • from_lsn = 3768762 : which indicates the log sequence number where the backup process started from (the same LSN as the previous full backup ended at).
  • to_lsn = 4908762 : which indicates the log sequence number where the backup process ended at.

Creating the second incremental backup:

[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --incremental /backup/dir/path/inc2 --incremental-basedir=/backup/dir/path/inc1
. . .
innobackupex: completed OK!

We told the Xtrabackup tool to perform an incremental backup by adding the option "--incremental", and by specifying the path of the 1st incremental backup as the basedir, we told it to start tracking the database changes since the last incremental backup (not since the full backup).

The "xtrabackup_checkpoints" file contents will be something like:

  • backup_type = incremental : which indicates the backup type "incremental backup".
  • from_lsn = 4908762 : which indicates the log sequence number where the backup process started from (the same LSN as the 1st incremental backup ended at).
  • to_lsn = 6508762 : which indicates the log sequence number where the backup process ended at.

Note: We can create as many incremental backups as we want by using the same procedure above.
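This procedure lends itself to a small wrapper script. The following is a minimal sketch, assuming the base full backup already exists under /backup/dir/path/full-backup and that the paths and credentials are adjusted to your system:

#!/bin/bash
# Minimal sketch of a daily incremental backup wrapper (paths and credentials are examples).
BACKUP_ROOT=/backup/dir/path
BASE=$BACKUP_ROOT/full-backup
TARGET=$BACKUP_ROOT/inc-$(date +%F)            # must not exist yet, Xtrabackup creates it

# Use the most recent incremental backup as the basedir, or the full backup if none exists yet.
LAST=$(ls -1d $BACKUP_ROOT/inc-* 2>/dev/null | sort | tail -n 1)
[ -z "$LAST" ] && LAST=$BASE

innobackupex --user=db_user --password='db_password' --no-timestamp \
    --incremental "$TARGET" --incremental-basedir="$LAST"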

Prepare Incremental Backup

As mentioned earlier in the article, the preparation process consists of two steps (replaying the committed transactions and rolling back the uncommitted transactions), and using the --apply-log option alone will do both of them, as we did for the full backup. For incremental backups, however, we MUST do them separately, as follows:

  • Replay the committed transactions on the base backup (by adding the option "--redo-only"):
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup
    . . .
    innobackupex: completed OK!
  • Replay the committed transactions on the 1st incremental backup:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup --incremental-dir=/backup/dir/path/inc1
    . . .
    innobackupex: completed OK!

    Note: we specified the full backup folder here, because replaying the committed transactions appends all changes from the incremental backup to the full backup.

  • Replay the committed transactions on the 2nd incremental backup:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup --incremental-dir=/backup/dir/path/inc2
    . . .
    innobackupex: completed OK!

    Note: here, we didn't use the 1st incremental backup folder, because all changes from the 1st incremental backup were already appended to the full backup in the previous step.

  • Finally, roll back all uncommitted transactions:
    [root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log /backup/dir/path/full-backup
    . . .
    innobackupex: completed OK!

    Note: as the full backup folder contains all data now (full + 1st & 2nd incremental), there's no need to do this step on the incremental backup folders.

Now, the incremental backup is ready to be restored ...

Restore Incremental Backup

The full backup folder is the only folder to be restored (the incremental backup folders are not needed), as it now contains all data after appending the changes from all incremental backups. We can restore it the same way we restored the full backup.

Advantages / Disadvantages
  • Advantages:
    • Less storage resources needed.
    • Faster than the full backup.
  • Disadvantages:

    In addition to the disadvantages of the full backup, there are other ones:

    • More complicated and harder to implement than the full backup.
    • The incremental backup strategy is based on the log sequence number, which applies only to the XtraDB and InnoDB storage engines; tables of other engines (like MyISAM) are backed up completely (changed + unchanged data) in each incremental backup.
    • If we have many incremental backups, appending all of them might consume time and might be confusing as well.
    • If one of the incremental backups becomes corrupted or unavailable for any reason, we will not be able to apply it - or any incremental backup taken after it - to the full backup.
Important Hints
  • The backup preparation steps above MUST be followed in the same order.
  • If the "--redo-only" option is not used in any of the preparation steps (except the final one), all upcoming incremental backups become useless, as we won't be able to apply them to the base backup anymore.
  • Replaying the committed transactions brings all incremental data and appends it to the full backup; therefore, rolling back the uncommitted transactions should be executed only on the full backup (as it already contains the whole data).
  • In the incremental backups, Xtrabackup generates two files for every table, ".delta" and ".meta" (e.g. test.ibd.delta and test.ibd.meta); the delta file size reflects the changes applied to that table since the last incremental backup.
  • The preparation time of an individual incremental backup depends on how much data changed since the last incremental backup.
  • The preparation time for the full backup is - in most cases - really small compared to the incremental ones, because full backups apply the redo logs only, while incremental backups apply the deltas plus the redo logs. So if the delta files are big, the preparation process will take longer.
  • Full backups are recommended over incremental backups if many changes are applied to the DB, while incremental backups are recommended when only few changes are applied to the DB.
Differential backup

We can use the incremental backup strategy in order to perform differential backups, but we should consider the following points:

  • We always specify the full backup folder as the base backup (for an incremental backup we specify the previous incremental folder as the base backup); see the command sketch after this list.
  • All incremental backups between the differential and the full backup MUST BE ignored when preparing the backup files, because the differential backup already contains all changes since the last full backup.
  • In the backup preparation process, we should treat the last differential backup as the first incremental backup; all incremental backups taken after it can be applied normally.
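A differential backup is therefore taken with exactly the same command as an incremental one; the only difference is that the basedir always points back to the full backup. A sketch using the same example paths as above:

[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --incremental /backup/dir/path/diff1 --incremental-basedir=/backup/dir/path/full-backup
. . .
innobackupex: completed OK!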

Note: Having differential backups in the middle of incremental backups is useful for many reasons, such as:

  • Differential backups reduce the number of backup preparation steps and the time needed, because a differential backup replaces all of its previous incremental backups.
  • Differential backups reduce the chance of losing incremental backups if a corrupted incremental backup sits in the middle, because in this case the differential backup acts as a backup of the previous incremental backups.
Partial Backup

Unlike with MyISAM, taking a physical backup of a single database or table is not possible if the table engine is InnoDB. But by using the partial database backup method of the XtraBackup tool, it becomes possible to take physical backups of InnoDB tables just like MyISAM ones (though with some restrictions).

The following steps describe how to perform a partial database backup using the XtraBackup tool:

Create Partial Backup

A simple Xtrabackup command to back up some databases (or tables) looks something like this:

[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --databases="db1 db2 db3.tbl1" /backup/dir/path/partial-backup
. . .
innobackupex: completed OK!
Prepare Partial Backup

As with the other backup methods, the backed-up files are not ready until we prepare them, this time by adding the "--export" option, as follows:

[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --export /backup/dir/path/partial-backup
. . .
innobackupex: completed OK!

Some errors may appear regarding the InnoDB tables not included in the backup, but that's fine. There will also be a notification about the creation of an ".exp" file for each table, which will be used in the import (restore) process.


Now, the partial backup is ready to be restored ...

Restore Partial Backup

The restore process of a partial backup is quite different from that of the full and incremental backups.

To restore a partial backup, the following steps should be followed:

  • Unlike with the other methods (full and incremental backups), the MySQL instance on the destination server shouldn't be stopped, because we will have to execute some SQL commands.
  • On the destination server, we should create new tables (as many as we have in the partial backup, or as many as we will restore) with the same structure as the ones in the partial backup, and then discard their tablespaces:
    mysql> CREATE TABLE db.tbl1 (...) ENGINE=INNODB;
    mysql> ALTER TABLE db.tbl1 DISCARD TABLESPACE;
  • Copy the ".ibd" and ".exp" files for each table into the corresponding DB directory, then assign the right permissions to the mysql user:
    [root@ ~]# cp /backup/dir/path/partial-backup/db/tbl1.ibd /var/lib/mysql/db
    [root@ ~]# cp /backup/dir/path/partial-backup/db/tbl1.exp /var/lib/mysql/db
    [root@ ~]# chown -R mysql:mysql /var/lib/mysql/db
  • Now we should tell MySQL to use the new tablespaces:
    mysql> ALTER TABLE db.tbl1 IMPORT TABLESPACE;
Advantages / Disadvantages
  • Advantages:
    • Although it's a complicated process, it allows us to back up and restore individual InnoDB tables just like MyISAM ones.
    • Useful when we have huge InnoDB tables and want to back up/restore only them.
  • Disadvantages:
    • The streaming feature is not available in the partial backup.
    • Restoring/importing individual tables or databases from a partial backup is not applicable unless the destination server is Percona Server.
    • In addition to restoring the files (copy back), three SQL statements have to be executed for each table (the table creation plus two ALTER statements) to get them ready for use, which means quite a tedious job - or a special script - if the partial backup contains many tables (see the sketch after this list).
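Such a script could look like the following rough sketch. It assumes the empty tables have already been created on the destination (e.g. from a mysqldump --no-data of the source), that the destination is Percona Server with innodb_expand_import enabled, and that the database name, paths and credentials are only examples:

#!/bin/bash
# Rough sketch: import all tables of database "db" from a partial backup (names/paths are examples).
BACKUP=/backup/dir/path/partial-backup/db
DATADIR=/var/lib/mysql/db

for ibd in "$BACKUP"/*.ibd; do
    tbl=$(basename "$ibd" .ibd)
    mysql -u root -p'root_password' -e "ALTER TABLE db.\`$tbl\` DISCARD TABLESPACE;"
    cp "$BACKUP/$tbl.ibd" "$BACKUP/$tbl.exp" "$DATADIR/"
    chown mysql:mysql "$DATADIR/$tbl.ibd" "$DATADIR/$tbl.exp"
    mysql -u root -p'root_password' -e "ALTER TABLE db.\`$tbl\` IMPORT TABLESPACE;"
done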
Important Hints
  • Although we don't remove the MySQL datadir before the restore process (unlike with full and incremental backups) and the MySQL instance keeps running, we could also restore the partial backup the same way (remove the datadir contents and copy/move the backup files into the datadir); but we should take into consideration that we would then have only the backed-up databases/tables, and all other databases/tables (which are not included in the partial backup) would be missing.
  • The "innodb_file_per_table" server option must be enabled (on both the source and the destination server).
  • The "innodb_expand_import" option must be enabled on the destination server; it is available only in Percona Server (which explains why we can restore a partial backup on Percona Server only).
  • Besides the "--databases" option, two alternative options can be used for the same purpose, but we must provide each table in the fully qualified naming format:
    • --include='db.tbl'.
    • --tables-file=/path/to/file.txt ==> in that file, we can add multiple tables, one per line, in the fully qualified naming format (see the example after this list).
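For example, a hypothetical tables file and the corresponding backup command could look like this (the table names are for illustration only):

[root@ ~]# cat /backup/dir/path/tables.txt
db1.tbl1
db1.tbl2
db2.tbl1
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --tables-file=/backup/dir/path/tables.txt /backup/dir/path/partial-backup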

Now you can use the Xtrabackup tool to perform full, incremental and partial database backups. You can decide which of these methods suits you best according to the advantages and disadvantages of each one, and by considering the important hints for each method you can perform your backups efficiently.

I hope you found this article useful and that it made you familiar with this wonderful tool.
