News

FromDual invites MySQL community to company meeting 2014 in Barcelona

FromDual.en - Wed, 2014-08-06 14:29

FromDual holds its annual company meeting this year in Barcelona, Spain.

We are pleased to invite everybody interested in MySQL technologies (MySQL, Galera Cluster, FromDual Tools, Percona Cluster, MariaDB, etc.) to join us on Thursday evening, September 11, at the HCC MONTBLANC, Via Laietana 61, to meet, exchange ideas and discuss MySQL-related topics.

The event starts at 18:00; we will meet in the hotel lobby. The planned schedule is:

  • How to Implement GTID Replication in MySQL 5.6 (25') and 5' Questions and Answers.
  • MySQL backup/restore for anonymized exports (25') and 5' Questions and Answers.
  • Break 15 min
  • Quick presentation (15') of YOUR project or company.
  • Quick presentation (15') of YOUR project or company.
  • Break 15 min
  • Quick presentation (15') of YOUR project or company.
  • Quick presentation (15') of YOUR project or company.

Please feel free to send us a suggestion for your presentation. Any technical or non-technical MySQL-related topic is welcome: for example, how you use MySQL in your company, special problems you have faced and solved (or not solved yet), research work you have done on MySQL products, business cases you solve with MySQL products, or evaluations and experiences you have made. To submit a proposal, please send us an e-mail.

Please also send us a short note if you plan to participate, or join us via MeetUp.

This allows us to arrange and organize the infrastructure with the hotel.

The event is free of charge for all participants.

We would be pleased if you can make it to the event,
Your FromDual Team

FromDual: Tools for MySQL and Galera - Backup - Monitoring - Operations

FromDual.en - Sun, 2014-07-27 10:46

FromDual tools provide valuable additional functionality which facilitates and optimizes the daily operation of your MySQL databases. A lot has changed in the FromDual tools since our last newsletter.


Numerous improvements and suggestions from our customers were added to the MySQL Environment (MyEnv). The most important changes were introduced to the MySQL Backup Manager (mysql_bman).


With the MySQL Ops Center we meet the wishes of our numerous customers who desire a graphical user interface for operating complex MySQL environments.
These users are often not very experienced in handling MySQL but nevertheless want to operate more complex MySQL installations such as Master/Slave or Master/Master replications.


In the MySQL Performance Monitor (mpm), numerous small bugs reported to us by our customers have been fixed.


Note: The usage and support of our tools is included in our MySQL Service Contracts, Business Hours (5x9) and Around the Clock (7x24).
If you would like to know more about our service prices, we will be pleased to send you an offer.



MyEnv v1.0.5

The MySQL Environment (MyEnv) is gaining more and more popularity in the MySQL ecosystem. MyEnv is optimized for MySQL and mimics TVD BasEnv, which is popular with larger Oracle database users.


With MyEnv you can easily consolidate several MySQL instances (mysqld) on one single machine. Thanks to MyEnv this complicated configuration is a piece of cake. Furthermore, MyEnv is increasingly used by customers to test their applications against different MySQL versions (5.5, 5.6 and 5.7) or different MySQL branches (Galera Cluster, MariaDB, Percona Server).


The most important improvements in MyEnv v1.0.5 are:

  • Old PHP functions were replaced to achieve better compatibility with PHP 5.4 and 5.5.
  • MyEnv overview (up) of installed MySQL instances was polished and numerous smaller bugs were fixed.
  • Extensions for active/passive fail-over clusters and Oracle Enterprise Monitor Agents for MySQL were integrated.
  • The user guidance of the MyEnv installer was made more user-friendly.
  • Problems of MyEnv with SuSE Linux Enterprise Server (SLES) were fixed.
  • The tools for MySQL Partitions were extended and improved.

You can find all improvements in detail in the Release Notes.


Here you can download MyEnv.



MySQL Backup Manager v1.0.5

The MySQL Backup Manager (mysql_bman) is currently getting the most interest from our customers. It significantly eases MySQL backups for all the different backup types.


At this point we would like to quote a MySQL user:

"MySQL Backup Manager is a very nice tool! Congratulations for FromDual! I made my own shell script for catalog and maintained backups by xtrabackup, but mysql_bman is the best! Xtrabackup + mysql_bman!!!"


In mysql_bman v1.0.5 the following improvements were integrated:

  • Security improvements (password is not exposed any more).
  • Every instance can be tagged with a name and uniquely identified.
  • The MySQL Backup Manager now considers the MySQL configuration file ~/.my.cnf (see the sketch after this list).
  • The compression of backups can be disabled to support de-duplicating drives.
  • The option --no-memory-table-check was introduced to allow inconsistent backups with MEMORY tables.
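
As a reminder of what such a file looks like, a minimal ~/.my.cnf as read by standard MySQL client tools is sketched below; the user name and socket path are placeholders, and whether mysql_bman reads exactly the [client] group is an assumption here:

[client]
user     = backup                            # hypothetical backup user
password = secret                            # kept out of the command line and process list
socket   = /var/run/mysqld/mysqld.sock

With the credentials stored in ~/.my.cnf (readable only by the backup user), the password no longer has to be passed as a command-line option.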

Download (included in MyEnv).



MySQL Ops Center v0.2

Our MySQL customers have requested a simple user interface to operate and administer many MySQL databases. This is the reason why FromDual has launched the MySQL Ops Center.
The Ops Center can centrally operate and control complex MySQL configurations like Master/Slave or Master/Master set-ups, and can monitor, start, stop and reconfigure the replication.
With the MySQL Ops Center you can also easily start and stop virtual IPs and move them to another host.


The most important features added to the first public preview release of the MySQL Ops Center v0.2 are:

  • Starting and stopping of MySQL databases on remote machines by a central management console.
  • Starting and stopping of the MySQL replication.
  • Starting and stopping of a virtual IP (VIP).
  • Fail-over of VIP from active master to slave (master/slave replication) or passive master (master/master replication).
  • Configuration of the master/slave replication.

The MySQL Ops Center can be downloaded here. Further information can be found at MySQL Ops Center.



MySQL Performance Monitor v0.9.3

The MySQL Performance Monitor (mpm) was optimized in many places. Further known bugs were fixed and the mpm agent was made ready for the newest Zabbix version v2.2:

  • Bugs related to sha/sha1 encryption were fixed.
  • A stopped database is better detected now.
  • DRBD information was improved.
  • New behaviour of zabbix_senders in Zabbix v2.2 is handled correctly now.
  • New measuring points were added (Galera Cluster) and wrong ones were fixed.

You can download the latest version of the MySQL Performance Monitor from here, and for more information about the manual installation just follow the steps in the installation guide. To check all changes and improvements of the MySQL Performance Monitor, check out the Release Notes.


FromDual Performance Monitor for MySQL 0.9.3 has been released

FromDual.en - Wed, 2014-07-09 12:25

FromDual is pleased to announce the release of the new version 0.9.3 of its popular Database Performance Monitor (mpm) for MySQL, Galera Cluster, MariaDB and Percona Server.

This release contains various minor bug fixes and improvements.

You can download mpm from here.

In the unlikely case that you find a bug in mpm, please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

New installation of mpm v0.9.3

Please follow our mpm installation guide.

Upgrade from 0.x to 0.9.3

# cd /download
# tar xf mysql_performance_monitor-0.9.3.tar.gz
# cd /opt
# tar xf /download/mysql_performance_monitor_agent-0.9.3.tar.gz
# rm -f mpm
# ln -s mysql_performance_monitor_agent-0.9.3 mpm

No other upgrade requirements are known.

Changes in mpm v0.9.3

mpm agent
  • Typos fixed.
  • Kill trap reports to the log file as well now.
mpm agent and MaaS
  • Example for timeshift feature added to configuration template.
MySQL module
  • DB down not detected (bug #27/#138).
InnoDB module
  • InnoDB Status module: SHA fix (bug #139).
Master module
  • Missing values in cache file fixed.
mpm templates for Zabbix
  • No changes.

Replication Troubleshooting - Classic VS GTID

Abdel-Mawla Gharieb - Fri, 2014-07-04 15:05

In previous posts, I talked about how to set up MySQL replication: Classic Replication (based on binary log information) and Transaction-based Replication (based on GTID). In this article I'll summarize how to troubleshoot MySQL replication for the most common issues we might face, with a simple comparison of how they can be solved in the two replication methods (Classic vs. GTID).

There are two main operations we might need to do in a replication setup:

  • Skip or ignore a statement that causes the replication to stop.
  • Re-initialize a slave when the replication is broken and cannot be started anymore.
Skip or Ignore statement

Basically, the slave should always be synchronized with its master and hold the same copy of the data. For several reasons, however, there might be an inconsistency between the two (an unsafe statement in SBR, a slave that is not read_only and was modified outside of the replication stream, etc.) which causes errors and stops the replication, e.g. if the master inserted a record which already existed on the slave (duplicate entry) or updated/deleted a row which did not exist on the slave.
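
As a preventive measure against slave-side writes, the slave can be made read-only so that ordinary (non-SUPER) clients cannot modify its data outside of replication. A minimal sketch using only standard server variables (the session shown is illustrative):

SQL> SET GLOBAL read_only = ON;                -- reject writes from non-SUPER users
SQL> SHOW GLOBAL VARIABLES LIKE 'read_only';   -- verify the setting

The same can be made permanent with read_only = 1 in the [mysqld] section of my.cnf.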

To solve this issue, we can either reverse what was done on the slave (e.g. delete the inserted rows), if the mistake is known, or skip executing those statements on the slave and continue the replication again (I'll focus on skipping a statement in this post, as it needs a different procedure in Classic and GTID replication). A small sketch of the reversing approach follows right below.
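
For illustration only: taking the duplicate-entry error on test.t1 shown in the sample messages below, reversing the slave-side change could look like this, assuming the primary key column is called id (a hypothetical column name, not taken from the original error):

SQL> DELETE FROM test.t1 WHERE id = 4;   -- remove the row that was inserted on the slave by mistake
SQL> START SLAVE;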

Sample error messages (from the SHOW SLAVE STATUS output):

Last_SQL_Error: Could not execute Write_rows event on table test.t1; Duplicate entry '4' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000304, end_log_pos 285
Last_SQL_Error: Could not execute Update_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 492
Last_SQL_Error: Could not execute Delete_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 688

How to solve that issue?
CLASSIC REPLICATION

Solving this problem is a straightforward process in the classic replication setup; all we need to do is issue the following SQL commands on the slave:

SQL> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
SQL> START SLAVE;
GTID REPLICATION

Solving this problem in GTID replication is not as straightforward as in classic replication, and the variable SQL_SLAVE_SKIP_COUNTER won't be useful here anymore.

To get this problem solved in GTID replication, we need to inject an empty transaction as follows:

  • Check which transaction is causing the problem:

      SQL> SHOW SLAVE STATUS\G
      . . .
      Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-7
      Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
      b9b4712a-df64-11e3-b391-60672090eb04:1-6
      Auto_Position: 1

    Retrieved_Gtid_Set lists the GTIDs retrieved from the master.

    Executed_Gtid_Set lists the GTIDs executed on the slave.

    According to the above output, the slave retrieved GTIDs up to transaction 7 (b9b4712a-df64-11e3-b391-60672090eb04:1-7) but executed only up to transaction 6 (b9b4712a-df64-11e3-b391-60672090eb04:1-6), so the problem is in transaction number 7.

  • Inject an empty transaction (a verification sketch follows after this list):

      SQL> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
      SQL> BEGIN; COMMIT;
      SQL> SET GTID_NEXT='AUTOMATIC';
      SQL> START SLAVE;

    BE CAUTIOUS: The first part of Executed_Gtid_Set (4f6d62ed-df65-11e3-b395-60672090eb04:1) lists the locally executed GTIDs (not received from the master), while the second part (b9b4712a-df64-11e3-b391-60672090eb04:1-6) lists the executed GTIDs that were retrieved from the master (you can check the master's UUID either by looking at the UUID value in "Retrieved_Gtid_Set", which is the master's UUID, or by issuing SHOW GLOBAL VARIABLES LIKE 'server_uuid'; on the master server). So we have to make sure that we use the master's UUID when injecting the empty transaction; otherwise the problem will remain and the slave won't start.
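
After START SLAVE, a quick way to check that the problematic transaction was indeed skipped is to look at SHOW SLAVE STATUS again; the output below is an illustrative expectation, abbreviated to the relevant fields:

SQL> SHOW SLAVE STATUS\G
. . .
Slave_SQL_Running: Yes
Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
b9b4712a-df64-11e3-b391-60672090eb04:1-7

The range belonging to the master's UUID should now include transaction 7, and replication should continue from transaction 8 onwards.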

Note:

After starting the slave successfully, in either classic or GTID replication, we might need to use a combination of the Percona tools pt-table-checksum and pt-table-sync to fix the remaining data inconsistency; a hedged invocation sketch follows below.
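
A minimal sketch of how these two tools are typically combined; the host, user and password values are placeholders, and the exact options should be verified against the Percona Toolkit documentation for your version:

shell> pt-table-checksum --replicate=percona.checksums h=master_host,u=checksum_user,p=secret
shell> pt-table-sync --replicate percona.checksums h=master_host,u=checksum_user,p=secret --print
shell> pt-table-sync --replicate percona.checksums h=master_host,u=checksum_user,p=secret --execute

pt-table-checksum writes per-chunk checksums through the replication stream, and pt-table-sync can then print (or execute) the statements needed to bring the slave back in sync.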

Re-initialize / rebuild a slave

For many reasons, we might end up having to rebuild a slave to get the replication working again, e.g. if we stopped a slave for a while and the master purged a binary log file that this slave still needs, or if there are so many duplicate-entry errors that pt-table-checksum and pt-table-sync cannot be used. In that case we have to re-initialize the slave from scratch by taking a fresh backup from the master server and restoring it on the slave. Let's check how we can do that in both replication methods.

How to solve that issue?
CLASSIC REPLICATION
Sample error message:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

Fix steps:

  • Back up the master server with the following command:

      shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob > /path/to/backupdir/full_backup-$TIMESTAMP.sql

  • Restore the backup file on the slave:

      shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql

  • Get the binary log information from the time the backup was taken:

      shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql | grep "CHANGE MASTER TO"
      CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;

  • Issue the "CHANGE MASTER TO" command using the new information:

      SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;

  • Start the slave (a quick health check follows after this list):

      SQL> START SLAVE;
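
Once the slave is running again, a simple health check is to look at the replication threads and the lag; the output below is illustrative and abbreviated:

SQL> SHOW SLAVE STATUS\G
. . .
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0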

NOTE:

The Xtrabackup tool could be used instead of mysqldump, especially if the database size is big. Check out this link for more information.

GTID REPLICATION
Sample error message:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'

Fix steps:

  • Back up the master server with the following command:

      shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob > /path/to/backupdir/full_backup-$TIMESTAMP.sql

  • Check the GTID value from the time the backup was taken:

      shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql | grep PURGED
      SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';

  • Reset the GTID_EXECUTED and GTID_PURGED values on the slave:

      SQL> RESET MASTER;

  • Restore the backup file on the slave:

      shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql

  • Make sure that the values of GTID_EXECUTED and GTID_PURGED are the correct ones:

      SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_executed';
      +---------------+------------------------------------------+
      | Variable_name | Value                                    |
      +---------------+------------------------------------------+
      | gtid_executed | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
      +---------------+------------------------------------------+
      1 row in set (0.00 sec)

      SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_purged';
      +---------------+------------------------------------------+
      | Variable_name | Value                                    |
      +---------------+------------------------------------------+
      | gtid_purged   | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
      +---------------+------------------------------------------+
      1 row in set (0.01 sec)

  • Start the slave (see the sketch after this list if the replication connection has to be reconfigured as well):

      SQL> START SLAVE;
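
If the slave's replication configuration was reset as well (and not only its data), the GTID-based connection would additionally have to be (re)configured before starting the slave. A hedged sketch, where master_host, repl and secret are placeholders:

SQL> CHANGE MASTER TO MASTER_HOST='master_host', MASTER_PORT=3306,
     MASTER_USER='repl', MASTER_PASSWORD='secret',
     MASTER_AUTO_POSITION=1;
SQL> START SLAVE;

MASTER_AUTO_POSITION=1 tells the slave to request exactly the GTIDs it has not executed yet, so no binary log file name or position is needed.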

NOTES:

  • If we don't reset the GTID_EXECUTED and GTID_PURGED values on the slave before restoring the backup file, the following error will appear:

      shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
      ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.

    The above error indicates that the statement at the beginning of the backup file - "SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';" - failed because GTID_PURGED cannot be set unless GTID_EXECUTED is empty. Since GTID_EXECUTED is a read-only variable, the only way to empty its value is to issue RESET MASTER on the slave server before restoring the backup file.

  • The Xtrabackup tool could be used instead of mysqldump as well to solve this problem, without the need to reset the GTID_EXECUTED and GTID_PURGED values. Check out this link for more information.
Conclusion

While GTID provides many benefits over classic replication, it has different troubleshooting and fix strategies, which must be understood before deploying GTID in production systems.


Replication channel fail-over with Galera Cluster for MySQL

Shinguz - Thu, 2014-06-19 07:05

Sometimes it can be desirable to replicate from a Galera Cluster to a single MySQL slave or to another Galera Cluster. Reasons for this measure could be:

  • An unstable network between two Galera Cluster locations.
  • A separation of a reporting slave and the Galera Cluster so that heavy reports on the slave do not affect the Galera Cluster performance.
  • Mixing different sources in a slave or a Galera Cluster (fan-in replication).

This article is based on earlier research work (see MySQL Cluster - Cluster circular replication with 2 replication channels) and uses the old MySQL replication style (without MySQL GTID).

Preconditions
  • Enable the binary logs on 2 nodes of a Galera Cluster (we call them channel masters) with the log_bin variable.
  • Set log_slave_updates = 1 on ALL Galera nodes.
  • It is recommended to have small binary logs and relay logs in such a situation to reduce the overhead of scanning the files (max_binlog_size = 100M). A configuration sketch follows after this list.
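
A minimal my.cnf sketch for the two channel masters, based only on the variables named above; the server_id value and the binary log base name are illustrative choices, not requirements of this setup:

[mysqld]
server_id         = 11        # must be unique on every server taking part in replication
log_bin           = bin-log   # enable the binary log (channel masters only)
log_slave_updates = 1         # required on ALL Galera nodes
max_binlog_size   = 100M      # keep the binary logs small
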
Scenarios

   

Let us assume that for some reason the current channel master of channel 1 breaks. As a consequence, the slave of channel 1 does not receive any replication events any more. But we have to keep the replication stream up and running, so we have to switch the replication channel over to channel master 2.

Switching replication channel

First, for safety reasons, we stop the slave of replication channel 1:

mysql> STOP SLAVE;

Then we have to find the actual relay log on the slave:

mysql> pager grep Relay_Log_File
mysql> SHOW SLAVE STATUS\G
mysql> nopager

Relay_Log_File: slave-relay-bin.000019

Next we have to find the last applied transaction on the slave:

mysql> SHOW RELAYLOG EVENTS IN 'slave-relay-bin.000019';
| slave-relay-bin.000019 | 3386717 | Query       |      5201 |    53745015 | BEGIN                          |
| slave-relay-bin.000019 | 3386794 | Table_map   |      5201 |    53745067 | table_id: 72 (test.test)       |
| slave-relay-bin.000019 | 3386846 | Write_rows  |      5201 |    53745142 | table_id: 72 flags: STMT_END_F |
| slave-relay-bin.000019 | 3386921 | Xid         |      5201 |    53745173 | COMMIT /* xid=1457451 */       |
+------------------------+---------+-------------+-----------+-------------+--------------------------------+

This is transaction 1457451 which is the same on all Galera nodes.

On the new channel master of channel 2 we now have to find the matching binary log. This is best done by matching times between the relay log on the slave and the binary logs of the master of channel 2.

On slave:

shell> ll *relay-bin*
-rw-rw---- 1 mysql mysql     336 Mai 22 20:32 slave-relay-bin.000018
-rw-rw---- 1 mysql mysql 3387029 Mai 22 20:37 slave-relay-bin.000019

On master of channel 2:

shell> ll *bin-log*
-rw-rw---- 1 mysql mysql  2518737 Mai 22 19:57 bin-log.000072
-rw-rw---- 1 mysql mysql      143 Mai 22 19:57 bin-log.000073
-rw-rw---- 1 mysql mysql      165 Mai 22 20:01 bin-log.000074
-rw-rw---- 1 mysql mysql 62953648 Mai 22 20:40 bin-log.000075

It looks like binary log 75 of master 2 matches the relay log of our slave.

Now we have to find the same transaction on the master of channel 2:

mysql> pager grep -B 6 1457451
mysql> SHOW BINLOG EVENTS IN 'bin-log.000075';
mysql> nopager
| bin-log.000075 | 53744832 | Write_rows  |      5201 |    53744907 | table_id: 72 flags: STMT_END_F        |
| bin-log.000075 | 53744907 | Xid         |      5201 |    53744938 | COMMIT /* xid=1457450 */              |
| bin-log.000075 | 53744938 | Query       |      5201 |    53745015 | BEGIN                                 |
| bin-log.000075 | 53745015 | Table_map   |      5201 |    53745067 | table_id: 72 (test.test)              |
| bin-log.000075 | 53745067 | Write_rows  |      5201 |    53745142 | table_id: 72 flags: STMT_END_F        |
| bin-log.000075 | 53745142 | Xid         |      5201 |    53745173 | COMMIT /* xid=1457451 */              |
+----------------+----------+-------------+-----------+-------------+---------------------------------------+

We have successfully found the transaction. The end_log_pos of this transaction, 53745173, is the position where the next transaction starts, i.e. the position from which we should continue replicating.

As a last step we have to point the slave to the master of replication channel 2:

mysql> CHANGE MASTER TO master_host='master2', master_port=3306,
       master_log_file='bin-log.000075', master_log_pos=53745173;
mysql> START SLAVE;

After a while the slave has caught up and is ready for the next fail-over or fail-back.

Discussion

During our experiments we found that an IST (Incremental State Transfer) of a channel master does not lead to a gap or a loss of events in the replication stream. So restarting a channel master does not require a channel fail-over, as long as an IST can be used to resync the channel master with the Galera Cluster.

The increase of wsrep_cluster_conf_id is NOT an indication that a channel fail-over is required.
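
For completeness, this is how the counter can be watched on any Galera node (just an illustration; the value increases with every cluster reconfiguration and is meaningless in isolation):

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_conf_id';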

An SST (State Snapshot Transfer) resets the binary logs, so after an SST a slave will not replicate from that channel master any more. Apart from that, this method should be safe to use. If you find any situation where you experience trouble with channel fail-over, please let us know.

MySQL Environment MyEnv 1.0.5 has been released

FromDual.en - Fri, 2014-06-13 18:29

FromDual has the pleasure to announce the release of the new version 1.0.5 of its popular MySQL, MariaDB and Percona Server multi-instance environment MyEnv.

The majority of improvements happened on the MySQL Backup Manager (mysql_bman) utility.

You can download MyEnv from here.

In the inconceivable case that you find a bug in MyEnv please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.0.x to 1.0.5

# cd ${HOME}/product
# tar xf /download/myenv-1.0.5.tar.gz
# rm -f myenv
# ln -s myenv-1.0.5 myenv

Upgrade from 1.0.2 or older to 1.0.3 or newer

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/

Exchange the MyEnv section in ~/.bash_profile (make a backup of this file first) with the following new one:

# BEGIN MyEnv
# Written by the MyEnv installMyEnv.php script.
. /etc/myenv/MYENV_BASE
MYENV_PWD=`pwd`
cd $MYENV_BASE/bin
. myenv.profile
cd $MYENV_BASE; $MYENV_BASE/bin/showMyEnvStatus.php; cd - > /dev/null
cd $MYENV_PWD
# END MyEnv

Changes in MyEnv 1.0.5

MyEnv
  • Schema output in up was still ugly.
  • Instance output is split correctly similar to up/down display.
  • Instance list is now shorter when short instance names are used.
  • ignore-passive option added to MyEnv to ignore passive databases in an active/passive fail-over cluster. Detection is based on the existence of the datadir.
  • Upgrade instructions have been improved and denormalized.
  • Only existing OEM agents are displayed; the criterion is that the directory listed in oratab must exist.
  • Up instances are not reported when the mysqladmin command is missing (Galera binary tarballs); previously the reason for this was not visible. The reason is now displayed as an error message.
MyEnv Installer
  • Lists each basedir candidate on a separate line when adding a new instance. More convenient to read if many basedirs are available.
MyEnv Utilities
  • block_galera_node.sh: Insert instead of Append is used for firewall rules. Only load-balancer ports are blocked, not everything else.
  • block_galera_node.sh made more flexible.
MySQL Backup Manager
  • Cleanup job errors with missing target. Fixed for MGB.
  • Password on command line is not exposed anymore to log file.
  • Instance name optionally added to binary-log backup file names.
  • Binary logs are not cleaned-up because they are not copied with bck_ prefix (Bug #143).
  • Config file example in --help output done more nicely.
  • More strict option checking implemented.
  • All schemas with non-transactional tables are shown instead of just the first one.
  • Help typo fixed and example improved.
  • --ignore-memory-table-check implemented to avoid error exit with MEMORY tables.
  • Preparation work for blocking MyISAM backup done.


GTID In Action

Abdel-Mawla Gharieb - Thu, 2014-06-12 14:09

In a previous post I talked about How to Setup MySQL Replication using the classic method (based on binary log information). In this article I'll go through the implementation of transaction-based replication using GTID in different scenarios.

The following topics will be covered in this blog: the concept of the GTID protocol, setting up GTID replication on fresh installations, adding a new slave to a running GTID replication, migrating from classic to GTID replication, and the benefits of GTID.

What is the concept of GTID protocol?

GTID is a Global Transaction IDentifier introduced in MySQL 5.6.5. It is not only unique on the server where it originated but also unique among all servers in a replication setup.
GTIDs also guarantee consistency because once a transaction is committed on a server, any other transaction having the same GTID will be ignored, i.e. a transaction committed on the master will be applied only once on the slaves.

GTID consists of two parts separated by a colon: {source_id:transaction_id}.

where:

  • source_id: Normally the UUID of the server on which the transaction originates, e.g. "b9b4712a-df64-11e3-b391-60672090eb04".
  • transaction_id: A sequence number determining the order of the committed transaction.

The following is the GTID for the third transaction on a server having the uuid "b9b4712a-df64-11e3-b391-60672090eb04":
b9b4712a-df64-11e3-b391-60672090eb04:3
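
If you want to see which source_id your own server would use, the UUID can be read directly from the server (just an illustration; the value will differ on every installation):

mysql> SHOW GLOBAL VARIABLES LIKE 'server_uuid';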

With this new protocol comes a set of new related variables; the following are the most important ones (IMHO), and a quick way to inspect them on a running server is shown after this list:

  • gtid-mode: ON|OFF to enable or disable GTID, this is not a Boolean variable (0 and 1 are not acceptable).
  • enforce-gtid-consistency: prevents executing statements that cannot be logged in a transactionally safe way, such as:
    • CREATE TABLE .. SELECT.
    • CREATE TEMPORARY TABLE (inside a transaction).
    • Statements that update nontransactional tables inside a transaction.
  • gtid_purged: The set of transactions that have been purged from the binary logs.
  • gtid_executed: The set of transactions which have already been executed on that server.
  • gtid_next: The GTID which will be used for the next transaction.
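
A minimal sketch for inspecting these settings and GTID sets on a running MySQL 5.6 server (the GTID values shown elsewhere in this article are examples):

mysql> SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
mysql> SHOW GLOBAL VARIABLES LIKE 'enforce_gtid_consistency';
mysql> SELECT @@GLOBAL.gtid_executed, @@GLOBAL.gtid_purged\G
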
GTID Replication Implementation

Fresh Installations

Fresh installation means that there is no data on the master yet; in other words, we are building a replication setup from scratch.

The implementation process is divided into two parts:

MASTER'S SIDE CONFIGURATION:
  • Add the following variables to the MySQL configuration file (my.cnf):

    [mysqld]
    server-id=1
    log-bin=mysql-bin
    binlog_format=ROW
    gtid-mode=on
    enforce-gtid-consistency
    log-slave-updates

  • Restart MySQL so that configuration changes take place:

    shell> service mysql restart

  • Create a MySQL user to be used by the slave:

    SQL> GRANT REPLICATION SLAVE ON *.* TO 'slave_user_name'@'slave_ip' IDENTIFIED BY 's3cret';

SLAVE'S SIDE CONFIGURATION:
  • Add the following variables to the my.cnf file:

    [mysqld]
    server-id=2
    log-bin=mysql-bin
    binlog_format=ROW
    relay_log=relay-log
    skip-slave-start
    gtid-mode=on
    enforce-gtid-consistency
    log-slave-updates

  • Restart MySQL so that configuration changes take place:

    shell> service mysql restart

  • Set the master information on the slave:

    Unlike the classic method, we don't need the master's binary log information; all we need is to specify MASTER_AUTO_POSITION=1 instead:

    SQL> CHANGE MASTER TO
      -> MASTER_HOST='master_ip',
      -> MASTER_PORT=3306,
      -> MASTER_USER='slave_user_name',
      -> MASTER_PASSWORD='s3cret',
      -> MASTER_AUTO_POSITION=1;

  • Start replication:

    SQL> START SLAVE;

  • Check the replication status:

    SQL> show slave status\G
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 127.0.0.1
                  Master_User: gtid_repl
                  Master_Port: 3320
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 191
               Relay_Log_File: relay-log.000004
                Relay_Log_Pos: 401
        Relay_Master_Log_File: mysql-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
            . . . .
           Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-2
            Executed_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-2
                Auto_Position: 1
Adding New Slave

It's a very simple process to add a new slave to a running replication (or setup replication with existing data) where GTID is being used:

  • Backup the master server:

    shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction \
           --flush-logs --triggers --routines --events --hex-blob \
           > /path/to/backupdir/full_backup-$TIMESTAMP.sql

  • On the new slave, use the same MySQL configuration as described above (except the server-id, which must be unique) and restart it.
  • Restore the backup file taken from the master.
  • Use CHANGE MASTER TO with MASTER_AUTO_POSITION=1.
  • Start the slave.

Is it really as simple as that?! How did the slave know the backup position? What if some transactions were executed on the master after that backup was taken?

Actually, when GTID is enabled, mysqldump includes the GTID set executed up to the time the backup was taken:

--
-- GTID state at the beginning of the backup
--
SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-7';

After restoring the backup, the variable GTID_EXECUTED will be equal to GTID_PURGED (the value above). When the slave starts, it first sends the set of GTIDs it has executed (GTID_EXECUTED) to the master, so that the master can send back every missing transaction which has not yet been applied on the slave.
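
A quick way to verify this on the new slave right after the restore (a sketch; the GTID set is the example value from the dump above):

mysql> SELECT @@GLOBAL.gtid_purged;
mysql> SELECT @@GLOBAL.gtid_executed;   -- should be equal to gtid_purged directly after the restore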


NOTE:

If the database is big (100GB or so) then using the Xtrabackup tool instead of mysqldump is a very good idea. Check out this link for more information on how to use the Xtrabackup tool to restore a slave server having GTID enabled.

Migration from classic replication to GTID replication

How to perform the migration?

To migrate an already running replication using the classical method to GTID replication, the following steps should be done:

  • Ensure that all servers (master and slaves) are at the same point by setting the master server to read only (SET GLOBAL read_only=ON;) and waiting until all slaves have caught up with the master's data.
  • Shut down MySQL on all servers and add the GTID variables to the configuration files (see the configuration sketch after this list).
  • Besides the GTID variables, add read-only to the master's configuration and skip-slave-start to the slaves' configurations.
  • Start the MySQL service on all servers.
  • Issue the CHANGE MASTER command with MASTER_AUTO_POSITION=1 on all slaves and then start them.
  • Make the master writable again with SET GLOBAL read_only=OFF; (don't forget to remove or comment it out in the master's my.cnf file as well).
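
A minimal sketch of the my.cnf additions involved in the migration (values are examples; read-only belongs on the master only and skip-slave-start on the slaves only):

[mysqld]
# GTID variables on ALL servers:
gtid-mode                = on
enforce-gtid-consistency
log-bin                  = mysql-bin
log-slave-updates
# temporarily, for the duration of the migration:
read-only                = 1    # master only
skip-slave-start                # slaves only
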
Is online migration from classic to GTID replication available?

At the time of writing this article, an online migration is not possible. As you can see from the steps above, we have to shut down ALL servers at the same time, and that is because of two reasons:

  • GTID can NOT be enabled online because GTID_MODE is a read-only variable (making this variable dynamic is already in Oracle's plans).
  • Replication can NOT be established between two or more servers having different values for GTID_MODE, i.e. either GTID is enabled on ALL servers or disabled on ALL servers.
Workaround?

There's a feature request (by the MySQL developers team at Booking.com) to have an extra GTID mode (ANONYMOUS_IN-GTID_OUT) which allows a slave to receive anonymous transactions (transactions from a master having GTID_MODE = OFF, which do not have GTIDs) and to assign GTIDs to those transactions. In this case, such a slave could be used as an intermediate server between a master having GTID disabled and slaves having GTID enabled (it will be a slave of the master and a master of the other slaves).

The online migration steps would be:

  • Restart a slave (let's name it slaveA) using GTID_MODE = ANONYMOUS_IN-GTID_OUT.
  • Do a rolling restart of the other slaves to use the normal GTID_MODE=ON, pointing them to slaveA as their new master.
  • Point the application to write to slaveA instead of the old master.
  • Restart the old master to use GTID_MODE=ON with slaveA as its master.

Note: This is not yet available in the Oracle binaries.

More information on this can be found here.

GTID Benefits
  • Simplifies the setup of MySQL replication as the master's binary log information (file name and position) is not needed anymore.
  • Consistency is guaranteed between master and slave as the committed transaction on the master will be applied only once on the slave.
  • Simple to determine whether masters and slaves are consistent or not (see the sketch after this list).
  • The fail-over process is much easier. When the master fails to operate, there is no need to calculate a slave's binary log coordinates before promoting it to be the new master. MASTER_AUTO_POSITION=1 will do the job, as the same transaction has the same GTID on all servers in the replication setup.
  • Automatic fail-over scripts are now much easier to implement.
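
For example, a simple consistency check compares the executed GTID sets (a sketch; the GTID set literal is the example value used throughout this article and would in practice be the master's gtid_executed value):

mysql> -- run on the slave: 1 means the slave has applied everything the master has executed
mysql> SELECT GTID_SUBSET('b9b4712a-df64-11e3-b391-60672090eb04:1-7',
    ->                    @@GLOBAL.gtid_executed) AS slave_caught_up;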

To know how to troubleshoot GTID replication, check out the Replication Troubleshooting - Classic VS GTID blog.



Backup Manager for MySQL, MariaDB and Percona Server (mysql_bman)

Shinguz - Tue, 2014-05-06 17:28
About

The MySQL Backup Manager (mysql_bman) is a wrapper script around the standard MySQL backup tools. The problem with the MySQL backup tools is that they have many options, are thus over-complicated, and errors are easily made.

The intention of mysql_bman is to make MySQL backups easier and technically correct. This means that by default it should not allow inconsistent backups and should complain if some functions or parameters are used in the wrong way, in order to guarantee proper backups.

In addition it adds some nice features which are missing in the standard MySQL backup tools or which are otherwise only known from enterprise backup solutions.

Where to download mysql_bman

The Backup Manager for MySQL (mysql_bman) can be downloaded from our website.

What mysql_bman users say about it

Mathias Brem DBA@DBAOnline on LinkedIn:

Ow! Nice!
mysql backup manager is a very nice tool! Congratulations for FromDual! I made a shell script for catalog and maintained backups by xtrabackup, but mysql_bman is the best!

Xtrabackup + mysql_bman!!!!

Where can mysql_bman help you

The intention of mysql_bman is to assist you in bigger MySQL set-ups where you have to follow some backup policies and where you need a serious backup concept.

mysql_bman example

To give you an impression of the power of the MySQL Backup Manager let us have a look at a little example:

shell> mysql_bman --target=bman:secret@192.168.1.42 --type=full --mode=logical --policy=daily \
         --no-compress --backupdir=/mnt/slowdisk \
         --archive --archivedir=/mnt/nfsmount

With this backup method we do a logical full backup (mysqldump is triggered in the background). The backup is stored in the location for backups with the daily policy and is NOT compressed, to speed up the backup by saving CPU power AND because the backup device is a de-duplicating drive. Then the backup is archived to an NFS mount.

Backup types

To achieve this we have defined different backup types:

Type        Description
full        full logical backup (mysqldump) of all schemas
binlog      binary-log backup
config      configuration file backup (my.cnf)
structure   structure backup
cleanup     clean-up of backup pieces older than n days
schema      backup of one or more schemas
privilege   privilege dump (SHOW GRANTS FOR)

A backup type is specified with the option --type=<backup_type>.

Backup modes

A backup can either be logical or physical. A logical backup is typically what you do with mysqldump. A physical backup is typically a physical file copy without looking into the data. That is what for example xtrabackup does.

The backup mode is specified with the option --mode=<backup_mode>. The following backup modes are available:

Mode        Description
logical     do a logical backup (mysqldump)
physical    do a physical backup (mysqlbackup/innobackup/xtrabackup)

Backup policies

Further we have introduced different backup policies. Policies are there to distinguish how different backups should be treated.

The following backup policies exist:

Policy      Description
daily       directory to store daily backups
weekly      directory to store weekly backups
monthly     directory to store monthly backups
quarterly   directory to store quarterly backups
yearly      directory to store yearly backups

For example you could plan to do a daily MySQL backup including the binary logs with a retention policy of 7 days. Once a week you want to do a weekly backup consisting of a full backup, a configuration backup and a structure dump, but this weekly backup you want to keep for 6 months. And for legal reasons you want to do a yearly backup with a retention policy of 10 years.

A backup policy is specified with the --policy=<backup_policy> option. This leads us to the retention time:

Options

The retention time which should be applied to a specific backup policy is specified with the option --retention=<period_in_days>. The retention option means that a backup is not deleted before this number of days has passed when you run a clean-up job with mysql_bman.

Let us do an example:

shell> mysql_bman --type=cleanup --policy=daily --retention=30

This means that all backups in the daily policy should be deleted when they are older than 30 days.

Target

With the --target option you specify the connect string to the database to back up. This database can be located either locally (all backup types can be used) or remotely (only client/server backup types can be used).

A target looks as follows: user/password@host:port (similar to a URI specification); password and port can be omitted.
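
For illustration, some possible --target strings following that format (host names and credentials are made-up examples):

--target=bman/secret@db01.example.com:3306   # user, password, host and port
--target=bman@db01.example.com               # password and port omitted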

Backup location, archiving, compressing and clean-up

The --backupdir option controls the location of the backup files. The policy folders are automatically created under this --backupdir location.
If you have a second layer of backup storage (e.g. tapes, slow backup drives, deduplicated drives or NFS drives) you can use the --archive option to copy your backup files to this second-layer storage, which is specified with the --archivedir option. For restore performance reasons it is recommended to always keep one or two generations of backups on your fast local drive. If you want to remove the backed-up files from the --backupdir destination after the archive job, use the --cleanup option.
If you want to skip compressing backups, either to save time or because your backup location uses deduplicated drives, you can use the --no-compress option.

Per schema backup

Especially for hosting companies a full database backup is typically not the right backup strategy because restoring one specific customer (= schema) is very complicated. For this case we have the --per-schema option. mysql_bman will then back up the whole database schema by schema. Keep in mind: this breaks consistency among schemas!

Sometimes you want to do a schema backup only for some specific schemas; for this you can use the --schema option. This option allows you to specify which schemas to back up or not to back up. --schema=+a,+b means: back up schemas a and b. --schema=-a,-b means: back up all schemas except a and b.
The second variant is less error prone because you cannot forget to back up a newly added database.

Instance name

MySQL does not know the concept of naming an instance (mysqld). But for bigger environments it can be useful to uniquely name each instance. For this purpose we have introduced the option --instance-name=<give_it_a_name>. This instance name should be unique within your whole company, but we do not enforce this at the moment. The instance name is used to name backup files and later to identify the backup history of an instance in our backup catalog and to allow us to track the files for a restore.

mysql_bman configuration file

Specifying everything on the command line is cumbersome. Thus mysql_bman also accepts a configuration file, specified with the --config=<config_file> option.
A mysql_bman configuration file looks for example as follows:

policy      = daily
target      = root/secret@127.0.0.1:3306
type        = schema
schema      = -mysql
policy      = daily
archive     = on
archivedir  = /mnt/tape
per-schema  = on
no-compress = on
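
With such a file in place the daily call shrinks to something like the following (the file path is an assumption for this example):

shell> mysql_bman --config=/etc/mysql/mysql_bman.conf
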
Simulate what happens

For the sissies among us (like, for example, me) we have the --simulate option. This option simulates nearly all steps as far as possible without really executing anything. It is useful either for testing some features or for debugging purposes.

Logging

If you want to track your backup history you can specify with the --log option where your mysql_bman log file should be located.

Using Catalog

It is very useful to be able to store your backup metadata in a database so you can check it later and find out the backup criteria (type, mode, instance-name, etc.) of specific backup runs. This can be achieved by using the catalog feature.

To activate this feature you have to create a database for the catalog (the default name is bman_catalog) and then create its tables by using the option --create in a special mysql_bman command (check the examples below).
Finally, to store your backup metadata in the catalog, all you have to do is add the option --catalog=catalog_connection_string to the normal mysql_bman command.
Check the examples below for using the catalog with mysql_bman.

More help

You can get a little more help with the following command:

shell> mysql_bman --help
Examples

Do a full (logical = default) backup and store it in the daily policy folder:

shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=full --policy=daily

Do a full physical backup and store it in the weekly policy folder:

shell> mysql_bman --target=root/secret@127.0.0.1 --type=full --mode=physical --policy=weekly

Do a binary-log backup omitting the password in the target and store it in the daily policy folder:

shell> mysql_bman --target=bman@192.168.1.42:3307 --type=binlog --policy=daily

Do a MySQL configuration backup and store it in the weekly policy folder:

shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=config --policy=weekly

Do a structure backup and store it in the monthly policy folder and name the file with the instance name:

shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=structure --policy=monthly --instance-name=prod-db

Do a weekly structure backup and archive it to another backup location:

shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=structure --policy=weekly --archive --archivedir=/mnt/tape

Do a schema backup omitting the mysql schema:

shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=schema --schema=-mysql --policy=daily --archive --archivedir=/mnt/tape

Do a schema backup of only foodmart and world and write each to its own file. Omit compressing these backups because they are located, for example, on deduplicated drives:

shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=schema --schema=+foodmart,+world --per-schema --policy=daily --no-compress

Creation of a backup catalog (assuming you have already created a catalog database with the default name "bman_catalog"):

shell> mysql_bman --catalog=root/secret@127.0.0.1:3306 --create

Backups against catalog:

shell> mysql_bman --target=root/secret@127.0.0.1:3306 --catalog=root/secret@127.0.0.1:3306 --instance-name=test --type=full --policy=daily

Privilege backup:

shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=privilege --policy=daily --mode=logical
