Online DDL vs pt-online-schema-change
One of the most expensive database operations is executing Data Definition Language (DDL, e.g. CREATE, DROP, ALTER, etc.) statements, especially ALTER statements, because MySQL blocks the entire table for both reads and writes while modifying it.
For huge tables, such a change can take hours to complete, which affects the application. Good planning is therefore required to avoid performing these changes during peak times. For those who run 24/7 services or have only a limited maintenance window, DDL on huge tables is a real nightmare.
Percona developed a very good tool called pt-online-schema-change (version 2.2.6 at the time of writing this article) to perform such operations online, without blocking the application: read and write access to the table being changed remains available.
MySQL also made some enhancements for DDL statements and introduced the Online DDL feature in MySQL 5.6.
In this article, I will give an overview of both approaches (Online DDL & pt-online-schema-change), along with an example of each, and discuss which one should be used in which scenario.
pt-online-schema-change Overview
This tool was developed by Percona to alter tables without locking them during the ALTER operation.
Simply put, the tool creates a new empty table with the same structure as the original table plus the required change, copies the data from the original table to the new one in small chunks, drops the original table and then renames the new table to the original name. During the copy process, all new changes to the original table are applied to the new one as well, because triggers created on the original table ensure that every new change is replayed on the new table.
For more information about the pt-online-schema-change tool, check out the manual documentation.
ExampleAltering a table called "test.test1" by adding an index (name_idx) on column "name":
[root@gcservera ~]# pt-online-schema-change --execute --alter "add index name_idx (name)" D=test,t=test1,h=localhost Operation, tries, wait: copy_rows, 10, 0.25 create_triggers, 10, 1 drop_triggers, 10, 1 swap_tables, 10, 1 update_foreign_keys, 10, 1 Altering `test`.`test1`... Creating new table... Created new table test._test1_new OK. Altering new table... Altered `test`.`_test1_new` OK. 2014-02-09T15:33:27 Creating triggers... 2014-02-09T15:33:27 Created triggers OK. 2014-02-09T15:33:27 Copying approximately 1 rows... 2014-02-09T15:33:27 Copied rows OK. 2014-02-09T15:33:27 Swapping tables... 2014-02-09T15:33:27 Swapped original and new tables OK. 2014-02-09T15:33:27 Dropping old table... 2014-02-09T15:33:27 Dropped old table `test`.`_test1_old` OK. 2014-02-09T15:33:27 Dropping triggers... 2014-02-09T15:33:27 Dropped triggers OK. Successfully altered `test`.`test1`.Note:
The output is perfectly describing all steps that the tool is doing in the background.
Limitations of pt-online-schema-change
- A PRIMARY KEY or a unique index must be defined on the table before using this tool, because it is required for the DELETE trigger.
- Not supported if the table already has triggers defined.
- The tool becomes a little more complicated if the table has foreign key constraints; the additional option --alter-foreign-keys-method must then be used.
- Also because of the foreign keys, object names might be changed (index names, etc.).
- In a Galera Cluster environment, altering MyISAM tables is not supported, and the system variable "wsrep_OSU_method" must be set to "TOI" (total order isolation), as shown in the example below.
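For instance, a minimal sketch of checking and setting that variable on one of the cluster nodes (requires a privileged session):

mysql> SHOW GLOBAL VARIABLES LIKE 'wsrep_OSU_method';
mysql> SET GLOBAL wsrep_OSU_method = 'TOI';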
Online DDL Overview
In MySQL 5.5, and in MySQL 5.1 with the InnoDB plugin, a feature known as Fast Index Creation was introduced to avoid copying the table's data when adding or removing secondary indexes, using optimized CREATE INDEX and DROP INDEX operations.
In MySQL 5.6, the Online DDL method was introduced to allow more kinds of changes to be made while read and write access to the table being changed remains available.
The Online DDL syntax is exactly the same as a normal ALTER statement, plus two additional clauses:
ALGORITHM:
- INPLACE: the change is made in place without rebuilding the entire table (in most cases, no copying of data to a temporary table is required).
- COPY: data is copied into a temporary table, the table is rebuilt and the secondary indexes are reconstructed (equivalent to the traditional method).
LOCK:
- NONE: read and write operations are allowed during the altering process.
- SHARED: only read operations are allowed during the altering process (DML is not allowed).
- EXCLUSIVE: the entire table is locked for both reading and writing (neither SELECT nor DML statements are allowed).
Online DDL is explained in detail in the online manual; check it out here for more information.
Example
Altering a table called "test.test2" by adding an index (name_idx) on column "name":
mysql> alter table test2
    -> add index name_idx (name),
    -> algorithm=inplace, lock=none;
Query OK, 0 rows affected (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 0

Limitations of Online DDL
- Works only with InnoDB (syntax-wise it can be used with other storage engines, like MyISAM, but only "algorithm=copy" is allowed there, which is equivalent to the traditional method).
- Regardless of the locking used (NONE, SHARED or EXCLUSIVE), a brief period at the beginning and at the end of the process requires an exclusive lock on the table.
- foreign_key_checks should be disabled when adding/dropping foreign keys to avoid the table-copying behavior (see the sketch after this list).
- Some ALTER operations still require table copying or table locking to make the change (the old behavior). For more details on which table changes require table copying or table locking, check out this manual page.
- LOCK=NONE is not allowed in the ALTER TABLE statement if there are ON ... CASCADE or ON ... SET NULL constraints on the table.
- While Online DDL is replicated on the slaves the same way as on the master (with LOCK=NONE no table locking takes place on the slaves during the ALTER execution), replication itself will be blocked, because the replay process executes in a single thread on the replicas, which causes slave lag.
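A minimal sketch of the foreign_key_checks point above (table, column and constraint names are made up for illustration):

mysql> SET foreign_key_checks = 0;
mysql> ALTER TABLE child
    ->   ADD CONSTRAINT fk_child_parent FOREIGN KEY (parent_id) REFERENCES parent (id),
    ->   ALGORITHM=INPLACE, LOCK=NONE;
mysql> SET foreign_key_checks = 1;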
The following is a comparison of Online DDL and pt-online-schema-change for some ALTER operations applied to a table containing 1,078,880 rows:
                                       Online DDL                        pt-online-schema-change
Change Operation                       Rows affected  Locked?  Time (s)  Rows affected  Locked?  Time (s)
Add Index                              0              No       3.76      All rows       No       38.12
Drop Index                             0              No       0.34      All rows       No       36.04
Add Column                             0              No       27.61     All rows       No       37.21
Rename Column                          0              No       0.06      All rows       No       34.16
Rename Column + change its data type   All rows       Yes      30.21     All rows       No       34.23
Drop Column                            0              No       22.41     All rows       No       31.57
Change table ENGINE                    All rows       Yes      25.30     All rows       No       35.54

Which method should be used?
Now the question is, which method should we use to perform alter table statements?
While pt-online-schema-change allows read and write operations on the table being altered, it still copies the table's data to a new table in the background, which adds overhead to the MySQL server. So basically, we should use pt-online-schema-change only when Online DDL will not work efficiently. In other words, if Online DDL would require copying data to a temporary table (algorithm=copy) and would block the table for a long time (lock=exclusive), or when altering huge tables in a replication environment, then we should use the pt-online-schema-change tool.
MySQL single query performance - the truth!
As suggested by morgo, I did a little test with the same query and the same data set mentioned in Impact of column types on MySQL JOIN performance, but looking at another dimension: time (a.k.a. MySQL versions).
The answer
To make it short: as a good consultant, the answer must be: "It depends!" :-)
The test
The query was again the following:
SELECT *
  FROM a
  JOIN b ON b.a_id = a.id
 WHERE a.id BETWEEN 10000 AND 15000;

The Query Execution Plan was the same for all tested releases.
The relevant MySQL variables were set as follows, where possible. Should I have considered the join buffer, or any other of those local per-session buffers (read_buffer_size, read_rnd_buffer_size, join_buffer_size)?
innodb_buffer_pool_size      = 768M
innodb_buffer_pool_instances = 1
innodb_file_per_table        = 1

The results

         mysql-4.0.30  mysql-4.1.25  mysql-5.0.96  mysql-5.1.73  mysql-5.5.35  mysql-5.6.15  mysql-5.7.3
AVG             40.86         38.68          3.71          4.69          4.64          7.22         6.05
MEDIAN          41.07         38.13          3.69          4.46          4.65          6.32         6.05
STDEV            1.51          2.26          0.06          0.34          0.03          2.21         0.03
MIN             39.27         36.99          3.67          4.40          4.59          6.26         6.02
MAX             44.11         44.45          3.86          5.23          4.67         13.16         6.10
COUNT           10.00         10.00         10.00         10.00         10.00         10.00        10.00
         mariadb-5.1.44  mariadb-5.2.10  mariadb-5.3.3  mariadb-5.5.34  mariadb-10.0.6
AVG                4.58            8.63           8.34            5.02            6.12
MEDIAN             4.58            7.97           8.01            5.02            6.01
STDEV              0.01            1.45           1.10            0.02            0.25
MIN                4.55            7.86           7.90            4.99            5.97
MAX                4.60           11.38          11.46            5.06            6.75
COUNT             10.00           10.00          10.00           10.00           10.00
         percona-5.0.92-23.85  percona-5.1.72-14.10  percona-5.5.34-32.0  percona-5.6.14-62.0
AVG                      3.79                  4.70                 4.94                10.53
MEDIAN                   3.79                  4.70                 4.89                12.41
STDEV                    0.02                  0.03                 0.14                 3.35
MIN                      3.76                  4.67                 4.86                 5.68
MAX                      3.83                  4.75                 5.34                12.93
COUNT                   10.00                 10.00                10.00                10.00
         galera-5.5.33-23.7.6 / 2.7
AVG                            4.31
MEDIAN                         3.98
STDEV                          1.18
MIN                            3.76
MAX                            8.54
COUNT                         30.00
The graph: [line chart of the above query times per version; image omitted]

Conclusion
Do not trust benchmarks. They are mostly worthless for your specific workload and pure marketing buzz... Including the one above! ;-)
Database vendors (Oracle/MySQL, Percona, MariaDB) are primarily focusing on throughput and features. In general this comes at the cost of single query performance.
MySQL users like Facebook, LinkedIn, Google, Wikipedia, Booking.com, Yahoo! etc. are more interested in throughput than in single query performance (so I assume). But most MySQL users (95%) do not have a throughput problem but a single query performance problem (I assume here that this is also true for Oracle, MS SQL Server, DB2, PostgreSQL, etc.).
So database vendors are not primarily producing for the masses but for some specific users/customers (who possibly pay a hell of a lot of money for this).
Back to the data:
My first hypothesis, "the old times were always better", is definitely not true. MySQL 4.0 and 4.1 sucked with this specific query. But since MySQL 5.0 the rough trend is: single query performance becomes worse over time (newer versions). I assume this is also true for other databases...
Claims like "we have the fastest MySQL" or "we have hired the whole optimizer team" do not necessarily translate into better single query performance. At least not for this specific query.
So in short: if you upgrade or side-grade (MySQL <-> Percona <-> MariaDB), always test very carefully! It is not predictable where the traps are. A newer MySQL release may or may not increase the performance of your application. Do not trust marketing buzz!
Artefacts
Some artefacts we have already found during this tiny test:
- In MySQL 5.0 an optimization was introduced (not in the optimizer!?!) that speeds up this specific query dramatically.
- MariaDB 5.2 and 5.3 were bad for this specific query.
- I have no clue why Galera Cluster has shown the best results for 5.5. It is no intention or manipulation! It is pure luck. But I like it! :-)
- MySQL 5.6 seems to have some problems with this query. Too much improvement done by Oracle/MySQL?
- Percona 5.6 sometimes behaves much better with this query than plain MySQL, but from time to time something kicks in which makes Percona dramatically slower, hence the bad results. I have no clue why. I first thought about an external influence, but I was able to reproduce this behaviour (once), so I assume it must be something Percona-internal (the AHI, for example?).
Do not shoot the messenger!
If you want to reproduce the results, most of the information about them is already published. If something is missing, please let me know.
Please let me know if you do not agree with the results, so I can expand my universe a bit...
It was fun doing these tests today! And MyEnv was a great help in doing this kind of tests!
If you want us to do such tests for you, please let us know. Our consulting team would be happy to assist you with upgrading or side-grading problems.
Impact of column types on MySQL JOIN performance
In our MySQL trainings and consulting engagements we always tell our customers to use the smallest possible data type to get better query performance, especially for JOIN columns. This advice is also supported by the MySQL documentation in the chapter Optimizing Data Types:
Use the most efficient (smallest) data types possible. MySQL has many specialized types that save disk space and memory. For example, use the smaller integer types if possible to get smaller tables. MEDIUMINT is often a better choice than INT because a MEDIUMINT column uses 25% less space.
I remember the JOIN columns being explicitly mentioned somewhere, but I cannot find it any more.
Test set-up
To get numbers we created a little test set-up:
CREATE TABLE `a` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(64) DEFAULT NULL,
  `ts` timestamp NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB CHARSET=latin1

CREATE TABLE `b` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(64) DEFAULT NULL,
  `ts` timestamp NOT NULL,
  `a_id` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Table a contains 1,048,576 rows, table b 16,777,216 rows.
The following query was used for the test:
EXPLAIN
SELECT *
  FROM a
  JOIN b ON b.a_id = a.id
 WHERE a.id BETWEEN 10000 AND 15000;

+----+-------------+-------+--------+---------------+---------+---------+-------------+----------+-------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref         | rows     | Extra       |
+----+-------------+-------+--------+---------------+---------+---------+-------------+----------+-------------+
|  1 | SIMPLE      | b     | ALL    | NULL          | NULL    | NULL    | NULL        | 16322446 | Using where |
|  1 | SIMPLE      | a     | eq_ref | PRIMARY       | PRIMARY | 4       | test.b.a_id |        1 | NULL        |
+----+-------------+-------+--------+---------------+---------+---------+-------------+----------+-------------+

And yes: I know this query could be made faster by putting an index on b.a_id, as sketched below.
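For illustration only (this index was deliberately left out of the test set-up, and the index name is made up):

ALTER TABLE b ADD INDEX a_id_idx (a_id);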
Results
The whole workload was executed completely in memory and was thus CPU bound (we did not want to measure the speed of our I/O system).
SE      JOIN column     Bytes  Query time  Gain               Space   Character set
InnoDB  MEDIUMINT       3      5.28 s      96% (4% faster)    75%
InnoDB  INT             4      5.48 s      100%               100%
InnoDB  BIGINT          8      5.65 s      107% (7% slower)   200%
InnoDB  NUMERIC(7, 2)   ~4     6.77 s      124% (24% slower)  ~100%
InnoDB  VARCHAR(7)      7-8    6.44 s      118% (18% slower)  ~200%   latin1
InnoDB  VARCHAR(16)     7-8    6.44 s      118% (18% slower)  ~200%   latin1
InnoDB  VARCHAR(32)     7-8    6.42 s      118% (18% slower)  ~200%   latin1
InnoDB  VARCHAR(128)    7-8    6.46 s      118% (18% slower)  ~200%   latin1
InnoDB  VARCHAR(256)    8-9    6.17 s      114% (14% slower)  ~225%   latin1
InnoDB  VARCHAR(16)     7-8    6.96 s      127% (27% slower)  ~200%   utf8
InnoDB  VARCHAR(128)    7-8    6.82 s      124% (24% slower)  ~200%   utf8
InnoDB  CHAR(16)        16     6.85 s      125% (25% slower)  400%    latin1
InnoDB  CHAR(128)       128    9.68 s      177% (77% slower)  3200%   latin1
InnoDB  TEXT            8-9    10.7 s      195% (95% slower)  ~225%   latin1
MyISAM  INT             4      3.16 s      58% (42% faster)
TokuDB  INT             4      4.52 s      82% (18% faster)

Some comments on the tests:
- MySQL 5.6.13 was used for most of the tests.
- TokuDB v7.1.0 was tested with MySQL 5.5.30.
- The optimistic cases were taken as results. In reality the results can be slightly worse.
- We did not take into consideration that bigger data types eventually cause more I/O, which is very slow!
MySQL Environment MyEnv 1.0.2 has been released
FromDual has the pleasure to announce the release of the new version 1.0.2 of its popular multi-instance MySQL Environment MyEnv.
You can download MyEnv from here.
In the inconceivable case that you find a bug in MyEnv please report it to our Bugtracker.
Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.
If you have questions about MyEnv we will answer them in the MyEnv forum.
Upgrade from 1.0.x to 1.0.2

# cd /home/mysql/product
# tar xf /tmp/myenv-1.0.2.tar.gz
# rm -f myenv
# ln -s myenv-1.0.2 myenv

Add the following lines to your ~/.bash_profile:

# BEGIN MyEnv
# Written by the MyEnv installMyEnv.php script.
. /etc/myenv/MYENV_BASE
MYENV_PWD=`pwd`
cd $MYENV_BASE/bin
. myenv.profile
up
cd $MYENV_PWD
# END MyEnv

Changes in MyEnv 1.0.2

MyEnv
- Missing [client] section in my.cnf should not lead to an error any more (Bug #95).
- profile.template removed because it is redundant to installer information.
- up removed from myenv.profile.
- Also root should be allowed to start MySQL (Bug #99).
- Status for OEM agent implemented (oem_agent.php). To activate this feature copy it from utl/oem_agent.php to plg/showMyEnvStatus/.
- Plug-in interface for showMyEnvStatus.php implemented.
- Debug information improved.
- Empty line after environment switch removed.
- A missing error line in myenv_start_stop.php was added.
- Return code in myenv_start_stop.php is nicer now.
- Difference between my.cnf and myenv.conf should be warning and not an error.
- Error messages made more precise.
- Made it more clear that MyEnv is distributed under GPL v2.
- Deleting the actual instance with MyEnv Installer should no longer confuse MyEnv (Bug #104).
- RPM packages for CentOS 6, Fedora 18 and 19, RHEL 5 and 6 are available.
- User experience during deleting an instance improved (Bug #96).
- Deleting a running instance is not allowed; much more verbose information about this is now given (Bug #89).
- Instance name black list added.
- Instances should be listed when change was chosen (Bug #93).
- Replaced multi-line readline output to fix crippled output in menu selection (Bug #103).
- alter_engine.pl should now work for socket and port, local and remote.
- Password is not exposed any more in alter_engine.pl.
- Query bench (query_bench.phpx) added for micro-benchmarks.
- Automated mysql_bman tests should all pass again.
Workbench starting/stopping multiple instance set-ups with myenv
Introduction
MySQL Workbench is a very good and free GUI tool provided by Oracle to manage MySQL administration and development tasks. Opening many MySQL connections (to the same or different instances, on remote or local MySQL servers) at the same time is one of its main features. While it works fine for performing SQL statements over the different connections opened for multiple instances, some people ask whether it is also possible to start and stop multiple MySQL instances using MySQL Workbench. If yes, how should it be configured to perform such a task? And does that conflict with the MyEnv tool, if it is installed?
Yes, MySQL Workbench can be configured to start and stop multiple MySQL instances (local or remote), and it does not conflict with the MyEnv tool.
In this article, I will describe how to configure MySQL Workbench to start and stop multiple MySQL instances, making use of MyEnv scripts for this purpose.
Prerequisites
System information and installed packages:
- Operating System: Ubuntu 12.04 (64 bit).
- MySQL Server: any MySQL version (I used MySQL 5.5 tarballs).
- Number of MySQL Instances: two instances are installed (mysql1 & mysql2).
- MySQL Workbench: version 6.0.
- MyEnv: version 1.0.1.
MyEnv is a set of scripts to comfortably run multiple MySQL, Percona Server or MariaDB database instances on the same server. You can even run multiple database instances with different binary versions. If you have MySQL multiple-instance set-ups, you really should try out MyEnv.
I will not go further into MyEnv's features and benefits here; rather, I'd like to mention that if you're using MyEnv and want to use MySQL Workbench at the same time, you will not face any conflict between the two, and you can manage your MySQL instances with either MyEnv or MySQL Workbench. Moreover, you can use MyEnv scripts to configure MySQL Workbench for starting/stopping multiple instances in an easy way!
For more information about the MyEnv tool, check it out on our website: myenv.
MySQL Workbench configuration
Add MySQL connections to MySQL Workbench
- Choose a connection name for the 1st instance "mysql1" and specify the connection string:
If you didn't adjust "Configure Server Management" in this step - at the bottom left of the previous screen - you can open MySQL connections and perform SQL queries normally on this instance, but you can neither edit the instance configuration parameters nor start/stop it. BTW, you can adjust it at any time later, and that is what I did in this example.
- Add another connection for the 2nd instance "mysql2", the same as for "mysql1".
To configure MySQL Workbench to start/stop an instance, we need suitable start and stop commands, because Workbench simply executes them as they would be executed in the system shell. In this case, we can take advantage of the MyEnv scripts for that purpose, using the following command:
$MYENV_BASE/bin/database.php $MYENV_DATABASE start|stop

where $MYENV_BASE is the MyEnv basedir ("/opt/myenv" in this example) and $MYENV_DATABASE is the instance name in MyEnv (mysqld1 & mysqld2 for mysql1 & mysql2 respectively in this example).
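Plugged together for the two instances of this example, the start/stop commands Workbench would execute look like this:

/opt/myenv/bin/database.php mysqld1 start
/opt/myenv/bin/database.php mysqld1 stop
/opt/myenv/bin/database.php mysqld2 start
/opt/myenv/bin/database.php mysqld2 stop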
Now we can use the following window to modify the system type, configuration file path, and the start, stop and status commands to match each instance's configuration:
Now MySQL Workbench should be able to start and stop the configured MySQL instances.
Notes:
- MyEnv doesn't allow any user except the mysql user to start a MySQL instance (not even the root user), so the mysql OS user should be used to execute those commands, and you might need to assign it sudo permissions (see the sketch after these notes).
- The Start/Stop Server button in MySQL Workbench depends on the output of the configured status command ("cat /opt/mysql1/data/*.pid 2>/dev/null" for checking the mysql1 instance status), and the button label changes to execute the appropriate command accordingly (for example, if the instance is not running, the label changes to "Start Server" and the start command is executed when the button is clicked, and vice versa). So you should make sure that this command returns the expected results, otherwise Workbench won't be able to manage the instance.
- It doesn't matter how MySQL was installed (RPM, tarballs or from source), the concept is the same: you just need to provide the MyEnv start and stop commands along with the status command, and MySQL Workbench will work.
- If MyEnv is not installed, you can also configure MySQL Workbench to start/stop multiple instances by providing normal start/stop instance commands in the "Manage Server Connections" window, but those commands need to be prepared first.
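A minimal sketch of how the start command could be wrapped so that it runs as the mysql user (the sudoers rule and the OS user name "wbuser" are assumptions; adapt them to your security policy):

# added via visudo, allowing wbuser to run the MyEnv script as mysql:
#   wbuser ALL=(mysql) NOPASSWD: /opt/myenv/bin/database.php
sudo -u mysql /opt/myenv/bin/database.php mysqld1 start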
Galera Cluster 3.1 GA is out!
Great News: Galera Cluster v3.1 GA for MySQL 5.6 was released at Percona Live London (PLUK) 2013. The information is still a bit hidden...
You can find it here:
- The Plug-in: https://launchpad.net/galera/3.x/25.3.1
- The MySQL: https://launchpad.net/codership-mysql/5.6/5.6.14-25.1
Or directly on our download page.
Careful: An online upgrade from 5.5 to 5.6 will not work yet. We have to find a work-around...
Xtrabackup in a nutshell
Introduction
No one can deny that one of the most important daily tasks for DBAs is performing backup and restore operations. We need them not only when we want to add a new replication slave, implement disaster recovery procedures, or prepare a testing or staging server for a running production system; even when we're going to change the database schema in order to enhance performance, it's recommended to have a fresh backup copy before making any live changes. So if backup and restore operations cannot be handled smoothly, we're going to face many troubles in our daily work. And whenever backup and restore operations come up, the Xtrabackup tool is sure to be mentioned.
Xtrabackup is a free open source tool developed by Percona to perform physical backup and restore operations. It is much faster than performing logical backup and restore using the MySQL utilities (mysqldump and mysql), and it has many other advantages.
Xtrabackup has many useful options and features, but in this article I'll only go through how to use the tool to perform simple full, incremental and partial backups and restores, the advantages and disadvantages of each method, and some important tips.
For more information about Xtrabackup tool, you can browse the manual document from here.
Prerequisites
- MySQL server installed.
- Download the xtrabackup tool.
- Install it as explained in the manual document.
Full Backup
If you want a full backup of your entire database system with the shortest/fastest backup and restore time, this method will be very useful for you. Compared to a full logical database backup using the mysqldump and mysql utilities (a very long time to back up and more than double that time to restore), taking a full physical backup using the Xtrabackup tool will make your life much easier.
Below are the steps needed to make a full physical database backup using the Xtrabackup tool:
Create Backup
A simple Xtrabackup command to back up all databases looks like this:
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp /backup/dir/path/full-backup
.
.
.
innobackupex: completed OK!

A timestamped folder (e.g. "2013-11-06_00-00-00") would have been created to contain all backup files if we hadn't used the option "--no-timestamp" in the above command (I didn't use timestamped folders here just to keep the names simple for the reader; otherwise, they are very useful when using automated backup scripts).
The Xtrabackup tool has now created the backup files under the folder "full-backup", plus some extra files like "xtrabackup_checkpoints", which contains information (useful for incremental backups) like:
- backup_type = full-backuped : which indicates the backup type "full backup".
- from_lsn = 0 : which indicates the log sequence number where the backup process started from (0 means from the beginning).
- to_lsn = 3768762 : which indicates the log sequence number where the backup process ended at.
Another important file is "xtrabackup_binlog_info", which is very useful in replication setups:

[root@ ~]# cat xtrabackup_binlog_info
mysql-bin.000027      191

Where:
- mysql-bin.000027: the binary log file name on the master when the backup was created.
- 191: the binary log position of the backup.
The backed-up files are not yet ready to be restored; we must first prepare the backup, as follows:

[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log /backup/dir/path/full-backup
.
.
.
innobackupex: completed OK!

Now the full backup is ready to be restored ...
Restore Full Backup
To get the full backup restored, the MySQL instance should be stopped first, and then one of the following two procedures should be used:
- Using the copy-back option:

[root@ ~]# innobackupex --user=db_user --password='db_password' --copy-back /backup/dir/path/full-backup
.
.
.
innobackupex: completed OK!

In this method, the Xtrabackup tool copies all files under the backup folder (full-backup) to the MySQL datadir, which must be specified in the my.cnf file; otherwise, the tool wouldn't know where the datadir is located.
- Using the operating system copy or move commands:
If you don't want to keep the backup files on your local system (because you have another copy on an external server), the move command is a very fast way to get your backup restored:
[root@ ~]# mv /backup/dir/path/full-backup /var/lib/mysql
Since the user who moves/copies the files into the MySQL datadir is not the "mysql" user, you should make sure that the mysql user has the right permissions on its datadir; also, the path "/var/lib/mysql" should be replaced with the actual MySQL datadir if it's set to a different path.
After moving/copying the backup files into the MySQL datadir, you are free to start the MySQL instance again.
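For example, a minimal sketch of fixing the ownership and starting the instance again (the service name is an assumption; it depends on your distribution and installation method):

[root@ ~]# chown -R mysql:mysql /var/lib/mysql
[root@ ~]# service mysql start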
Prepare slave from full backup
Preparing a slave using Xtrabackup is a pretty easy and straightforward process:
- Restore the full backup as explained above.
- Check the binary log information of the backup:

[root@ ~]# cat xtrabackup_binlog_info
mysql-bin.000027      191

- Execute the CHANGE MASTER TO command using the above info and start the slave:

SQL> CHANGE MASTER TO
  ->   MASTER_HOST='master_ip',
  ->   MASTER_PORT=master_port,
  ->   MASTER_USER='slave_user_name',
  ->   MASTER_PASSWORD='slave_password',
  ->   MASTER_LOG_FILE='mysql-bin.000027', ## taken from xtrabackup_binlog_info
  ->   MASTER_LOG_POS=191;                 ## taken from xtrabackup_binlog_info
SQL> START SLAVE;
For more information on how to set up MySQL Replication, check out this nice manual link.
Prepare GTID slave from full backup
GTID is supported in Xtrabackup starting from version 2.1.0. To restore a GTID slave server, GTID_MODE should be enabled on the master server before creating its backup; otherwise, no GTID values will be included in the "xtrabackup_binlog_info" file under the backup directory. The following steps should then be done:
- Restore the full backup normally as explained above.
- Check the GTID value of the backup:
[root@ ~]# cat xtrabackup_binlog_info
mysql-bin.000027      191      b9b4712a-df64-11e3-b391-60672090eb04:1-12

- Set the variable GTID_PURGED to the GTID value of the backup:

SQL> SET GLOBAL GTID_PURGED="b9b4712a-df64-11e3-b391-60672090eb04:1-12";

- Execute the auto-position CHANGE MASTER TO command and start the slave:

SQL> CHANGE MASTER TO
  ->   MASTER_HOST='master_ip',
  ->   MASTER_PORT=master_port,
  ->   MASTER_USER='slave_user_name',
  ->   MASTER_PASSWORD='slave_password',
  ->   MASTER_AUTO_POSITION = 1;
SQL> START SLAVE;
For more information on how to set up Transaction-based Replication in MySQL, check out this link.
Advantages / Disadvantages
- Advantages:
- Fast, simple and easy way to get your full database backed up and restored.
- All Xtrabackup tool features (like streaming: move the backed up files directly to a remote server) are supported in the full backup method.
- Simple way to introduce a new slave to the master.
- Disadvantages:
- We have to replace the entire MySQL datadir with the new one (in other words, the datadir folder should be emptied/removed before the restore process).
- We can't extract a single database or a single table from the whole backup (unless it's a MyISAM table), which means you have to take it all or leave it all.
Hints:
- The message "innobackupex: completed OK!" should be printed at the end of every Xtrabackup command; otherwise, the command (backup, prepare or restore) did not complete successfully.
- The size of the ib_logfile files should be the same on both source and destination servers. If not, you have to either remove them from the backup folder (which will be restored) before starting the MySQL instance (MySQL will then create new ones for you), or change the size of those files in the destination server's configuration file to match the size in the backup before starting the MySQL instance.
- The MySQL user used by the Xtrabackup tool should have at least the following privileges: RELOAD, LOCK TABLES and REPLICATION CLIENT (see the sketch after this list).
- To prepare a new slave from another slave, just add the two options ("--slave-info" & "--safe-slave-backup") to the backup command and use the information in the file "xtrabackup_slave_info" under the backup folder to issue the "CHANGE MASTER TO" command on the new slave after finishing the restore.
- To accelerate the preparation process of your backup, add the option "--use-memory" to the prepare command in order to allocate more memory (Xtrabackup will use the specified amount as an internal innodb_buffer_pool_size for the prepare process), for example:

[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --use-memory=512M /backup/dir/path/full-backup
.
.
.
innobackupex: completed OK!

- The preparation process consists of two steps: replaying the committed transactions and rolling back the uncommitted transactions. Using only the --apply-log option in the preparation command will do both steps for you.
- The backup folder "/backup/dir/path/full-backup" SHOULD NOT be created before executing the backup command, because Xtrabackup creates that folder for you and will fail to continue if the folder already exists.
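A minimal sketch of creating such a backup user with exactly those privileges (user name, host and password are made up for illustration):

mysql> CREATE USER 'db_user'@'localhost' IDENTIFIED BY 'db_password';
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'db_user'@'localhost';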
Incremental Backup
When you have a very large database system, you need large enough storage to store your database backups, and if you want to perform a daily backup your mission becomes even more difficult. In such cases, the incremental database backup method is very useful: it stores only the database changes (the delta) made after a physical full backup, with minimal storage space required and in a fast way. Hence, you can perform daily backup operations on your database system without needing large storage.
The following steps describe a simple way to perform a physical incremental database backup using the Xtrabackup tool:
Create Incremental Backup
To perform an incremental backup, we first perform a full backup, the same way as in the previous section, to serve as the base backup for the upcoming incremental backups.
Creating the full backup (Base Backup):
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp /backup/dir/path/full-backup
.
.
.
innobackupex: completed OK!

The "xtrabackup_checkpoints" file contents will be something like:
- backup_type = full-backuped : which indicates the backup type "full backup".
- from_lsn = 0 : which indicates the log sequence number where the backup process started from (0 means from the beginning).
- to_lsn = 3768762 : which indicates the log sequence number where the backup process ended at.
Creating the first incremental backup:
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --incremental /backup/dir/path/inc1 --incremental-basedir=/backup/dir/path/full-backup
.
.
.
innobackupex: completed OK!

We told the Xtrabackup tool to perform an incremental backup by adding the option "--incremental", and by specifying the full backup path as the basedir we told it from which backup it should start tracking the database changes.
The "xtrabackup_checkpoints" file contents will be something like:
- backup_type = incremental : which indicates the backup type "incremental backup".
- from_lsn = 3768762 : which indicates the log sequence number where the backup process started from (the same LSN as the previous full backup ended at).
- to_lsn = 4908762 : which indicates the log sequence number where the backup process ended at.
Creating the second incremental backup:
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --incremental /backup/dir/path/inc2 --incremental-basedir=/backup/dir/path/inc1
.
.
.
innobackupex: completed OK!

We told the Xtrabackup tool to perform an incremental backup by adding the option "--incremental", and by specifying the 1st incremental backup path as the basedir we told it to track the database changes since the last incremental backup (not since the full backup).
The "xtrabackup_checkpoints" file contents will be something like:
- backup_type = incremental : which indicates the backup type "incremental backup".
- from_lsn = 4908762 : which indicates the log sequence number where the backup process started from (the same LSN as the 1st incremental backup ended at).
- to_lsn = 6508762 : which indicates the log sequence number where the backup process ended at.
Note: we can create as many incremental backups as we want using the same procedure, as the sketch below shows.
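For example, a minimal sketch of a daily incremental backup script along these lines (paths, credentials and the date-based folder naming are assumptions for illustration):

#!/bin/bash
# Take today's incremental backup, based on the newest previous backup
# (the full backup on the first day, the latest incremental afterwards).
BASE=/backup/dir/path
PREV=$(ls -1d $BASE/full-backup $BASE/inc-* 2>/dev/null | tail -1)
TODAY=$BASE/inc-$(date +%F)
innobackupex --user=db_user --password='db_password' --no-timestamp \
  --incremental $TODAY --incremental-basedir=$PREV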
Prepare Incremental Backup
As mentioned earlier in the article, the preparation process consists of two steps (replaying the committed transactions and rolling back the uncommitted transactions), and using only the --apply-log option will do both of them (as we did for the full backup). For incremental backups, however, we MUST perform them separately, as follows:
- Replay the committed transactions on the base backup (by adding the option "--redo-only"):

[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup
.
.
.
innobackupex: completed OK!
- Replay the committed transactions on the 1st incremental backup:
[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup --incremental-dir=/backup/dir/path/inc1
.
.
.
innobackupex: completed OK!
Note: we specified the full backup folder here, because the step of replaying the committed transactions appends all changes from the incremental backup to the full backup.
- Replay the committed transactions on the 2nd incremental backup:
[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --redo-only /backup/dir/path/full-backup --incremental-dir=/backup/dir/path/inc2
.
.
.
innobackupex: completed OK!
Note: here we didn't use the 1st incremental backup folder, because all changes in the 1st incremental backup were already appended to the full backup in the previous step.
- Finally, roll back all uncommitted transactions:
[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log /backup/dir/path/full-backup
.
.
.
innobackupex: completed OK!
Note: as the full backup folder now contains all data (full + 1st & 2nd incremental), there is no need to do this step on the incremental backup folders.
Now, the incremental backup is ready to be restored ...
Restore Incremental Backup
The full backup folder is the only folder that needs to be restored (the incremental backup folders are not needed), as it contains all data after the changes from all incremental backups have been appended. We can restore it the same way we restored the full backup.
Advantages / Disadvantages
- Advantages:
- Less storage resources needed.
- Faster than the full backup.
- Disadvantages:
In addition to the disadvantages of the full backup, there are other ones:
- A more complicated and harder process to implement compared to the full backup.
- The incremental backup strategy is based on the log sequence number, which covers only the XtraDB and InnoDB storage engines; the others (like MyISAM) are backed up completely (changed + unchanged data) in each incremental backup.
- If we have many incremental backups, appending all of them may consume time and can be confusing as well.
- If one of the incremental backups becomes corrupted or unavailable for any reason, we will not be able to append it, or any incremental backup taken after it, to the full backup.
Hints:
- The backup preparation steps above MUST be followed in the given order.
- If the "--redo-only" option is not used in any of the preparation steps (except the final one), all upcoming incremental backups will be useless, as we won't be able to append them to the base backup any more.
- The steps that replay the committed transactions take all incremental data and append it to the full backup; therefore, the rollback of the uncommitted transactions should be executed only on the full backup (as it already contains the whole data).
- For incremental backups, Xtrabackup generates two files for every table, ".delta" & ".meta" (e.g. test.ibd.delta & test.ibd.meta); the delta file size reflects the changes applied to that table since the last incremental backup.
- The preparation time of an individual incremental backup depends on how much data has changed since the last incremental backup.
- The preparation time for the full backup is, in most cases, really small compared to the incremental ones, because full backups apply only the redo logs while incremental backups apply the deltas plus the redo logs. So if the delta files are big, the preparation process takes longer.
- Full backups are recommended over incremental backups if many changes are applied to the DB, while incremental backups are recommended when only a few changes are applied to the DB.
Differential Backup
We can use the incremental backup strategy to perform differential backups, but we should consider the following points:
- We always specify the full backup folder as the base backup (whereas for an incremental backup we specify the previous incremental folder as the base backup); see the sample command after this list.
- All incremental backups taken between a differential backup and the full backup MUST be ignored when preparing the backup files, because the differential backup already contains all changes since the last full backup.
- In the backup preparation process, we should treat the last differential backup as the first incremental backup; all incremental backups taken after it can be applied normally.
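A minimal sketch of taking such a differential backup (the folder name "diff1" is made up; note that --incremental-basedir always points at the full backup, not at the previous incremental backup):

[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --incremental /backup/dir/path/diff1 --incremental-basedir=/backup/dir/path/full-backup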
Note: having differential backups in between incremental backups is useful for several reasons:
- Differential backups reduce the backup preparation steps/time needed, because a differential backup replaces all of its previous incremental backups.
- Differential backups reduce the chance of losing the incremental backups when a corrupted incremental backup sits in the middle, because in that case the differential backup acts as a backup of the previous incremental backups.
Partial Backup
Unlike with MyISAM, a physical backup of a single database or table is not possible if the table engine is InnoDB. But by using the partial database backup method of the Xtrabackup tool, it becomes possible to take a physical backup of InnoDB tables just like MyISAM ones (with some restrictions).
The following steps describe how to perform a partial database backup using the Xtrabackup tool:
Create Partial Backup
A simple Xtrabackup command to back up some databases (or tables) looks like this:
[root@ ~]# innobackupex --user=db_user --password='db_password' --no-timestamp --databases="db1 db2 db3.tbl1" /backup/dir/path/partial-backup
.
.
.
innobackupex: completed OK!

Prepare Partial Backup
As with the other backup methods, the backed-up files are not ready for use until we prepare them, this time adding the "--export" option, as follows:

[root@ ~]# innobackupex --user=db_user --password='db_password' --apply-log --export /backup/dir/path/partial-backup
.
.
.
innobackupex: completed OK!

Some errors regarding the InnoDB tables not included in the backup may appear, but that's fine. There will also be a notification about the creation of an ".exp" file for each table; these files will be used in the import (restore) process.
Now, the partial backup is ready to be restored ...
Restore Partial Backup
The restore process for a partial backup is quite different from the full and incremental backups.
To restore a partial backup, the following steps should be followed:
- Unlike the other methods (Full and Incremental backups), MySQL instance on the destination server shouldn't be stopped because we will have to execute some SQL commands.
- On the destination server, we should create new tables (as many as we have in the partial backup or as we will restore) with the same structure like the one in the partial backup and then discard its table space: mysql> CREATE TABLE db.tbl1 (...)ENGINE=INNODB; mysql> ALTER TABLE db.tbl1 DISCARD TABLESPACE;
- Copy “.ibd” and “.exp” files for each table into the corresponding DB directory then assign the right permissions to mysql user: [root@ ~]# cp /backup/dir/path/partial-backup/db/tbl1.ibd /var/lib/mysql/db [root@ ~]# cp /backup/dir/path/partial-backup/db/tbl1.exp /var/lib/mysql/db [root@ ~]# chown -R mysql:mysql /var/lib/mysql/db
- Now we should tell MySQL to use the new table spaces: mysql> ALTER TABLE db.tbl1 IMPORT TABLESPACE;
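Since the file copies and the ALTER statements have to be repeated for every restored table, a small shell script helps when the backup contains many tables. Below is a minimal sketch, assuming a hypothetical table list, the default datadir, that the CREATE TABLE statements were already executed, and that MySQL credentials are available (e.g. via ~/.my.cnf):
#!/bin/bash
# Minimal sketch: import several tables from a partial backup.
# Hypothetical table list and default paths; adjust to your setup.
BACKUP=/backup/dir/path/partial-backup
DATADIR=/var/lib/mysql
DB=db
for TBL in tbl1 tbl2 tbl3; do
    # the corresponding CREATE TABLE must have been executed already
    mysql -e "ALTER TABLE ${DB}.${TBL} DISCARD TABLESPACE;"
    cp ${BACKUP}/${DB}/${TBL}.ibd ${DATADIR}/${DB}/
    cp ${BACKUP}/${DB}/${TBL}.exp ${DATADIR}/${DB}/
    chown mysql:mysql ${DATADIR}/${DB}/${TBL}.ibd ${DATADIR}/${DB}/${TBL}.exp
    mysql -e "ALTER TABLE ${DB}.${TBL} IMPORT TABLESPACE;"
done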
- Advantages:
- Although it's a complicated process, it allows us to back up and restore individual InnoDB tables the same way as MyISAM ones.
- Useful when we have huge InnoDB tables and want to back up/restore only those.
- Disadvantages:
- The streaming feature is not available for partial backups.
- Restoring/importing individual tables or databases from a partial backup is not possible unless the destination server is Percona Server.
- In addition to restoring the files (copy back), three SQL statements have to be executed for each table (table creation + two ALTER statements) to get them ready for use, which means that restoring a partial backup containing many tables is a very tedious job unless we create a special script for it (a minimal sketch of such a script is shown after the restore steps above).
- Although we didn't remove the MySQL datadir before the restore process (unlike with full and incremental backups) and kept the MySQL instance running, we could also restore the partial backup the same way (remove the datadir contents and copy/move the backup files into the datadir). But we should take into consideration that we would then have only the backed up databases/tables; all other databases/tables (not included in the partial backup) would be missing.
- The "innodb_file_per_table" server option must be enabled (on both source and destination servers).
- The "innodb_expand_import" option must be enabled on the destination server; it is available only in Percona Server, which explains why we can restore a partial backup on Percona Server only (see the sketch after this list).
- Besides the "--databases" option, two other alternative options can be used for the same purpose, but we must provide each table in the fully qualified naming format:
- --include='db.tbl'.
- --tables-file=/path/to/file.txt ==> in that file, we can list multiple tables, one per line, in the fully qualified naming format.
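To make the requirements above concrete, here is a minimal sketch of the server options involved and, for illustration, a hypothetical --tables-file (file name and table names are made up):
#my.cnf
[mysqld]
innodb_file_per_table = 1  # required on both source and destination servers
innodb_expand_import  = 1  # destination only; available in Percona Server
#/path/to/file.txt, passed to --tables-file (one fully qualified table per line)
db1.tbl1
db3.tbl1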
Now you can use the Xtrabackup tool to perform full, incremental and partial database backups. You can decide which method(s) suit you according to the advantages and disadvantages of each one, and by considering the important hints for each method, you can perform your backups efficiently.
I hope you found this article useful and that it made you familiar with this wonderful tool.
MySQL Environment MyEnv 1.0.1 has been released
FromDual has the pleasure to announce the release of the new version 1.0.1 of its popular multi-instance MySQL Environment MyEnv.
You can download MyEnv from here.
In the inconceivable case that you find a bug in MyEnv please report it to our Bugtracker.
Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.
If you have questions about MyEnv we will answer them in the MyEnv forum.
Upgrade from 1.0 or 1.0.x to 1.0.y
# cd /home/mysql/product
# tar xf /tmp/myenv-1.0.y.tar.gz
# rm -f myenv
# ln -s myenv-1.0.y myenv
Changes in MyEnv 1.0.1
MyEnv
- New alias v introduced for echo $MYENV_VERSION.
- Function start was extended by --option. This allows you to start a Galera cluster node as follows: start --wsrep-cluster-address="gcomm://" (Bug #58).
- Requirements for Ubuntu in README added.
- A mismatch between my.cnf and myenv.conf, or between the [mysqld] and [client] sections in my.cnf, is checked and reported (Bug #71).
- Before an instance is started, MyEnv checks whether the file mysql/user.frm exists, in case the installation step was missed (Bug #82).
- A non-default PID file name is handled properly (Bug #64).
- Starting an instance will also report SUCCESS or ERROR.
- Instance check mysqladmin ping was improved by adding a timeout (Bug #56).
- User to start MySQL instance is checked (Bug #69).
- The scripts start and stop are much more robust (Bug #66).
- Internal sequence of checks during start and stop has been changed.
- Package recommendation for SLES/OpenSuSE completed.
- Fixes around the POSIX functions for SLES/OpenSuSE.
- Missing signal constants are faked for SLES/OpenSuSE, and error messages are more verbose.
- Check for missing POSIX features added (Bug #59).
- MyEnv start-up and check sequence changed.
- Deleting a non-existing instance is handled properly.
- An empty basedir is handled properly.
- Installation on /etc/my.cnf is handled properly (Bug #62).
- Installation of my.cnf is redesigned, prepared for future features and error handling is improved.
- mysql_install_db is also searched in bin path for NDB (MySQL Cluster) set-ups.
- The installer checks the right location to decide whether an instance is already installed (Bug #83).
- The installer should not be started as the root user; a warning is thrown (Bug #61).
- The installer suggests an additional socket name (mysql-instance.sock); the whole installation process was broken up into more pieces for future improvements, and test cases were added (Bug #60).
- Delete-instance error messages were made clearer (Bug #89).
No changes in this release.
MySQL Backup Manager
No changes in this release.
Upgrade from Galera Cluster 2.x to 3.0
- Introduction
- Prerequisites
- Upgrade the first node
- Rolling upgrade the other nodes
- Get rid of old release option
Introduction
Codership announced the new Galera Cluster release 3.0 a few weeks ago, bringing many bug fixes and performance enhancements plus its main purpose: working with MySQL 5.6. In this article, I'll go through the upgrade steps from Galera 2.x to the new release 3.0. But at the time of writing this article - as mentioned in the Codership release notes - THIS IS A BETA QUALITY RELEASE FOR TESTING PURPOSES. NOT RECOMMENDED FOR PRODUCTION YET.
Important note: a new Galera version (3.1) will be available soon for production; it will be INCOMPATIBLE with this beta version (3.0) but still compatible with 2.x. So again, DO NOT go to production with 3.0; postpone the production upgrade until 3.1 becomes available.
Prerequisites
System information
The cluster system information is as follows:
- Operating System: CentOS release 6.4 (64 bit)
- Cluster system consists of 3 cluster nodes (192.168.1.251 "gcservera", 192.168.1.252 "gcserverb" & 192.168.1.253 "gcserverc")
The following are the packages installed on the three cluster nodes:
- MySQL version: mysql-5.5.33_wsrep_23.7.6 (RPM)
- Galera provider version: galera-23.2.7 (RPM)
The following are the needed packages to be installed:
- MySQL 5.6 + Galera plugin: Could be downloaded from here.
- Galera provider 3.0: Could be downloaded from here.
Upgrade the first node
To upgrade the installed binaries, you have to stop mysqld first.
Important: If you are using a load balancer in your cluster system, you should take the node in question out of the load balancer before stopping mysqld.
[root@gcservera ~]# /etc/init.d/mysql stop
And then, upgrade the binaries:
[root@gcservera ~]# rpm -qa|grep MySQL
MySQL-server-5.5.33_wsrep_23.7.6-1.rhel6.x86_64
MySQL-client-5.5.34-1.el6.x86_64
[root@gcservera ~]# rpm -e MySQL-server-5.5.33_wsrep_23.7.6-1.rhel6.x86_64
[root@gcservera ~]# rpm -ivh /downloads/MySQL-server-5.6.13_wsrep_24.0-1.rhel6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:MySQL-server           ########################################### [100%]
[root@gcservera ~]# rpm -U /downloads/galera-24.3.0-1.rhel6.x86_64.rpm
Upgrade mysql schema
It's recommended to perform the mysql schema upgrade before joining the cluster, so we need to start the node first as a standalone instance by disabling the provider option:
#my.cnf
[mysqld]
.
.
#wsrep_provider=/usr/lib64/galera/libgalera_smm.so
Then start the instance and perform the upgrade using the mysql_upgrade utility:
[root@gcservera ~]# /etc/init.d/mysql start
Starting MySQL........... SUCCESS!
[root@gcservera ~]# mysql_upgrade
Prepare the node to join the cluster
All running nodes (the second and third ones) are still using the old Galera version (2.7) at the moment, so the backward compatibility option (wsrep_provider_options="socket.checksum=1") MUST be added to this node's configuration in order to join the cluster; otherwise, it will fail to join.
Also, don't forget to re-enable the provider option:
#my.cnf
[mysqld]
.
.
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="socket.checksum=1"
Joining the cluster
Now we are ready to join the cluster by restarting the node:
[root@gcservera ~]# /etc/init.d/mysql restart
Stopping MySQL........... SUCCESS!
Starting MySQL........... SUCCESS!
We can check the new version as follows:
mysql> show global variables like '%version%';
+-------------------------+------------------------------------------------+
| Variable_name           | Value                                          |
+-------------------------+------------------------------------------------+
| innodb_version          | 5.6.13                                         |
| protocol_version        | 10                                             |
| slave_type_conversions  |                                                |
| version                 | 5.6.13-log                                     |
| version_comment         | MySQL Community Server (GPL), wsrep_24.0.r3937 |
| version_compile_machine | x86_64                                         |
| version_compile_os      | Linux                                          |
+-------------------------+------------------------------------------------+
7 rows in set (0.05 sec)
And the cluster status as well:
mysql> show global status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | ac53dc1e-3aff-11e3-b970-eb7044f6dc77 |
.
.
| wsrep_local_state_comment  | Synced                               |
.
.
| wsrep_cluster_size         | 3                                    |
| wsrep_cluster_state_uuid   | ac53dc1e-3aff-11e3-b970-eb7044f6dc77 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 2                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy                         |
| wsrep_provider_version     | 24.3.0(r159)                         |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
43 rows in set (0.13 sec)
Important: if you are using a load balancer in the cluster, now is the time to add this node back to it.
Rolling upgrade the other nodes
You can perform the rolling upgrade of the other nodes the same way as for the first one, but only after making sure that the state of all nodes (wsrep_local_state_comment) is Synced. If you stop a node while it is in the Donor state, both the donor and the joiner nodes might crash, so make sure of that first.
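Before stopping each node for the upgrade, its state can be verified as follows (the Synced value here mirrors the status output shown above):
mysql> show global status like 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+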
Get rid of the old release option
After upgrading all the cluster nodes, the backward compatibility option (wsrep_provider_options="socket.checksum=1") is not needed anymore; removing it from the configuration files and performing a rolling restart of all nodes will complete the mission.
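As a sketch of that final step (node name and paths as in the earlier examples), each node's configuration keeps the provider line while the compatibility option is removed or commented out before the node's restart:
#my.cnf
[mysqld]
.
.
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
#wsrep_provider_options="socket.checksum=1"  <== remove this line once all nodes run 3.0
[root@gcservera ~]# /etc/init.d/mysql restart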
Have fun with the new MySQL and Galera releases :)
Taxonomy upgrade extras: upgradegalera
Murphy’s Law is also valid for Galera Cluster for MySQL
We had a Galera Cluster support case recently. The customer was drenched in tears because his Galera Cluster did not work any more and he could not get it running again.
Upsss! What has happened?
A bit of background on this case: the customer wanted to do a rolling restart of the Galera Cluster under load because of an operating system upgrade which required a reboot of the system.
Let's have a look at the MySQL error log to see what was going on. The customer started the rolling restart with NodeC:
12:20:42 NodeC: normal shutdown --> Group 2/2
12:20:46 NodeC: shutdown complete
12:22:09 NodeC: started
12:22:15 NodeC: start replication
12:22:16 NodeC: CLOSED -> OPEN
12:22:16 all  : Group 2/3 component all PRIMARY
12:22:17 NodeC: Gap in state sequence. Need state transfer.
12:22:18 all  : Node 1 (NodeC) requested state transfer from '*any*'. Selected 0 (NodeB)(SYNCED) as donor.
12:22:18 NodeB: Shifting SYNCED -> DONOR/DESYNCED (TO: 660966498)
12:22:19 NodeC: Shifting PRIMARY -> JOINER (TO: 660966498)
12:22:19 NodeC: Receiving IST: 14761 writesets, seqnos 660951493-660966254
12:22:21 NodeC: 0 (NodeB): State transfer to 1 (NodeC) complete.
Everything went fine so far: NodeC came up again and did an IST as expected. But then the first operational error happened: the customer did not wait with rebooting NodeB until NodeC had completely recovered. It seems NodeC took some time for the IST recovery. This should be checked on all nodes with SHOW GLOBAL STATUS LIKE 'wsrep%';...
12:22:21 NodeC: Member 0 (NodeB) synced with group.
12:22:21 NodeB: Shifting JOINED -> SYNCED (TO: 660966845)
12:22:21 NodeB: Synchronized with group, ready for connections --> NodeC seems not to be ready yet!
12:23:21 NodeB: Normal shutdown
12:23:21 all  : Group 1/2
12:23:21 NodeC: Aborted (core dumped)
And now Murphy was already acting for the first time: we hit a situation in the Galera Cluster which is not covered as expected. Now we have 2 out of 3 nodes not working. As a result the cluster loses its quorum (non-Primary, more than 50% of the nodes disappeared) and does not reply to any SQL queries any more. This is a bug, because both nodes left the cluster gracefully; the third node should have stayed Primary:
12:23:21 NodeB: Received SELF-LEAVE. Closing connection.
12:23:23 NodeB: Shifting CLOSED -> DESTROYED (TO: 660973981)
12:23:25 NodeB: Shutdown complete
12:23:29 NodeC: mysqld_safe WSREP: sleeping 15 seconds before restart
12:23:37 NodeA: Received NON-PRIMARY.
12:23:44 NodeC: mysqld_safe mysqld restarted
12:23:48 NodeC: Shifting CLOSED -> OPEN
12:23:48 NodeC: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 2
12:23:48 NodeC: Received NON-PRIMARY.
12:23:48 NodeA: New COMPONENT: primary = no, bootstrap = no, my_idx = 1, memb_num = 2
12:23:48 NodeA: Received NON-PRIMARY.
12:24:30 NodeB: mysqld_safe Starting mysqld daemon
12:24:36 NodeB: Start replication
12:24:37 NodeB: Received NON-PRIMARY.
As a result the customer decided to shut down the whole cluster, which was not necessary but is an acceptable approach:
12:27:55 NodeB: /usr/sbin/mysqld: Normal shutdown
12:27:58 NodeB: /usr/sbin/mysqld: Shutdown complete
12:28:14 NodeA: /usr/sbin/mysqld: Normal shutdown
12:28:19 NodeA: /usr/sbin/mysqld: Shutdown complete
12:31:45 NodeC: /usr/sbin/mysqld: Normal shutdown
12:31:49 NodeC: /usr/sbin/mysqld: Shutdown complete
We now experience a complete cluster outage. And then the next operational error happened: the customer chose the node (NodeC) with the worst (= oldest) data as the starting node for the new cluster:
12:31:55 NodeC: Starting mysqld daemon
12:31:58 NodeC: PRIMARY, 1/1
12:31:58 NodeC: /usr/sbin/mysqld: ready for connections.
12:33:29 NodeB: mysqld_safe Starting mysqld daemon
12:33:33 NodeB: PRIMARY, 1/2
An alternative approach would have been to run the command SET GLOBAL wsrep_provider_options='pc.bootstrap=yes'; on the node (NodeA) with the most recent data...
After connecting, NodeB (with the newer state) requested a state transfer from the older NodeC:
And now Mister Murphy acted a second time: we hit another situation. The newer node requests an IST from the older node, which in the meantime has progressed to an even newer state. So the newer joiner node receives data from the older donor node, which causes an AUTO_INCREMENT PRIMARY KEY violation. As a consequence the node crashes:
12:33:36 NodeB: receiving IST failed, node restart required: Failed to apply app buffer: äJR#, seqno: 660974010, status: WSREP_FATAL
12:33:36 NodeB: Closed send monitor.
12:33:37 NodeB: Closing slave action queue.
12:33:37 NodeB: Aborted (core dumped)
12:33:37 NodeC: PRIMARY 1/1
12:33:44 NodeC: Shifting DONOR/DESYNCED -> JOINED (TO: 660983204)
12:33:59 NodeB: mysqld_safe mysqld restarted
12:34:04 NodeB: Shifting CLOSED -> OPEN
12:34:07 NodeB: Aborted (core dumped)
... Loop
This situation now keeps NodeB in a crashing loop: it is restarted by the mysqld_safe process and requests an IST again. This is another bug, which is fixed in a newer Galera MySQL (5.5.33). And now the next operational error happened: instead of killing NodeB and forcing an SST by deleting the grastate.dat file, they started the third node as well...
12:37:12 NodeA: mysqld_safe Starting mysqld daemon
... --> core dumped ... Loop
NodeB and NodeA both have the same problem now...
As a result, nodes NodeA and NodeB are now looping in a crash. But at least node NodeC was up and running all the time.
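Forcing the SST mentioned above would have looked roughly like this (a sketch only; the datadir /var/lib/mysql is an assumption based on a default RPM installation):
# Stop the crash-looping joiner first.
[root@NodeB ~]# /etc/init.d/mysql stop
# Deleting grastate.dat invalidates the node's local state, so on the next
# start it requests a full SST instead of the failing IST.
[root@NodeB ~]# rm /var/lib/mysql/grastate.dat
[root@NodeB ~]# /etc/init.d/mysql start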
Learnings
- Most important: have an ntpd service running on all cluster nodes so that the clocks on the different nodes do not diverge while you are investigating errors. This makes problem solving much easier...
- In case of split-brain or quorum loss choose the node with the most recent data as your initial Cluster node.
- If you have a cluster in split-brain you do not have to restart it. You can bring a node out of split-brain with the pc.bootstrap=yes provider option once you have found out which node has the most recent data.
- Analyse error log files carefully to understand what went wrong. Forcing an SST only takes a few seconds.
- Upgrade your software regularly so you do not hit old, known bugs. The rule "Do not touch a running system!" does not apply here, because we are already touching the running system! So a regular upgrade from time to time can be very helpful!
- Be familiar with the operational issues of your cluster software. A cluster does not only mean high availability; it also means you have to train your staff to handle it.
- It is always valuable to have support for your business-critical systems.
MySQL Environment MyEnv 1.0 has been released
FromDual has the pleasure to announce the release of the new version 1.0 of its popular multi-instance MySQL Environment MyEnv.
Thanks to the extensive feedback from many big customers and the community, we finally feel ready to publish version 1.0: a major step forward from the last version 0.4 (dated March 2011) and the unpublished version 0.5 (dated August 2013).
What is MyEnv?
MyEnv is a CLI environment for operating multiple MySQL instances on the same server. If you are using mysqld_multi and/or have set up multiple mysqld instances on the same machine, you should really have a look at MyEnv.
If you are considering consolidating your MySQL databases on one machine but do not want to pack everything into one instance, you are a candidate for MyEnv as well...
You can download MyEnv from here.
In the inconceivable case that you find a bug in MyEnv please report it in our Bugtracker.
Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.
Upgrade from 0.x to 1.0
The package php-posix needs to be installed.
If the following manual steps are too complicated, just run the installer installMyEnv.sh and copy your old myenv.conf to /etc/myenv.
# cd /home/mysql/product
# tar xf /tmp/myenv-1.0.tar.gz
# sudo mkdir /etc/myenv
# sudo chown mysql: /etc/myenv
# echo 'export MYENV_BASE=/home/mysql/product/myenv' > /etc/myenv/MYENV_BASE
# cp /home/mysql/product/myenv/etc/myenv.conf /etc/myenv/
# rm -f myenv
# ln -s myenv-1.y myenv
# cp myenv/etc/aliases.conf.template /etc/myenv/aliases.conf
# cp myenv/etc/variables.conf.template /etc/myenv/variables.conf
# cat > ~/.bash_profile << '_EOF'
# BEGIN MyEnv
# Written by the MyEnv installMyEnv.php script.
. /etc/myenv/MYENV_BASE
MYENV_PWD=`pwd`
cd $MYENV_BASE/bin
. myenv.profile
cd $MYENV_PWD
# END MyEnv
_EOF
Changes in MyEnv 1.0
MyEnv
- cd abc xyz alias failed when one of the patterns contained a space. This is fixed now (Bug #10).
- Different smaller bug fixes.
- timezone added to MyEnv to avoid nasty error messages.
- Old paths in the PATH variable are cleaned up properly now.
- Incompatible Change: DEBUG variables replaced by MYENV_DEBUG.
- The MyEnv start/stop script (myenv_start_stop.php) now accepts an instance name as a parameter (customer enhancement request).
- hideschema parameter implemented for myenv.conf (customer enhancement request).
- Default section is now configurable in section [default] (customer enhancement request).
- Better support for Percona Server (customer enhancement request).
- Readline support added for set-ups not having it by default (customer enhancement request).
- cdb alias added for cd $basedir or cd $MYSQL_HOME.
- Security check for my.cnf introduced.
- Password was shown in clear text in error log (Bug #12).
- tmp directory is added to MyEnv structure.
- unknown version fixed.
- Made hideschema variable inheritable.
- Parameter drizzle and type removed. De-support for Drizzle.
- The utl directory should be added to the PATH as well.
- Adding a path to the right of PATH is fixed now.
- mysqld can now be started without the angel process (mysqld_safe).
- /tmp/myEnv.xxx files were not removed. This is fixed now.
- 5.6.12 as version pattern is now accepted (customer request).
- Incompatible Change: Configuration file (myenv.conf) is now relocated from $MYENV_HOME/etc to /etc/myenv.
- MySQL stop functionality was changed from mysqladmin shutdown to kill (MySQL like).
- my.cnf privilege check is less restrictive now (rwxr-----).
MyEnv Installer
- Many bug fixes and improvements for MyEnv Installer.
- Incompatible Change: MyEnv Installer rewritten, renamed (bin/installMyEnv.php) and code cleaned-up.
- MySQL database can now be created in installer.
- Check for mysql user is added to installer.
- Missing mysql_install_db is caught.
- Missing socket error is caught.
- Only a stopped database can be deleted.
- Configuration files are versioned when changed.
- Delete process more user-friendly.
- Missing basedir and datadir variables in my.cnf fixed.
- Privileges of my.cnf file set correctly to avoid warning later.
- Non-existent my.cnf file caught.
- Selection characters made more intuitive.
- Empty my.cnf string caught.
- Installer is aware of missing environment variables.
- Start/stop instance parameter was added in installer.
MyEnv Utilities
- Row Extractor (utl/row_extractor.pl) to extract rows from corrupt tables (leading to a DB crash) added.
- Script to flush Query Cache regularly (utl/flush_query_cache.sh) added.
- Script to find last access to tables (utl/filesystem_table.php) added.
- MySQL Backup Manager (mysql_bman) added.
- Script to do a NDB Cluster channel fail-over (utl/channel_failover.pl) added.
- README about MyEnv utilities usage updated (utl/README).
- Compare script to compare two GLOBAL STATUS outputs added (utl/compare.php).
- Rotate log script to rotate different MySQL logs added (utl/rotate_log.sh).
- Script to convert \G output of MySQL client to one-line output added (utl/backslashG2table.pl).
- alter_engine.sh script complemented by a Perl version (for a Windows customer) (utl/alter_engine.pl).
- Different resources, configuration files and monitors added for Heartbeat v1/v2 (utl/{stop-heartbeat.alert|ping.monitor|node.monitor|mysql.monitor|mail.alert|Crontab|canias}, etc/{drbd.conf.template|ha.cf.template|mon.cf.template|my.cnf.template}).
- Slave Monitor added (utl/{slave_monitoring.php|slave_monitor.php}).
- DRBD Monitor added (utl/check_drbd.sh).
- MySQL Profiler added (utl/profiler/{profiler.pl|tracer.pl}).
- split_partition.php and drop_partition.php scripts added (utl/{split_partition.php|drop_partition.php}).
- Insert test added (utl/{insert_test.sh|insert_test.phpx}). Shell script is for connection testing. PHP script is more for INSERT load testing...
- Utility mem_map.pl added (utl/mem_map.pl).
- Start/stop script for VIP added (utl/vip).
- Script to fix wrong encoding added (utl/fix_encoding.php).
- Alter Engine script enhanced with MySQL 5.6 and Galera features.
- Ping Log utility added (utl/ping_log.sh).
- Script added to block MySQL port so Load Balancer recognizes if Galera Cluster node is away (utl/block_galera_node.sh, customer request).
- Create table statement for test.test table for insert_test.* added as comment to the script.
MySQL Backup Manager (mysql_bman)
- Several bugs fixed and improvements.
- Schema dump implemented now (customer enhancement request).
- Backup is now possible with MySQL 5.5 as well (Bug #31).
- All parameters can now be put in the configuration file.
- Configuration file is now more dynamic.
- Archive job fixed.
- Configuration backup fixed.
- Xtrabackup wrapper added.
- Empty schema option defaults now to all databases.
- Per schema backup should not contain drop/create database.
Have fun,
The FromDual Consulting Team
MySQL community is invited to join FromDual's company meeting
FromDual holds its annual company meeting this year in Leoforos Vravronos, Greece, about 20 km outside of Athens.
We are happy to invite everybody interested in MySQL technologies (MySQL, Percona Cluster, MariaDB, Galera Cluster, etc.) to join us on Wednesday evening, October 9th, at the Mare Nostrum Hotel in Leoforos Vravronos to exchange ideas about MySQL.
Meeting at 17:00 in the hotel lobby.
- Presentation about Xtrabackup (25') and 5' Questions and Answers (Abdel-Mawla Gharieb, Support Engineer at FromDual)
- Presentation of a MySQL community member (25') and 5' Questions and Answers
- Break 15 min
- Presentation about Galera Cluster (25') and 5' Questions and Answers (Oli Sennhauser, CTO of FromDual)
- Presentation of a MySQL community member or discussion and questions about MySQL (25') and 5' Questions and Answers
- Dinner at the Mare Nostrum Hotel Restaurant
Please feel free to send us a suggestion for your own presentation. Any technical or non-technical MySQL-related topic is welcome: for example, how you use MySQL in your company, special problems you have faced and solved (or not solved yet), research work you have done on MySQL products, business cases you solve with MySQL products, or evaluations and experience you have made. Proposals can be sent to contact@fromdual.com.
Please also let us know at contact@fromdual.com if you plan to participate, so that we can arrange and organize all the infrastructure with the hotel.
The event is free of costs for all participants.
Best Regards,
Your FromDual Team