MySQL Tech-Feed (en)

We need you: MySQL DBA for FromDual Support line

Shinguz - Tue, 2013-04-02 11:17

FromDual is looking for professional, enthusiastic and experienced people who:

  • Know MySQL, Percona Server or MariaDB extensively
  • Are familiar with the open source eco-system
  • Know how to operate database systems, as a DBA or a DevOps
  • Understand what can go wrong in operating a database
  • Are happy to work autonomously and remotely, and to communicate via IRC, Skype, mail and phone
  • Are comfortable on Linux systems
  • Are team players, keen to contribute to the growth of the company
  • Are comfortable dealing directly with clients, and
  • Look for new challenges

Job description

We are looking for full-time MySQL support engineers (female or male) to primarily take care of our MySQL support services and help our customers operate their MySQL databases (remote-DBA and emergency interventions).

You are well trained and have good experience in:

  • Operating critical, highly available MySQL production databases, mostly on Linux.
  • Running MySQL replication in all its variants is your daily business.
  • The workings of the most commonly used MySQL HA set-ups and how to fix them efficiently when problems occur. (If you are already experienced in running Galera Cluster, this is a plus!)
  • Open Source technologies (LAMP stack, etc.)
  • Bash scripting, and you can write some simple programs in at least one popular programming/scripting language (Perl, PHP, ...).

You will be in direct contact with the customers and you need good antennae to listen to them, know how to respond and get to their real problems. You also have to be proactive when something goes wrong and steer the customer back onto the right track.

You need to have good communication skills and be an active team player.

To fulfil your job you have to work in European time zones. You can organize your working time flexibly within certain ranges. Participation in the on-call duty is expected. FromDual is a completely virtual company and no relocation is needed (home office). Good spoken and written English is a must. Most of our current customers speak German, so German language skills are a plus.

Besides working as our support engineer, we expect you to improve your knowledge and skills and to contribute to improving our monitoring solution, our database controlling solution and our other tools. Further, we expect you to write regular technical articles and to help wherever it is needed or requested...

You should be prepared to work, think and act autonomously most of the time and to teach yourself (using Google, the MySQL documentation, testing etc.). If you ever get stuck, your colleagues at FromDual will assist you.

If you need somebody holding your hand all the time, FromDual is not a good choice for you.

Who is FromDual?

FromDual is the leading independent and professional MySQL database consulting and services company in Europe, with its headquarters in Switzerland.

Our customers are mostly located in Europe and range from small start-up companies to some of the top-500 companies of Europe.

You will be joining us at an exciting time. We are growing and we need like-minded people to grow with us, individually and collectively. As our horizons expand, we need our team to expand in its skills, knowledge and expertise.

Applying to join FromDual could be the best decision you make.

How to continue

If you are interested in this opportunity and if you feel you are a good "fit" (we know that there will not be a 100% match!) we would be glad to hear from you.

Please send your true CV with your salary expectation and a list of your open source involvements, blog articles, slides, tweets etc. to jobs@fromdual.com. If you want to know more about this job opportunity or if you want to speak with me, please call me at +41 79 830 09 33 (Oli Sennhauser, CTO). Only candidates, NO head hunters please!

After we have received and screened your CV we will invite you to prove your technical skills by taking an exam in operating MySQL. If you pass the exam you will be invited to the final interviews.

This job opportunity is open until May 31st 2013.

Switching from MySQL/MyISAM to Galera Cluster

Shinguz - Tue, 2013-03-12 08:23
Taxonomy upgrade extras: galera, myisam, innodb, storage engine

Switching from MySQL/MyISAM to Galera Cluster requires that all tables (except those in the mysql, information_schema and performance_schema schemata) use the InnoDB Storage Engine.
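To get a first overview of the affected tables, a quick check against the information_schema helps (a sketch; extend the schema filter to your needs):

SELECT table_schema, table_name, engine
  FROM information_schema.tables
 WHERE engine <> 'InnoDB'
   AND table_type = 'BASE TABLE'
   AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');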

For altering the Storage Engine of the tables we wrote a script (alter_engine.pl) a long time ago. Because we have made many of those switches recently, we have extended its functionality.

New features
  • Recognizes VIEWs and does NOT try to alter their Storage Engine (bug fix).
  • The script is MySQL version aware and complains if a too old MySQL version is used.
  • Finds tables without a Primary Key.
  • Checks for too long InnoDB Primary Keys.
  • Checks for FULLTEXT indexes in MySQL 5.1 and 5.5 and writes a note, because InnoDB supports FULLTEXT indexes only in MySQL 5.6 and newer.
Example:

./alter_engine.pl

User [root]                 :
Password []                 : secret
Schema from (or all) [test] : all
Engine to [InnoDB]          :

Version is   : 5.6.10
MR Version is: 050610

The following tables might not have a Primary Key:
+--------------+----------------------+
| table_schema | table_name           |
+--------------+----------------------+
| test         | innodb_table_monitor |
| test         | log_event            |
| test         | parent               |
| test         | t                    |
+--------------+----------------------+
The tables above not having a Primary Key will negatively affect performance
and data consistency in MySQL Master/Slave replication and Galera Cluster
replication.

The following tables might have a too long Primary Key for InnoDB (> 767 bytes):
+--------------+------------+-------------+
| table_schema | table_name | column_name |
+--------------+------------+-------------+
| test         | test       | data        |
+--------------+------------+-------------+

The following tables might have a FULLTEXT index (which is only supported in
MySQL 5.6 and newer):
+--------------+------------+-------------+
| table_schema | table_name | column_name |
+--------------+------------+-------------+
| test         | test       | data        |
+--------------+------------+-------------+

Output written to /tmp/alter_table_all.sql. After reviewing it you can apply it with:

mysql --user=root --password=secret

Block MySQL traffic for maintenance windows

Shinguz - Mon, 2013-02-18 22:32

From time to time some maintenance work on the MySQL database has to be done. During the maintenance window we do not want to have application traffic on the database.

Sometimes it is hard to shut down all applications spread over the whole company. Or we want to allow only some specific hosts to access MySQL remotely (for example the monitoring system or the backup server).

For this purpose we can use Linux packet filtering (iptables).

To see what packet filtering rules are available we can run the following command:

iptables -L INPUT -v

To close the MySQL port on all interfaces we use:

iptables -A INPUT -p tcp --dport mysql -j DROP

and to open the MySQL port again after the maintenance window:

iptables -D INPUT -p tcp --dport mysql -j DROP

With the -i option we can restrict the rule to a specific interface (for example eth0), and with the -s option we can restrict it to a specific source. With ! -s we can implement an inverse rule (all but the given source).
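Combining these options we can, for example, block everybody except the monitoring server during the maintenance window (a sketch; 192.168.0.99 is a placeholder for your monitoring host):

# block MySQL for everybody but the monitoring host
iptables -A INPUT -p tcp --dport mysql ! -s 192.168.0.99 -j DROP

# and open it again after the maintenance window
iptables -D INPUT -p tcp --dport mysql ! -s 192.168.0.99 -j DROP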

Bootstrapping Galera Cluster the new way

Shinguz - Tue, 2013-02-05 19:18
Taxonomy upgrade extras: galeracluster

A while ago it was pretty inconvenient to start a complete Galera Cluster from scratch. Rolling restarts and such things were already working well, but bootstrapping was a pain.

With Galera v2.2 new functionality came in. We tried it out and it did not work as documented. :-( Thanks to Teemu's help we found that there was a bug in the Galera documentation.

The settings which were working for us are:

wsrep_cluster_address = "gcomm://192.168.1.2,192.168.1.3?pc.wait_prim=no"

And when all 3 nodes of the Galera Cluster are started and ready to join you can run:

SET GLOBAL wsrep_provider_options="pc.bootstrap=1";

I hope we can go live on Thursday with the new Telco VoIP Cluster for 2500 employees...

Have fun and enjoy an even better Galera Cluster for MySQL!

Privileges of MySQL backup user for mysqldump

Shinguz - Tue, 2012-12-11 21:34
Taxonomy upgrade extras: Backup Restore Recovery, Backup

Some MySQL customers do not want to use the root user for mysqldump backups. For such a backup user you have to grant the following minimal MySQL privileges:

For mysqldump --single-transaction (InnoDB):

CREATE USER 'backup'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT, SHOW VIEW, RELOAD, REPLICATION CLIENT, EVENT, TRIGGER ON *.* TO 'backup'@'localhost';

For mysqldump --lock-all-tables (MyISAM), additionally:

GRANT LOCK TABLES ON *.* TO 'backup'@'localhost';
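A backup run with this user then looks, for example, like this (a sketch for the InnoDB case; choose your own dump location):

mysqldump --user=backup --password=secret --single-transaction --all-databases > /tmp/full_dump.sql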

If we missed a privilege please let us know.

Shrinking InnoDB system tablespace file ibdata1 PoC

Shinguz - Sat, 2012-12-08 13:43
Taxonomy upgrade extras: innodb

In this week's MySQL workshop we were discussing, among other things, the innodb_file_per_table parameter and the advantages of enabling it. In addition there was a discussion about whether the InnoDB system tablespace file can be shrunk once it has grown very large. We all know the answer: The InnoDB system tablespace file never shrinks again.

But why should it not be possible? Other databases, for example Oracle, can shrink or even get rid of tablespace files... After some philosophising about it we came to the conclusion that we should try whether this is possible with InnoDB as well.

The scenario we considered was the following: You inherit a MySQL database with InnoDB tables but innodb_file_per_table was set to 0. So all the tables are located in the InnoDB tablespace file. And only a small amount of space is left on the device and there is a lot of free space in the InnoDB system tablespace file. The database itself is much too big to dump and restore and we want to get rid of the one big InnoDB system tablespace file and have many small tablespace files as we get them with innodb_file_per_table = 1.

So what we did is the following: We created InnoDB tables inside the InnoDB system tablespace (ibdata1) and bloat them up. Then we altered them to be placed in their own tablespace files by OPTIMIZE TABLE. And now the tricky part starts: How can we shrink the InnoDB system tablespace file to free the disk space again?

CAUTION: This is a proof of concept and should never be used on a production system!!!

First we move all tables out of the InnoDB system tablespace (with innodb_file_per_table = 1):

mysqlcheck --optimize --all-databases --user=root
...
note     : Table does not support optimize, doing recreate + analyze instead
status   : OK
...

Now all tables have been moved out of the system tablespace, but the file is still about 674 Mbyte in size:

ll ibdata1
-rw-rw----. 1 mysql mysql 706740224 Dec  6 23:37 ibdata1

Then we search for empty blocks at the end of the InnoDB data files:

innochecksum -v -d ibdata1
file ibdata1 = 706740224 bytes (43136 pages)...
checking pages in range 0 to 43135
page 0: log sequence number: first = 3558400819; second = 3558400819
page 0: old style: calculated = 148443420; recorded = 148443420
page 0: new style: calculated = 4252778336; recorded = 4252778336
...
page 42508: log sequence number: first = 0; second = 0
page 42508: old style: calculated = 1371122432; recorded = 0
page 42508: new style: calculated = 1575996416; recorded = 0
...
page 43135: log sequence number: first = 0; second = 0
page 43135: old style: calculated = 1371122432; recorded = 0
page 43135: new style: calculated = 1575996416; recorded = 0

In the ideal case we would also find blocks which are not used any more but not blanked out. These 627 blocks (of 16k each, about 10 Mbyte in total) can easily be removed...

Next we shrink the InnoDB system tablespace file after stopping the mysqld:

printf '' | dd of=ibdata1 bs=16384 seek=42508

ll ibdata1
-rw-rw----. 1 mysql mysql 696451072 Dec  6 23:42 ibdata1

As a next step we have to change the number of blocks in the header of the InnoDB system tablespace file. This can be done with a tool like hexedit (aptitude install hexedit). At position 0x0030 we have to change the value from 43136 (0xA880) to 42508 (0xA60C):

hexdump -C -n 256 ibdata1
00000000  fd 7c 3f 60 00 00 00 00  00 00 00 00 00 00 00 00  |.|?`............|
00000010  00 00 00 00 d4 18 e3 33  00 08 00 00 00 00 d4 18  |.......3........|
00000020  e4 13 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000030  a8 80 00 00 a6 c0 00 00  00 00 00 00 01 21 00 00  |.............!..|

Otherwise we would get an error like:

InnoDB: Error: tablespace size stored in header is 43146 pages, but
InnoDB: the sum of data file sizes is only 42508 pages

It looks like InnoDB itself later somehow rounds the block number to a 0x100 boundary (4 Mbyte).
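If you prefer scripting over interactive hexedit, the same header patch can be sketched with dd (an assumption-laden sketch: the two bytes shown above really sit at offset 0x30 = 48, and the checksums still have to be fixed afterwards as described below):

# write 0xA60C (42508) over 0xA880 (43136) at offset 48, without truncating the file
printf '\xa6\x0c' | dd of=ibdata1 bs=1 seek=48 count=2 conv=notrunc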

As the next step we have to fix the new style checksum (at position 0x0000) and the old style checksum (at position 0x3FFC). You have to do this until innochecksum does not complain any more:

innochecksum -d -p 0 ibdata1
file ibdata1 = 696451072 bytes (42508 pages)...
checking pages in range 0 to 0
page 0: log sequence number: first = 3558400819; second = 3558400819
page 0: old style: calculated = 2354503790; recorded = 2354503790
page 0: new style: calculated = 3427457314; recorded = 3587772574

When you have done this the database should be ready to start.

Later on, the tables can possibly be transferred with the transportable tablespace feature which comes with MySQL 5.6.

I have not found a good way yet to find the highest used block in the tablespace file. So it is a wild guess, which is dangerous. Especially because some InnoDB UNDO LOG blocks seem to be located at very high positions:

SELECT page_type, MAX(page_number) AS max_page_number
  FROM information_schema.innodb_buffer_page
 WHERE space = 0
   AND page_number != 0
 GROUP BY page_type
 ORDER BY max_page_number;
+-------------------+-----------------+
| page_type         | max_page_number |
+-------------------+-----------------+
| TRX_SYSTEM        |               5 |
| SYSTEM            |             300 |
| BLOB              |            9366 |
| EXTENT_DESCRIPTOR |           32768 |
| IBUF_BITMAP       |           32769 |
| INODE             |           42123 |
| INDEX             |           45229 |
| ALLOCATED         |           45247 |
| UNDO_LOG          |           45503 |
+-------------------+-----------------+

It would be good if we have a method to relocate those blocks somehow...

To verify that everything works I have tried to increase the system tablespace again. This seems to work if the number of blocks is divisible by 256 (4 Mbyte; or 128 = 2 Mbyte?). But growing the system tablespace again should not be the intention.

Furthermore, according to our tests this method of shrinking the InnoDB system tablespace seems to work with MySQL 5.1, 5.5 and 5.6.

Thanks to Ralf, Torsten and Stefan for assistance!

It would be nice to get some feedback from the InnoDB and Percona guys about how this feature could be implemented correctly...

And finally: Do not blame and beat me. I know that this is an evil hack, but I like to play in my sandbox as I want!

Last login of MySQL database users

Shinguz - Sat, 2012-12-01 10:05
Taxonomy upgrade extras: user, login trigger, logon trigger, logging, audit, hoster, hosting

MySQL hosting providers can easily lose the overview of their customers: which user or schema is still in use and which is not.

The MySQL database becomes bigger and bigger, uses more and more RAM and disk space and the backup takes longer and longer.

In this situation it would be nice to know which MySQL database users have logged in within, for example, the last 6 months. MySQL database users who did not log in within a defined period can be backed up and removed from the production MySQL database.

The following MySQL login trigger helps to track the login of all non-super privileged MySQL users.

First we need a table where to log the login of the users:

-- DROP DATABASE tracking;
CREATE DATABASE tracking;

use tracking;

-- DROP TABLE IF EXISTS login_tracking;
CREATE TABLE login_tracking (
  user VARCHAR(16)
, host VARCHAR(60)
, ts   TIMESTAMP
, PRIMARY KEY (user, host)
) engine = MyISAM;

Then we need a MySQL stored procedure which does the logging of the login:

-- DROP PROCEDURE IF EXISTS login_trigger;
DELIMITER //

CREATE PROCEDURE login_trigger()
SQL SECURITY DEFINER
BEGIN
  INSERT INTO login_tracking (user, host, ts)
  VALUES (SUBSTR(USER(), 1, INSTR(USER(), '@')-1)
        , SUBSTR(USER(), INSTR(USER(), '@')+1), NOW())
      ON DUPLICATE KEY UPDATE ts = NOW();
END;
//
DELIMITER ;

Then we have to grant the EXECUTE privilege to all users of the database which do not have the SUPER privilege. MySQL users with the SUPER privilege are not logged by the init_connect login trigger hook:

-- REVOKE EXECUTE ON PROCEDURE tracking.login_trigger FROM 'oli'@'%';
GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO 'oli'@'%';

Those GRANTs can be generated with the following query:

tee /tmp/grants.sql

SELECT CONCAT("GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO '", user, "'@'", host, "';") AS query
  FROM mysql.user
 WHERE Super_priv = 'N';

notee

+---------------------------------------------------------------------------------+
| query                                                                           |
+---------------------------------------------------------------------------------+
| GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO 'oli'@'localhost';         |
| GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO 'replication'@'127.0.0.1'; |
| GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO 'oli'@'%';                 |
| GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO ''@'localhost';            |
+---------------------------------------------------------------------------------+

As the last step we have to activate the stored procedure by hooking it into the login trigger hook:

-- SET GLOBAL init_connect="";
SET GLOBAL init_connect="CALL tracking.login_trigger()";

If something goes wrong with the login trigger, you will find the needed information in the MySQL error log.

Reporting

To find out which users have logged in we can run the following query:

SELECT * FROM tracking.login_tracking;
+------+-----------+---------------------+
| user | host      | ts                  |
+------+-----------+---------------------+
| oli  | localhost | 2012-11-30 15:36:39 |
+------+-----------+---------------------+

To find out when a user logged in last you can run:

SELECT u.user, u.host, l.ts
  FROM mysql.user AS u
  LEFT JOIN tracking.login_tracking AS l ON u.user = l.user AND l.host = u.host
 WHERE u.Super_priv = 'N';
+-------------+-----------+---------------------+
| user        | host      | ts                  |
+-------------+-----------+---------------------+
| oli         | localhost | 2012-12-01 09:55:33 |
| replication | 127.0.0.1 | NULL                |
| crm         | 127.0.0.1 | NULL                |
+-------------+-----------+---------------------+
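Based on this, the users who have not logged in within the last 6 months mentioned above can be listed like this (a sketch):

SELECT u.user, u.host, l.ts
  FROM mysql.user AS u
  LEFT JOIN tracking.login_tracking AS l ON u.user = l.user AND l.host = u.host
 WHERE u.Super_priv = 'N'
   AND (l.ts IS NULL OR l.ts < NOW() - INTERVAL 6 MONTH);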

And to find users which were logged but can no longer be found in the mysql.user table you can run:

SELECT l.user, l.host
  FROM tracking.login_tracking AS l
  LEFT JOIN mysql.user AS u ON u.user = l.user AND l.host = u.host
 WHERE u.user IS NULL;

MySQL backup to file, gzip and load in one step

Shinguz - Fri, 2012-11-30 00:26
Taxonomy upgrade extras: Backup, compress

When a MySQL Slave is set up with mysqldump you have 2 possibilities:

  • You dump into a file and then load the data into the Slave with the mysql client utility.
  • You dump directly into the mysql client utility.

The first possibility has the advantage that you can restart the load if it failed. You can also look into the file (and make some changes if needed).
The second possibility has the advantage that you do not need disk space and that it is possibly faster. But when the load fails you have to start from the very beginning.

What I was looking for is a way to combine everything in one step: dumping to a file including compression and, in the same step, loading the database into a slave. This is what I found to solve these requirements:

mysqldump --user=root --all-databases --flush-privileges --single-transaction \
  --master-data=1 --quick --flush-logs --triggers --routines --events \
| tee >(gzip > /tmp/full_backup.sql.gz) \
| mysql --user=root --host=192.168.1.60 --port 3306

With this command you can even load several CPUs of the system:

  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
24747 mysql  20   0  534m  56m 5504 S 36.1  0.7 4:12.35  mysqld
 4967 mysql  20   0  402m  33m 5236 S  7.0  0.4 0:02.06  mysqld
 4982 mysql  20   0 23348 2112 1216 S  6.6  0.0 0:01.64  mysqldump
 4984 mysql  20   0 28608 3856 1372 S  5.6  0.0 0:01.58  mysql
 4986 mysql  20   0  4296  688  304 S  5.3  0.0 0:02.10  gzip
 4983 mysql  20   0 98.5m  628  544 S  0.7  0.0 0:00.13  tee

If gzip becomes the bottleneck you can try with pigz.
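The pipeline stays the same, only the compressor is swapped (a sketch, assuming pigz is installed):

mysqldump --user=root --all-databases --flush-privileges --single-transaction \
  --master-data=1 --quick --flush-logs --triggers --routines --events \
| tee >(pigz > /tmp/full_backup.sql.gz) \
| mysql --user=root --host=192.168.1.60 --port 3306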

Resize XFS file system for MySQL

Shinguz - Sat, 2012-11-17 14:30

Important: Before you start any operation mentioned below, do a proper file system backup of the XFS file system you want to resize. If MySQL is running on this mount point, do this with mysqld stopped. Alternatively you can also use mysqldump to do the MySQL backup, but test the restore time before continuing, to avoid ugly surprises...

All these operations have to be performed as the root user. First we want to see what mount points are available:

shell> df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       485M   77M  383M  17% /
/dev/sdb1       496M  314M  157M  67% /var/lib/mysql

Our MySQL data are located on /dev/sdb1.

After the file system backup unmount /dev/sdb1 and resize the disk, partition or volume (works for VMware, NetApp filer and similar equipment, for LVM use lvextend):

shell> tar cvf /backup/mysql.tar /var/lib/mysql
shell> umount /var/lib/mysql
shell> fdisk /dev/sdb

Change the units in fdisk to get a better overview of the beginning and end of your partition:

fdisk> u
Changing display/entry units to sectors

fdisk> p
Disk /dev/sdb: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xea17dfd0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               8     1047503      523748   83  Linux

fdisk> d
Selected partition 1

fdisk> n
Command action
   e   extended
   p   primary partition (1-4)
fdisk> p
Partition number (1-4):
fdisk> 1
First sector (8-2097151, default 8):
Using default value 8
Last sector, +sectors or +size{K,M,G} (8-2097151, default 2097151):
Using default value 2097151

fdisk> w

shell> fdisk /dev/sdb

fdisk> p
Disk /dev/sdb: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xea17dfd0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               8     2097151     1048572   83  Linux

Now the partition has the new size. The next step is to resize the XFS file system. Install the XFS tools if they are not already there:

shell> apt-get install xfsprogs
shell> yum install xfsprogs

And then mount the file system again and extend the XFS file system on-line (xfs_growfs works on a mounted file system):

shell> mount /var/lib/mysql
shell> xfs_growfs /var/lib/mysql
shell> df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       485M   77M  383M  17% /
/dev/sdb1       992M  314M  627M  34% /var/lib/mysql
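If the file system sits on LVM instead of a plain partition, the partitioning steps above are replaced by lvextend (a sketch; volume group and logical volume names are placeholders):

shell> lvextend -L +512M /dev/vg01/mysql
shell> mount /var/lib/mysql
shell> xfs_growfs /var/lib/mysql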

MySQL tmpdir on RAM-disk

Shinguz - Thu, 2012-11-15 19:15
Taxonomy upgrade extras: temporary, memory table, myisam

MySQL temporary tables are created either in memory (as MEMORY tables) or on disk (as MyISAM tables). How many tables went to disk and how many stayed in memory can be found with:

mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Created_tmp_disk_tables | 49094    |
| Created_tmp_tables      | 37842181 |
+-------------------------+----------+

Tables created in memory are typically faster than tables created on disk. Thus we want as many tables as possible to be created in memory.

To achieve this we can configure the variables accordingly:

mysql> SHOW GLOBAL VARIABLES LIKE '%table_size';
+---------------------+----------+
| Variable_name       | Value    |
+---------------------+----------+
| max_heap_table_size | 25165824 |
| tmp_table_size      | 25165824 |
+---------------------+----------+

All result sets which are smaller than these values can be handled as MEMORY tables. All result sets bigger than these values are handled as MyISAM tables and go to disk.

But there is still another reason for tables going to disk: MEMORY tables cannot handle TEXT or BLOB attributes, which often occur in CMS like Typo3. In these cases MySQL has to create MyISAM tables on disk directly, and they are counted as Created_tmp_disk_tables.

If these temporary disk tables cause serious I/O performance problems, one could consider using a RAM-disk instead of normal physical disks.

On Linux we have 2 possibilities to create a RAM-disk: ramfs and tmpfs [ 1 ].

We recommend to use tmpfs.

A RAM-disk can be created as follows:

shell> mkdir -p /mnt/ramdisk
shell> chown mysql:mysql /mnt/ramdisk
shell> mount -t tmpfs -o size=512M tmpfs /mnt/ramdisk

To make this persistent we have to add it to the fstab:

#
# /etc/fstab
#
tmpfs  /mnt/ramdisk  tmpfs  rw,mode=1777,size=512M  0  0

MySQL still writes to the default location, which can be found as follows:

mysql> SHOW GLOBAL VARIABLES LIKE 'tmpdir';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| tmpdir        | /tmp  |
+---------------+-------+

To change this value you have to configure your my.cnf accordingly and restart the database...
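For example (a sketch, using the RAM-disk created above):

#
# /etc/my.cnf
#
[mysqld]
tmpdir = /mnt/ramdisk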

Galera Cluster Nagios Plugin

Shinguz - Fri, 2012-09-14 10:49
Taxonomy upgrade extras: monitoring, galera, plugin, cluster

Based on customer feedback we have decided to add a plugin for Galera Cluster for MySQL to our MySQL Nagios/Icinga plugins.

The module checks whether the node is in status Primary and whether the expected number of Galera Cluster nodes is available. If not, a warning or an alarm is raised.

The script is written in Perl and is Nagios Plugin API v3.0 compatible.

You can download it from our download page.

If you have suggestions for improvements, please contact us. Bugs can be reported at our bugs database.

The following modules are contained in the package:

  • check_db_mysql.pl
  • check_errorlog_mysql.pl
  • check_galera_nodes.pl
  • check_repl_mysql_cnt_slave_hosts.pl
  • check_repl_mysql_hearbeat.pl
  • check_repl_mysql_io_thread.pl
  • check_repl_mysql_read_exec_pos.pl
  • check_repl_mysql_readonly.pl
  • check_repl_mysql_seconds_behind_master.pl
  • check_repl_mysql_sql_thread.pl
  • perf_mysql.pl

Galera Cluster discussions at FrOSCon 2012

Shinguz - Mon, 2012-08-27 17:18
Taxonomy upgrade extras: galera, cluster, MySQL Cluster, channel, fail-over, replication

During and after Henrik's great talk about Galera Cluster at FrOSCon 2012 in St. Augustin we found 2 important things related to Galera Cluster for MySQL:

  • The InnoDB double write buffer (innodb_doublewrite) should not be disabled anymore for Galera when using v2.0 and higher!!! The reason for this is: When MySQL crashes, InnoDB pages might get corrupted during the crash. They would be fixed by the blocks from the double write buffer during auto-recovery. But if the double write buffer is disabled, those blocks are not available. With Galera v1.x that was not a problem, because after a crash an SST would happen and the corrupted InnoDB blocks would be corrected. But now with IST in Galera v2.0, MySQL will start without noticing the corruption (as usual) and only an IST is performed. This leads to a running MySQL database with possibly corrupted InnoDB blocks. And this might cause you trouble later, for example if this node is used as a donor: then the corrupted page is propagated to other nodes (using rsync or Xtrabackup?). And in some bad cases the whole Cluster could crash at once when hitting the corrupted page. Thanks to Monty W. for bringing this up!
    Recommendation is: Do NOT disable the InnoDB double write buffer (innodb_doublewrite) with Galera Cluster >= v2.0 if you care about your data!
  • The second discussion was about the event sequence in the binary-log (for those who were present: the A-B vs B-A discussion). Codership confirmed that the binary-log sequence on 2 different Galera nodes of the same Galera Cluster should be the same (everything else is considered to be a bug). This leads to 2 different consequences:
    a) The binary-log of node B can be used for a PiTR of node A in case we need it. Finding the right position is a bit tricky and needs some manual work (finding the XID with the binlog position of node B, then finding the binlog position of node A with that XID; see the sketch after this list). But Codership told me they are planning a tool to automate this.
    b) The binary-log of node B can be used for a channel fail-over in case we have 2 different Galera Clusters in 2 different data centers connected to each other through MySQL asynchronous replication... For more on this topic see also MySQL Cluster and channel failover...
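For consequence a), locating the matching position can be sketched with stock tools; the binary log file names and the XID value below are placeholders:

mysqlbinlog node-b-bin.000042 | grep 'Xid ='         # list XIDs with their end_log_pos on node B
mysqlbinlog node-a-bin.000017 | grep 'Xid = 123456'  # find the same XID and thus the position on node A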

Deadlocks, indexing and Primary Keys

Shinguz - Thu, 2012-08-16 10:51
Taxonomy upgrade extras: index, primary key, innodb, tuning, performance, Optimizer

Recently a customer showed up with some deadlocks occurring frequently. They were of the following type (I have shortened the output a bit):

*** (1) TRANSACTION:
TRANSACTION 22723019234, fetching rows
mysql tables in use 1, locked 1
LOCK WAIT 7 lock struct(s), heap size 1216, 14 row lock(s)
update location set expires='2012-08-10 04:50:29'
 where username='12345678901'
   AND contact='sip:12345678901@192.168.0.191:5060'
   AND callid='945a20d4-8945dcb5-15511d3e@192.168.0.191'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 2203904 n bits 136 index `GEN_CLUST_INDEX` of table `location` trx id 22723019234 lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 22723019222, fetching rows, thread declared inside InnoDB 225
mysql tables in use 1, locked 1
192 lock struct(s), heap size 30704, 9483 row lock(s)
delete from location
 where expires<'2012-08-10 04:49:30'
   AND expires!='1969-12-31 19:00:00'
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 2203904 n bits 136 index `GEN_CLUST_INDEX` of table `location` trx id 22723019222 lock_mode X
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 2203951 n bits 136 index `GEN_CLUST_INDEX` of table `location` trx id 22723019222 lock_mode X waiting
*** WE ROLL BACK TRANSACTION (1)

They wanted us to fix this. And they did not like the answer that the application has to cope with deadlocks [ 1 ].

But one thing looks suspicious here: the GEN_CLUST_INDEX! This basically means there was NO explicit Primary Key on the InnoDB table and InnoDB created its own internal Primary Key.

After some discussion we started to examine the whole situation. For this we transformed the UPDATE and the DELETE statements into SELECTs:

UPDATE

EXPLAIN
SELECT *
  FROM location
 WHERE username='12345678901'
   AND contact='sip:12345678901@192.168.0.191:5060'
   AND callid='945a20d4-8945dcb5-15511d3e@192.168.0.191'
;
+----+-------------+----------+------+---------------+----------+---------+-------+------+------------------------------------+
| id | select_type | table    | type | possible_keys | key      | key_len | ref   | rows | Extra                              |
+----+-------------+----------+------+---------------+----------+---------+-------+------+------------------------------------+
|  1 | SIMPLE      | location | ref  | username      | username | 66      | const |    9 | Using index condition; Using where |
+----+-------------+----------+------+---------------+----------+---------+-------+------+------------------------------------+

The first strange thing is that the MySQL optimizer expects 9 rows (on a long key of 66 bytes) for what theoretically should be a Primary Key access! This sounds non-optimal. And the shorter and faster transactions are, the less probable deadlocks become.

So we tried to look at the transaction with SHOW ENGINE INNODB STATUS:

START TRANSACTION;

SELECT *
  FROM location
 WHERE username='12345678901'
   AND contact='sip:12345678901@192.168.0.191:5060'
   AND callid='945a20d4-8945dcb5-15511d3e@192.168.0.191'
   FOR UPDATE
;

---TRANSACTION 14033
4 lock struct(s), heap size 1248, 11 row lock(s)
MySQL thread id 5, OS thread handle 0x7f3647b9e700, query id 526 localhost root cleaning up

We can see that the same query uses 4 lock structs and locks in total 11 rows. This is similar to what we have seen in the deadlock.

DELETE

EXPLAIN
SELECT *
  FROM location
 WHERE expires < '2012-08-10 04:49:30'
   AND expires != '1969-12-31 19:00:00'
;
+----+-------------+----------+------+---------------+------+---------+------+-------+-------------+
| id | select_type | table    | type | possible_keys | key  | key_len | ref  | rows  | Extra       |
+----+-------------+----------+------+---------------+------+---------+------+-------+-------------+
|  1 | SIMPLE      | location | ALL  | NULL          | NULL | NULL    | NULL | 10754 | Using where |
+----+-------------+----------+------+---------------+------+---------+------+-------+-------------+

Uiii! The DELETE does not use an index at all but does a full table scan, which is not so optimal performance-wise...

START TRANSACTION;

SELECT *
  FROM location
 WHERE expires < '2012-08-10 04:49:30'
   AND expires != '1969-12-31 19:00:00'
   FOR UPDATE
;

---TRANSACTION 14034
168 lock struct(s), heap size 31160, 11007 row lock(s)
MySQL thread id 6, OS thread handle 0x7f3647b9e700, query id 663 localhost root cleaning up

And we can see a huge amount of locked rows... The table contains 10840 rows in total. These numbers differ a bit from the deadlock output, but that is OK because they do not represent the same point in time.

So we started looking at the table structure. The table which was provided by the customer looks as follows:

CREATE TABLE `location` (
  `username` varchar(64) NOT NULL DEFAULT '',
  `domain` varchar(128) NOT NULL DEFAULT '',
  `contact` varchar(255) NOT NULL DEFAULT '',
  `received` varchar(255) DEFAULT NULL,
  `path` varchar(255) DEFAULT NULL,
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',
  `q` float(10,2) NOT NULL DEFAULT '1.00',
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',
  `cseq` int(11) NOT NULL DEFAULT '42',
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `flags` int(11) NOT NULL DEFAULT '0',
  `cflags` int(11) NOT NULL DEFAULT '0',
  `user_agent` varchar(100) NOT NULL DEFAULT '',
  `socket` varchar(128) DEFAULT NULL,
  `methods` int(11) DEFAULT NULL,
  `id` int(10) NOT NULL DEFAULT '0',
  KEY `username` (`username`,`domain`,`contact`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

First I want to fix the problem of the full table scan:

ALTER TABLE location ADD INDEX (expires);

Then the DELETE looks much better:

+----+-------------+----------+-------+---------------+---------+---------+------+------+-----------------------+
| id | select_type | table    | type  | possible_keys | key     | key_len | ref  | rows | Extra                 |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-----------------------+
|  1 | SIMPLE      | location | range | expires       | expires | 5       | NULL |    2 | Using index condition |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-----------------------+

---TRANSACTION 14074
2 lock struct(s), heap size 1248, 1 row lock(s)
MySQL thread id 6, OS thread handle 0x7f3647b9e700, query id 671 localhost root cleaning up

But I do not know how realistic my test data are. This can change with another data-set!

Now I want to see if there is any difference with the KEY declared as a Primary Key:

ALTER TABLE location DROP INDEX username, ADD PRIMARY KEY (username, domain, contact);

Long indexes are bad for InnoDB. See a blog post on this which will hopefully appear soon!

+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table    | type | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
|  1 | SIMPLE      | location | ref  | PRIMARY       | PRIMARY | 66      | const |    9 | Using where |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+

---TRANSACTION 14145
3 lock struct(s), heap size 1248, 10 row lock(s)
MySQL thread id 6, OS thread handle 0x7f3647b9e700, query id 684 localhost root cleaning up

The execution plan looks the same. OK, 1 row less is locked. Does this mean a 10% lower probability of deadlocks?

The effect was not as big as expected, so we rolled back the last change (making at least a unique key out of it).

As already mentioned, short Primary Keys are good for InnoDB. And as we will show in a blog post soon, VARCHARs are bad performance-wise. So we try to use the (apparently unused) field id:

ALTER TABLE location
 DROP PRIMARY KEY,
  ADD UNIQUE KEY (username, domain, contact);

ALTER TABLE location
  MODIFY COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;

EXPLAIN
SELECT *
  FROM location
 WHERE id = 2984
;
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table    | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | location | const | PRIMARY       | PRIMARY | 4       | const |    1 | NULL  |
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------+

START TRANSACTION;

SELECT *
  FROM location
 WHERE id = 2984
   FOR UPDATE
;

---TRANSACTION 14336
2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 6, OS thread handle 0x7f3647b9e700, query id 701 localhost root cleaning up

And see there: the Query Execution Plan looks much better and the locks are much smaller.

As a result

I hope we can add here the results on the impact of deadlock occurrence soon.

Change MyISAM tables to InnoDB and handle SELECT COUNT(*) situation

Shinguz - Tue, 2012-06-12 20:48
Taxonomy upgrade extras: innodb, materialized views, materialised views, myisam, select, shadow table, count

It's a known problem that changing the Storage Engine from MyISAM to InnoDB can cause some problems [ 1 ] if you have queries of this type:

SELECT COUNT(*) from table;

Luckily this query happens rarely, and if it does, it can easily be omitted or worked around by guesstimating the number of rows in the table. For example with:

SHOW TABLE STATUS LIKE 'test';

But in some rare cases customers really need these values for some reason. To not exhaust the resources of the server with this query, which in some cases can be fired quite often, we make use of the materialized view/shadow table technique [ 2 ].

The following example illustrates how to do this.

Our original situation

We have an offer table which is fed by a host system:

CREATE TABLE offer (
  id int unsigned NOT NULL AUTO_INCREMENT
, `type` CHAR(3) NOT NULL DEFAULT 'AAA'
, data varchar(64) DEFAULT NULL
, PRIMARY KEY (`id`)
, INDEX (type)
) ENGINE=InnoDB;

INSERT INTO offer VALUES (NULL, 'AAA', 'Blablabla');
INSERT INTO offer VALUES (NULL, 'ABC', 'Blablabla');
INSERT INTO offer VALUES (NULL, 'ZZZ', 'Blablabla');

The query we want to perform looks like this:

SELECT COUNT(*) FROM offer;

This query becomes expensive when you have zillions of rows in your table.

The work around

To work around the problem we create a counter table where we count the rows which are inserted, updated or deleted on the offer table.

CREATE TABLE counter (
  `type` char(3) NOT NULL DEFAULT 'AAA'
, `count` MEDIUMINT UNSIGNED NOT NULL DEFAULT 0
, `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
, PRIMARY KEY (type)
) ENGINE=InnoDB;

To fill this counter table we need an initial snapshot:

INSERT INTO counter
SELECT type, COUNT(*), NULL
  FROM offer
 GROUP BY type;

SELECT * FROM counter;
SELECT COUNT(*) FROM counter;

Update the counter table

To keep the counter table up-to-date we need the following 3 triggers:

DROP TRIGGER IF EXISTS insert_offer_trigger;

delimiter //
CREATE TRIGGER insert_offer_trigger
 AFTER INSERT ON offer
   FOR EACH ROW
BEGIN
  INSERT INTO counter VALUES (NEW.type, 1, NULL)
      ON DUPLICATE KEY
  UPDATE count = count + 1, ts = CURRENT_TIMESTAMP();
END;
//
delimiter ;

DROP TRIGGER IF EXISTS update_offer_trigger;

delimiter //
CREATE TRIGGER update_offer_trigger
 AFTER UPDATE ON offer
   FOR EACH ROW
BEGIN
  IF NEW.type = OLD.type THEN
    UPDATE counter SET ts = CURRENT_TIMESTAMP() WHERE type = NEW.type;
  ELSE
    UPDATE counter SET count = count - 1, ts = CURRENT_TIMESTAMP() WHERE type = OLD.type;
    INSERT INTO counter VALUES (NEW.type, 1, NULL)
        ON DUPLICATE KEY
    UPDATE count = count + 1, ts = CURRENT_TIMESTAMP();
  END IF;
END;
//
delimiter ;

DROP TRIGGER IF EXISTS delete_offer_trigger;

delimiter //
CREATE TRIGGER delete_offer_trigger
 AFTER DELETE ON offer
   FOR EACH ROW
BEGIN
  UPDATE counter SET count = count - 1 WHERE type = OLD.type;
END;
//
delimiter ;

Now we can test some cases and compare the results of both tables:

INSERT INTO offer VALUES (NULL, 'AAA', 'Blablabla');
INSERT INTO offer VALUES (NULL, 'AAA', 'Blablabla');

-- Single offer change
UPDATE offer SET data = 'Single offer change' WHERE id = 2;

-- Multi offer change
UPDATE offer SET data = 'Multi offer change' WHERE type = 'AAA';

-- Single offer delete
DELETE FROM offer WHERE id = 1;

-- REPLACE (= DELETE / INSERT)
REPLACE INTO offer VALUES (3, 'ZZZ', 'Single row replace');

-- New type
INSERT INTO offer VALUES (NULL, 'DDD', 'Blablabla');

-- Change of type
UPDATE offer SET type = 'ZZZ' where id = 2;

-- Change of type to new type
UPDATE offer SET type = 'YYY' where id = 3;

-- INSERT on DUPLICATE KEY UPDATE
INSERT INTO offer VALUES (7, 'DDD', 'ON DUPLICATE KEY UPDATE')
    ON DUPLICATE KEY
UPDATE type = 'DDD', data = 'INSERT ON DUPLICATE KEY';
INSERT INTO offer VALUES (7, 'DDD', 'ON DUPLICATE KEY UPDATE')
    ON DUPLICATE KEY
UPDATE type = 'DDD', data = 'UPDATE ON DUPLICATE KEY UPDATE';

SELECT * FROM offer;
SELECT COUNT(*) FROM offer;
SELECT * FROM counter;
SELECT SUM(count) FROM counter;

This solution has the advantage that we also get a very fast response on the number of rows for a specific offer type, which would also be expensive for MyISAM tables...
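For example, the count for one specific offer type becomes a cheap Primary Key lookup on the counter table:

SELECT `count` FROM counter WHERE type = 'AAA';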

MySQL @ FrOSCon 7 in St. Augustin (Germany)

Shinguz - Fri, 2012-06-01 16:21
Taxonomy upgrade extras: mysql, talk, conference, froscon

Also this year we will have a special track for MySQL, Galera, Percona and MariaDB at the FrOSCon in St. Augustin in Germany. The conference is scheduled for August 25 and 26, 2012.

Together with the PostgreSQL people we are organizing a sub-conference for Open Source RDBMS there. Now we are looking for interesting talks about MySQL and related technologies like Galera, Percona and MariaDB. The only restriction on the talks is: they must be about an Open Source topic.

We encourage you to send your proposals.

After registering you can Submit a new event. Choose the Databases track. This makes it easier to assign the proposal.

Regarding the talks: please do NOT propose talks about NON Open Source solutions. A talk can be about some new technical things or about user experience with MySQL technology.

Keep in mind that the audience is going to be technically driven. Think of the audience as colleagues and not as decision makers.

Please help spread the word about the conference by blogging and tweeting about it (#froscon)!

And now let us go...

Oli

How to make the MySQL Performance Monitor work on Windows?

Shinguz - Fri, 2012-04-20 18:22
Taxonomy upgrade extras: performance, performance monitoring, performance monitor, mpm, maas, windows

A customer recently asked why our MySQL Performance Monitor (MPM) does not work on Windows...? The answer is short: It was developed on Linux and never tested on Windows...

But I was wondering how much effort it would take to make it work on Windows as well.

I was quite surprised how quickly the basic functionality could be made to work on Windows. It took me less than one hour to install, configure and patch MPM.

Patch MPM

The file FromDualMySQLagent.pm has to be patched at 2 locations. The lock file name must be something understandable by Windows (for example C:\Temp\FromDualMySQLagent.lock). We will fix that in the next MPM release.

 40 # Should NOT be hard coded, tofix later!!!
 41 # Does not work on Windows!
 42 my $lAgentLockFile = '/tmp/FromDualMySQLagent.lock';
 43 # Check if lock file already exists and complain if yes

...

533 # Does not work on Windows!
534 my $lAgentLockFile = '/tmp/FromDualMySQLagent.lock';
535 if ( ! unlink($lAgentLockFile) ) {
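In both places the line can be changed to a Windows path, for example (the exact path is just a suggestion):

my $lAgentLockFile = 'C:\\Temp\\FromDualMySQLagent.lock';  # Windows variant of lines 42 and 534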

There are at least 2 other parts in the code which cause trouble. But they can be circumvented by disabling the respective modules (server and process), i.e. by configuring MPM accordingly.

A basic MPM configuration file on Windows

We have used the following basic configuration file:

[default]

LogFile       = C:\Users\oli\logs\mpm.log
Debug         = 2
CacheFileBase = C:\Users\oli\cache

MaaS          = on
Hash          = <your hash>
Methtode      = http
Url           = http://support.fromdual.com/maas/receiver.php

[FromDual.Win_laptop]
Modules       = mpm

[FromDual.Win_laptop.win_db]

In your case there is possibly some more configuration needed. For details please look here.

Now we are quite confident that the next MPM release will work more or less out of the box on Windows. If you cannot wait, try it out with this hack. More details about installing MPM on Windows can be found here. If you run into problems please report them in the MPM installation on Windows forum. All paying customers can naturally use our support platform.

MySQL and Galera Load Balancer (GLB)

Shinguz - Sat, 2012-04-07 10:10

When you install a Galera Cluster for MySQL for High Availability (HA), it is not enough to install the database cluster alone to achieve this goal. You also have to make the application aware of this HA functionality. This is typically done with some kind of load balancing mechanism between the database and the application.

We have several possibilities how to make such a load balancing possible:

  • We build such a load balancing mechanism directly into the application.
  • When we use Java or PHP we can use the fail-over functionality of the connectors (Connector/J, mysqlnd-ms).
  • If we cannot touch the application we can put a load balancing mechanism between the application and the database. This can be done, for example, with the Galera Load Balancer (GLB) described below.
Building the Galera Load Balancer

As an example we look at the Galera Load Balancer (GLB). The documentation about it can be found in the README file.

It can be built as follows:

wget http://www.codership.com/files/glb/glb-0.7.4.tar.gz
tar xf glb-0.7.4.tar.gz
cd glb-0.7.4
./configure
make
make install
Starting the Galera Load Balancer

The Galera Load Balancer will be started as follows:

./glbd --daemon --threads 6 --control 127.0.0.1:4444 127.0.0.1:3306 \
  192.168.56.101:3306:1 192.168.56.102:3306:1 192.168.56.103:3306:1

Incoming address: 127.0.0.1:3306, control FIFO: /tmp/glbd.fifo
Control address: 127.0.0.1:4444
Number of threads: 6, source tracking: OFF, verbose: OFF, daemon: YES
Destinations: 3
  0: 192.168.56.101:3306, w: 1.000
  1: 192.168.56.102:3306, w: 1.000
  2: 192.168.56.103:3306, w: 1.000
Querying the Galera Load Balancer

It can be queried as follows:

echo getinfo | nc -q 1 127.0.0.1 4444
Router:
----------------------------------------------------
        Address      :  weight   usage  conns
 192.168.56.101:3306 :   1.000   0.667      2
 192.168.56.102:3306 :   1.000   0.500      1
 192.168.56.103:3306 :   1.000   0.500      1
----------------------------------------------------
Destinations: 3, total connections: 4

and

echo getstats | nc -q 1 127.0.0.1 4444
in: 37349 out: 52598 recv: 89947 / 1989 send: 89947 / 1768 conns: 225 / 4 poll: 1989 / 0 / 1989 elapsed: 76.59987
Draining nodes with Galera Load Balancer

Let's assume we want to take node 192.168.56.101 out of the Load Balancer for maintenance purposes. This can be done as follows:

echo 192.168.56.101:3306:0 | nc -q 1 127.0.0.1 4444

echo getinfo | nc -q 1 127.0.0.1 4444
Router:
----------------------------------------------------
        Address      :  weight   usage  conns
 192.168.56.101:3306 :   0.000   1.000      0
 192.168.56.102:3306 :   1.000   0.667      2
 192.168.56.103:3306 :   1.000   0.667      2
----------------------------------------------------
Destinations: 3, total connections: 4
Removing and adding nodes from Galera Load Balancer

If you want to shrink or grow your database cluster, removing and adding nodes works as follows:

echo 192.168.56.103:3306:-1 | nc -q 1 127.0.0.1 4444
echo 192.168.56.103:3306:2 | nc -q 1 127.0.0.1 4444

And now have fun playing around with your Galera Load Balancer...

FromDual Performance Monitor for MySQL (MPM) v0.9 released

Shinguz - Tue, 2012-04-03 14:03
Taxonomy upgrade extras: performance, enterprise monitor, monitoring, performance monitoring, news, performance monitor, release, graph, galera, mpm, maas

On April 2nd 2012 FromDual released the new version v0.9 of its Performance Monitor for MySQL (mpm). The new version can be downloaded from here.

The Performance Monitor for MySQL (mpm) is an agent which is hooked into Zabbix. Zabbix is an integrated Enterprise Monitoring solution which can produce performance graphs and alerting.

The changes in the new release are:

New functionality
  • A new server module gathers MySQL database specific server information. This is especially interesting for Monitoring as a Service customers.
  • You can now monitor Galera Cluster for MySQL. All important items of Galera Cluster for MySQL up to version 2.0 are gathered, and the important Triggers and Graphs are available. FromDual Performance Monitor for MySQL becomes your indispensable tool for monitoring Galera Cluster!
  • A trigger was added for a low open_files_limit.
Changed functionality
  • Item history was reduced from 90 to 30 days to save disk space.
  • InnoDB items were added and Graphs improved and cleaned-up.
  • MyISAM items were added and Graphs improved.
  • Query Cache items were added.
  • Some triggers were too verbose or complained when they should not. This should be fixed now.
  • MPM v0.9 was tested with Zabbix 1.8.11 and works without any problems.
Fixes
  • Some items were not reported correctly. They have been fixed.
  • Many little bugs in different modules were fixed.

For more detailed information see the CHANGELOG.

Installation and upgrade documentation can be found here.

If you want to stay tuned about the progress of the next release of mpm, follow us on Twitter...

If you find any bug please report them to our bugs database. If you have some questions or if you want to exchange know-how related to the mpm please go to our Forum.
