Feed aggregator
Last login of MySQL database users
MySQL hosting providers can easily lose track of their customers and of which user or schema is still in use and which is not.
The MySQL database becomes bigger and bigger, uses more and more RAM and disk space and the backup takes longer and longer.
In this situation it would be nice to know, for example, which MySQL database user has logged in within the last 6 months. MySQL database users who did not log in within a defined period can be backed up and removed from the production MySQL database.
The following MySQL login trigger helps to track the logins of all non-SUPER-privileged MySQL users.
First we need a table in which to log the users' logins:
-- DROP DATABASE tracking;
CREATE DATABASE tracking;
use tracking;

-- DROP TABLE IF EXISTS login_tracking;
CREATE TABLE login_tracking (
  user VARCHAR(16)
, host VARCHAR(60)
, ts   TIMESTAMP
, PRIMARY KEY (user, host)
) ENGINE = MyISAM;

Then we need a MySQL stored procedure which does the logging of the login:
-- DROP PROCEDURE IF EXISTS login_trigger;
DELIMITER //

CREATE PROCEDURE login_trigger()
SQL SECURITY DEFINER
BEGIN
  INSERT INTO login_tracking (user, host, ts)
  VALUES (SUBSTR(USER(), 1, INSTR(USER(), '@')-1), SUBSTR(USER(), INSTR(USER(), '@')+1), NOW())
  ON DUPLICATE KEY UPDATE ts = NOW();
END;
//
DELIMITER ;

Then we have to grant the EXECUTE privilege to all users of the database which do not have the SUPER privilege. MySQL users with the SUPER privilege are not logged by the init_connect login trigger hook:
-- REVOKE EXECUTE ON PROCEDURE tracking.login_trigger FROM 'oli'@'%';
GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO 'oli'@'%';

Those GRANTs can be created with the following query:
tee /tmp/grants.sql

SELECT CONCAT("GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO '", user, "'@'", host, "';") AS query
  FROM mysql.user
 WHERE Super_priv = 'N';

notee

+---------------------------------------------------------------------------------+
| query                                                                           |
+---------------------------------------------------------------------------------+
| GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO 'oli'@'localhost';         |
| GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO 'replication'@'127.0.0.1'; |
| GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO 'oli'@'%';                 |
| GRANT EXECUTE ON PROCEDURE tracking.login_trigger TO ''@'localhost';            |
+---------------------------------------------------------------------------------+

As the last step we have to activate the stored procedure by hooking it into the login trigger hook:
-- SET GLOBAL init_connect="";
SET GLOBAL init_connect="CALL tracking.login_trigger()";

If something goes wrong with the login trigger you will find the needed information in the MySQL error log.
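To verify that the hook is active you can check the init_connect variable; the output should look roughly like this:

mysql> SHOW GLOBAL VARIABLES LIKE 'init_connect';
+---------------+-------------------------------+
| Variable_name | Value                         |
+---------------+-------------------------------+
| init_connect  | CALL tracking.login_trigger() |
+---------------+-------------------------------+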
Reporting

To find out which users have logged in we can run the following query:
SELECT * FROM tracking.login_tracking;
+------+-----------+---------------------+
| user | host      | ts                  |
+------+-----------+---------------------+
| oli  | localhost | 2012-11-30 15:36:39 |
+------+-----------+---------------------+

To find at what time a user has logged in last you can run:
SELECT u.user, u.host, l.ts
  FROM mysql.user AS u
  LEFT JOIN tracking.login_tracking AS l ON u.user = l.user AND l.host = u.host
 WHERE u.Super_priv = 'N';
+-------------+-----------+---------------------+
| user        | host      | ts                  |
+-------------+-----------+---------------------+
| oli         | localhost | 2012-12-01 09:55:33 |
| replication | 127.0.0.1 | NULL                |
| crm         | 127.0.0.1 | NULL                |
+-------------+-----------+---------------------+

And to find users which have logged in but cannot be found in the mysql.user table you can run:
SELECT l.user, l.host
  FROM tracking.login_tracking AS l
  LEFT JOIN mysql.user AS u ON u.user = l.user AND l.host = u.host
 WHERE u.user IS NULL;
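Coming back to the original goal of finding users who have not logged in within the last 6 months: once the tracking table has been collecting data long enough, a query along the following lines can be used (a sketch; users without any entry in login_tracking have not logged in since tracking was activated):

SELECT u.user, u.host, l.ts AS last_login
  FROM mysql.user AS u
  LEFT JOIN tracking.login_tracking AS l ON u.user = l.user AND u.host = l.host
 WHERE u.Super_priv = 'N'
   AND (l.ts IS NULL OR l.ts < NOW() - INTERVAL 6 MONTH);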
MySQL backup to file, gzip and load in one step
When a MySQL Slave is set up with mysqldump you have 2 possibilities:
- You dump into a file and then load the data into the Slave with the mysql client utility.
- You dump directly into the mysql client utility.
The first possibility has the advantage that you can start the load again if it failed. You can look into the file (and do some changes if needed).
The second possibility has the advantage that you do not need disk space and that it is possibly faster. But when the load fails you have to start from the very beginning.
What I was looking for is a way to combine everything in one step: dumping to a compressed file and loading the database into the slave at the same time. This is what I found to meet these requirements:
mysqldump --user=root --all-databases --flush-privileges --single-transaction --master-data=1 --quick \
  --flush-logs --triggers --routines --events \
| tee >(gzip > /tmp/full_backup.sql.gz) \
| mysql --user=root --host=192.168.1.60 --port 3306

With this command you can even load several CPUs of the system:
  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
24747 mysql  20   0  534m  56m 5504 S 36.1  0.7 4:12.35  mysqld
 4967 mysql  20   0  402m  33m 5236 S  7.0  0.4 0:02.06  mysqld
 4982 mysql  20   0 23348 2112 1216 S  6.6  0.0 0:01.64  mysqldump
 4984 mysql  20   0 28608 3856 1372 S  5.6  0.0 0:01.58  mysql
 4986 mysql  20   0  4296  688  304 S  5.3  0.0 0:02.10  gzip
 4983 mysql  20   0 98.5m  628  544 S  0.7  0.0 0:00.13  tee

If gzip becomes the bottleneck you can try pigz instead.
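A possible variant of the same pipeline using pigz instead of gzip (a sketch; options, path and host are taken from the example above):

mysqldump --user=root --all-databases --flush-privileges --single-transaction --master-data=1 --quick \
  --flush-logs --triggers --routines --events \
| tee >(pigz > /tmp/full_backup.sql.gz) \
| mysql --user=root --host=192.168.1.60 --port 3306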
Resize XFS file system for MySQL
Important: Before you start any operation mentioned below, take a proper file system backup of the XFS file system you want to resize. If MySQL is running on this mount point, do this with a stopped mysqld. Alternatively you can also use mysqldump to do the MySQL backup, but test the restore time before continuing so you do not experience ugly surprises...
All these operations have to be performed as the root user. First we want to see what mount points are available:
shell> df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       485M   77M  383M  17% /
/dev/sdb1       496M  314M  157M  67% /var/lib/mysql

Our MySQL data are located on /dev/sdb1.
After the file system backup unmount /dev/sdb1 and resize the disk, partition or volume (works for VMware, NetApp filer and similar equipment, for LVM use lvextend):
shell> tar cvf /backup/mysql.tar /var/lib/mysql
shell> umount /var/lib/mysql
shell> fdisk /dev/sdb

Change the units in fdisk to sectors to get a better overview of the beginning and end of your partition:
fdisk> u
Changing display/entry units to sectors

fdisk> p

Disk /dev/sdb: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xea17dfd0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               8     1047503      523748   83  Linux

fdisk> d
Selected partition 1

fdisk> n
Command action
   e   extended
   p   primary partition (1-4)
fdisk> p
Partition number (1-4):
fdisk> 1
First sector (8-2097151, default 8):
Using default value 8
Last sector, +sectors or +size{K,M,G} (8-2097151, default 2097151):
Using default value 2097151

fdisk> w

shell> fdisk /dev/sdb

fdisk> p

Disk /dev/sdb: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xea17dfd0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               8     2097151     1048572   83  Linux

Now the partition has the new size. The next step is to resize the XFS file system. Install the XFS tools if they are not already there:
shell> apt-get install xfsprogs
shell> yum install xfsprogs

Then mount the file system again and extend the XFS file system on-line:
shell> mount /var/lib/mysql
shell> xfs_growfs /var/lib/mysql
shell> df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       485M   77M  383M  17% /
/dev/sdb1       992M  314M  627M  34% /var/lib/mysql

MySQL tmpdir on RAM-disk
MySQL temporary tables are created either in memory (as MEMORY tables) or on disk (as MyISAM tables). You can find out how many tables went to disk and how many went to memory with:
mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Created_tmp_disk_tables | 49094    |
| Created_tmp_tables      | 37842181 |
+-------------------------+----------+

Tables created in memory are typically faster than tables created on disk. Thus we want as many tables as possible to be created in memory.
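The ratio of temporary tables that went to disk can be calculated from these two status values, for example like this (a sketch; it assumes a MySQL version where information_schema.GLOBAL_STATUS is available):

SELECT ROUND(
         (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
           WHERE VARIABLE_NAME = 'Created_tmp_disk_tables') /
         (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
           WHERE VARIABLE_NAME = 'Created_tmp_tables') * 100
       , 2) AS pct_tmp_tables_on_disk;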
To achieve this we can configure the variables accordingly:
mysql> SHOW GLOBAL VARIABLES LIKE '%table_size';
+---------------------+----------+
| Variable_name       | Value    |
+---------------------+----------+
| max_heap_table_size | 25165824 |
| tmp_table_size      | 25165824 |
+---------------------+----------+

All result sets which are smaller than these values can be handled as MEMORY tables. All result sets bigger than these values are handled as MyISAM tables and go to disk.
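If enough RAM is available these limits can be raised, for example as follows (the value is only an illustration; both variables should be set to the same value and the change should also be made persistent in the my.cnf):

mysql> SET GLOBAL max_heap_table_size = 64*1024*1024;
mysql> SET GLOBAL tmp_table_size      = 64*1024*1024;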
But there is still another reason for tables going to disk: MEMORY tables cannot handle TEXT or BLOB attributes, which often occur in CMS like Typo3. In these cases MySQL has to create MyISAM tables directly on disk and they are counted in Created_tmp_disk_tables.
If these temporary disk tables cause serious I/O performance problems one could consider using a RAM-disk instead of normal physical disks.
On Linux we have 2 possibilities to create a RAM-disk: ramfs and tmpfs [ 1 ].
We recommend using tmpfs.
A RAM-disk can be created as follows:
shell> mkdir -p /mnt/ramdisk
shell> chown mysql:mysql /mnt/ramdisk
shell> mount -t tmpfs -o size=512M tmpfs /mnt/ramdisk

To make this persistent we have to add it to the fstab:
#
# /etc/fstab
#
tmpfs  /mnt/ramdisk  tmpfs  rw,mode=1777,size=512M  0  0

MySQL still writes to the default location which is found as follows:
mysql> SHOW GLOBAL VARIABLES LIKE 'tmpdir';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| tmpdir        | /tmp  |
+---------------+-------+

To change this value you have to configure your my.cnf accordingly and restart the database...
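A minimal my.cnf sketch pointing tmpdir to the RAM-disk created above (the path matches the example mount point; adjust it to your setup):

#
# my.cnf
#
[mysqld]
tmpdir = /mnt/ramdisk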
MySQL Backup, HA and Performance Tuning training in Essen (Germany)
Together with the Linux Hotel we now also offer MySQL courses in Essen. The first course takes place this December:
- MySQL Backup, Essen, Germany, December 17, 2012, in German (book)
- MySQL High Availability, Essen, Germany, December 18 - 19, 2012, in German (book)
- MySQL Performance Tuning, Essen, Germany, December 20 - 21, 2012, in German (book)
- MySQL Advanced, Essen, Germany, December 17 - 21, 2012, in German (book)

You can find our complete course offering on our training page.
We are planning an identical offering in Zurich (CH) as well. If you would prefer Zurich, please let us know so that we can make a note of your interest...
Article about MySQL in the new iX
Hello MySQL community,
The new iX (2012-10) contains two nice articles about MySQL: MySQL HA solutions (p. 116) and MySQL 5.6 (p. 66).
Have fun reading!
Galera Cluster Nagios Plugin
Based on customer feedback we have decided to add a plugin for Galera Cluster for MySQL to our MySQL Nagios/Icinga Plugins.
The module checks whether the node is in status Primary and whether the expected number of Galera Cluster nodes is available. If not, a warning or an alarm is returned.
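The checks roughly correspond to the following Galera status variables (shown only as an illustration of what is monitored; the plugin's exact implementation may differ):

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';   -- expected value: Primary
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';     -- expected value: the configured number of nodes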
The script is written in Perl and is Nagios Plugin API v3.0 compatible.
You can download it from our download page.
If you have suggestions for improvements, please contact us. Bugs can be reported at our bugs database.
The following modules are contained in the package:
- check_db_mysql.pl
- check_errorlog_mysql.pl
- check_galera_nodes.pl
- check_repl_mysql_cnt_slave_hosts.pl
- check_repl_mysql_hearbeat.pl
- check_repl_mysql_io_thread.pl
- check_repl_mysql_read_exec_pos.pl
- check_repl_mysql_readonly.pl
- check_repl_mysql_seconds_behind_master.pl
- check_repl_mysql_sql_thread.pl
- perf_mysql.pl
Dates for the next MySQL Cluster trainings scheduled
The dates for the next MySQL Cluster (ndb) trainings with the Linux Hotel are now scheduled: March 11 - 12, 2013 and September 23 - 24, 2013.
You can book your training here.
Building a Data Warehouse with MySQL
Galera Cluster discussions at FrOSCon 2012
During and after Henrik's great talk about Galera Cluster at FrOSCon 2012 in St. Augustin we found 2 important things related to Galera Cluster for MySQL:
- The InnoDB double write buffer (innodb_doublewrite) should not be disabled anymore for Galera when using v2.0 and higher! The reason for this is: when MySQL crashes, InnoDB pages might get corrupted during the crash. They would be fixed by the blocks from the double write buffer during auto-recovery. But if the double write buffer is disabled those blocks are not available. With Galera v1.x that was not a problem because after a crash an SST would have happened and the corrupted InnoDB blocks would have been corrected. But now with IST in Galera v2.0 MySQL will start without noticing the corruption (as usual) and only an IST is performed. This leads to a running MySQL database with possibly corrupted InnoDB blocks. And this might cause you trouble later, for example if this node is used as a donor. Then the corrupted page is passed on to other nodes (using rsync or Xtrabackup?). And in some bad cases the whole Cluster could then crash at once when hitting the corrupted page. Thanks to Monty W. for bringing this up!
  Recommendation is: Do NOT disable the InnoDB double write buffer (innodb_doublewrite) with Galera Cluster >= v2.0 if you care about your data!
- The second discussion was about the event sequence in the binary log (for those who were present: the A-B vs B-A discussion). Codership confirmed that the binary log sequence on 2 different Galera nodes of the same Galera Cluster should be the same (everything else is considered to be a bug). As a result this leads to 2 different consequences:
a) The binary log of node B can be used for a PiTR of node A in case we need it. Finding the right position is a bit tricky and needs some manual work (finding the XID with the binlog position of node B, then finding the binlog position of node A with that XID; see the sketch after this list). But Codership told me they are planning a tool to automate this.
b) The binary log of node B can be used for a channel fail-over in case we have 2 different Galera Clusters in 2 different data centers connected to each other through MySQL asynchronous replication... For more on this topic see also MySQL Cluster and channel failover...
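A rough sketch of the manual XID matching mentioned in a) could look like this (file names and the XID value are only placeholders):

# On node B: find the Xid of the last transaction before the point of interest
shell> mysqlbinlog node-b-bin.000042 | grep 'Xid ='

# On node A: locate the same Xid and note its end_log_pos to continue from there
shell> mysqlbinlog node-a-bin.000017 | grep 'Xid = 123456'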
Deadlocks, indexing and Primary Keys
Recently a customer showed up with deadlocks occurring frequently. They were of the following type (I have shortened the output a bit):
*** (1) TRANSACTION:
TRANSACTION 22723019234, fetching rows
mysql tables in use 1, locked 1
LOCK WAIT 7 lock struct(s), heap size 1216, 14 row lock(s)
update location set expires='2012-08-10 04:50:29' where username='12345678901' AND contact='sip:12345678901@192.168.0.191:5060' AND callid='945a20d4-8945dcb5-15511d3e@192.168.0.191'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 2203904 n bits 136 index `GEN_CLUST_INDEX` of table `location` trx id 22723019234 lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 22723019222, fetching rows, thread declared inside InnoDB 225
mysql tables in use 1, locked 1
192 lock struct(s), heap size 30704, 9483 row lock(s)
delete from location where expires<'2012-08-10 04:49:30' AND expires!='1969-12-31 19:00:00'
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 2203904 n bits 136 index `GEN_CLUST_INDEX` of table `location` trx id 22723019222 lock_mode X
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 2203951 n bits 136 index `GEN_CLUST_INDEX` of table `location` trx id 22723019222 lock_mode X waiting
*** WE ROLL BACK TRANSACTION (1)

They wanted us to have this fixed. And they did not like the answer that the application has to cope with deadlocks [ 1 ].
But one thing looks suspicious here: the GEN_CLUST_INDEX! This basically means there was NO explicit Primary Key on the InnoDB table and InnoDB created its own internal Primary Key.
After some discussion we started to examine the whole situation. For this we transformed the UPDATE and the DELETE statement into SELECTs:
UPDATE

EXPLAIN
SELECT *
  FROM location
 WHERE username='12345678901'
   AND contact='sip:12345678901@192.168.0.191:5060'
   AND callid='945a20d4-8945dcb5-15511d3e@192.168.0.191'
;
+----+-------------+----------+------+---------------+----------+---------+-------+------+------------------------------------+
| id | select_type | table    | type | possible_keys | key      | key_len | ref   | rows | Extra                              |
+----+-------------+----------+------+---------------+----------+---------+-------+------+------------------------------------+
|  1 | SIMPLE      | location | ref  | username      | username | 66      | const |    9 | Using index condition; Using where |
+----+-------------+----------+------+---------------+----------+---------+-------+------+------------------------------------+

The first strange thing is that the MySQL optimizer expects 9 rows (on a long key of 66 bytes) for what theoretically should be a Primary Key access! This sounds non-optimal. And the shorter and faster transactions are, the less probable deadlocks become.
So we tried to look at the transaction with SHOW ENGINE INNODB STATUS:

START TRANSACTION;

SELECT *
  FROM location
 WHERE username='12345678901'
   AND contact='sip:12345678901@192.168.0.191:5060'
   AND callid='945a20d4-8945dcb5-15511d3e@192.168.0.191'
   FOR UPDATE
;

---TRANSACTION 14033
4 lock struct(s), heap size 1248, 11 row lock(s)
MySQL thread id 5, OS thread handle 0x7f3647b9e700, query id 526 localhost root cleaning up

We can see that the same query uses 4 lock structs and locks 11 rows in total. This is similar to what we have seen in the deadlock.
DELETE

EXPLAIN
SELECT *
  FROM location
 WHERE expires < '2012-08-10 04:49:30'
   AND expires != '1969-12-31 19:00:00'
;
+----+-------------+----------+------+---------------+------+---------+------+-------+-------------+
| id | select_type | table    | type | possible_keys | key  | key_len | ref  | rows  | Extra       |
+----+-------------+----------+------+---------------+------+---------+------+-------+-------------+
|  1 | SIMPLE      | location | ALL  | NULL          | NULL | NULL    | NULL | 10754 | Using where |
+----+-------------+----------+------+---------------+------+---------+------+-------+-------------+

Uiii! The DELETE does not use an index at all but does a full table scan, which is not optimal performance-wise...
START TRANSACTION;

SELECT *
  FROM location
 WHERE expires < '2012-08-10 04:49:30'
   AND expires != '1969-12-31 19:00:00'
   FOR UPDATE
;

---TRANSACTION 14034
168 lock struct(s), heap size 31160, 11007 row lock(s)
MySQL thread id 6, OS thread handle 0x7f3647b9e700, query id 663 localhost root cleaning up

And we can see a huge amount of locked rows... The table contains 10840 rows in total. Those numbers differ a bit from the deadlock but that is OK because they do not represent the same point in time.
So we started looking at the table structure. The table which was provided by the customer looks as follows:
CREATE TABLE `location` (
  `username` varchar(64) NOT NULL DEFAULT '',
  `domain` varchar(128) NOT NULL DEFAULT '',
  `contact` varchar(255) NOT NULL DEFAULT '',
  `received` varchar(255) DEFAULT NULL,
  `path` varchar(255) DEFAULT NULL,
  `expires` datetime NOT NULL DEFAULT '2020-01-01 00:00:00',
  `q` float(10,2) NOT NULL DEFAULT '1.00',
  `callid` varchar(255) NOT NULL DEFAULT 'Default-Call-ID',
  `cseq` int(11) NOT NULL DEFAULT '42',
  `last_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `replicate` int(10) unsigned NOT NULL DEFAULT '0',
  `state` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `flags` int(11) NOT NULL DEFAULT '0',
  `cflags` int(11) NOT NULL DEFAULT '0',
  `user_agent` varchar(100) NOT NULL DEFAULT '',
  `socket` varchar(128) DEFAULT NULL,
  `methods` int(11) DEFAULT NULL,
  `id` int(10) NOT NULL DEFAULT '0',
  KEY `username` (`username`,`domain`,`contact`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

First I want to fix the problem of the full table scan:
ALTER TABLE location ADD INDEX (expires);

Then the DELETE looks much better:
+----+-------------+----------+-------+---------------+---------+---------+------+------+-----------------------+
| id | select_type | table    | type  | possible_keys | key     | key_len | ref  | rows | Extra                 |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-----------------------+
|  1 | SIMPLE      | location | range | expires       | expires | 5       | NULL |    2 | Using index condition |
+----+-------------+----------+-------+---------------+---------+---------+------+------+-----------------------+

---TRANSACTION 14074
2 lock struct(s), heap size 1248, 1 row lock(s)
MySQL thread id 6, OS thread handle 0x7f3647b9e700, query id 671 localhost root cleaning up

But I do not know how realistic my actual data are. This can change with another data set!
Now I want to see if there is any difference with the KEY declared as a Primary Key:
ALTER TABLE location DROP INDEX username, ADD PRIMARY KEY (username, domain, contact);

Long indexes are bad for InnoDB. See a blog post which will hopefully appear soon!
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table    | type | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+
|  1 | SIMPLE      | location | ref  | PRIMARY       | PRIMARY | 66      | const |    9 | Using where |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------------+

---TRANSACTION 14145
3 lock struct(s), heap size 1248, 10 row lock(s)
MySQL thread id 6, OS thread handle 0x7f3647b9e700, query id 684 localhost root cleaning up

The execution plan looks the same. OK, one row less is locked. Does this mean a 10% lower probability of deadlocks?
The effect was not as big as expected. So we rolled back the last change (making at least a unique key out of it).
As already mentioned, short Primary Keys are good for InnoDB. And as we will show in a blog post soon, VARCHARs are bad performance-wise. So we try to use the apparently unused field id:
ALTER TABLE location DROP PRIMARY KEY, ADD UNIQUE KEY (username, domain, contact);
ALTER TABLE location MODIFY COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;

EXPLAIN
SELECT *
  FROM location
 WHERE id = 2984
;
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table    | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | location | const | PRIMARY       | PRIMARY | 4       | const |    1 | NULL  |
+----+-------------+----------+-------+---------------+---------+---------+-------+------+-------+

START TRANSACTION;

SELECT *
  FROM location
 WHERE id = 2984
   FOR UPDATE
;

---TRANSACTION 14336
2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 6, OS thread handle 0x7f3647b9e700, query id 701 localhost root cleaning up

And see there: The Query Execution Plan looks much better and locks are much smaller.
As a result:

- Use proper indexing (expires).
- Follow the rules: Creating Primary Keys on relational database tables. See also article: Disadvantages of explicitly NOT using InnoDB Primary Keys?.
- Follow the rules: Create short synthetic Primary Keys and avoid long natural Primary Keys (especially with InnoDB): Clustered and Secondary Indexes and Optimizing InnoDB Queries.
- Make your application aware of deadlocks (and other kinds of aborted transactions) and reissue the transaction if it fails with a deadlock. [ 1 ]
I hope we can soon add results on the impact on deadlock occurrence here.
DOAG SIG MySQL - Replication: September 4, 2012 in Hamburg
On Tuesday, September 4, 2012 the next DOAG SIG MySQL meeting will take place in Hamburg (Germany) with the topic Replication.
Possible presentations are: MySQL Replication, Galera Cluster, Replication with Oracle GoldenGate, Replication with Zimory Scale and Tungsten Replicator.
The event will presumably be located at the Hotel Böttcherhof at Wöhlerstrasse 2.
Frankfurter Datenbanktage 2013
FromDual is giving a talk at the Frankfurter Datenbanktage 2013, March 14/15.