Learning from the Bugs Database
This week I came across an old known issue reported in May 2010: Master/Slave Replication with binlog_format = ROW
and tables without a Primary Key is a bad idea! Especially if these tables are huge.
Why this is a bad idea is described in the bug report #53375:
If one runs DML on a table that has no indexes, a full table scan is done. With RBR, the slave might need to scan the full table for *each* row changed.
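To make this concrete, here is a minimal sketch with a hypothetical, completely unindexed table (the table and the row counts are illustrative, not from the bug report):

```sql
-- Hypothetical table without any index at all: with RBR, the slave
-- has to locate each changed row by scanning the whole table.
CREATE TABLE session_log (
  user_id  INT,
  action   VARCHAR(64),
  happened DATETIME
) ENGINE=InnoDB;

-- Suppose this DELETE touches 100,000 rows on the master. With
-- binlog_format = ROW each deleted row is replicated individually,
-- and the slave may perform one full table scan per row.
DELETE FROM session_log WHERE happened < '2010-01-01';
```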
The consequence of this behaviour is that the Slave starts lagging. It was further mentioned:
Worst part is that PROCESSLIST, etc. provide absolutely NO obvious indication what is going on, for something that may take 12 hours, 3 days or even more...
Symptoms of this problem are described as follows:
Observe 78,278 row locks but only 10,045 undo log entries, so many more rows being scanned than changed. Also observe 16 row deletes per second but 600,754 row reads per second, the same mismatch between counts suggesting unindexed accesses are happening.
You may also see "invalidating query cache entries (table)" as a symptom in the processlist. If you see that, check whether this is the possible root cause instead of giving full blame to only the query cache.
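On a slave suspected of this problem, the mismatch between rows read and rows changed can be checked with standard status counters (a diagnostic sketch; sample the counters twice and compare the deltas):

```sql
-- Rows read vs. rows actually changed on the slave: a huge gap
-- between these counters hints at unindexed lookups during RBR apply.
SHOW GLOBAL STATUS LIKE 'Innodb_rows_read';
SHOW GLOBAL STATUS LIKE 'Innodb_rows_deleted';
SHOW GLOBAL STATUS LIKE 'Innodb_rows_updated';

-- Row lock and undo log information appears in the monitor output:
SHOW ENGINE INNODB STATUS;

-- And the (unhelpfully vague) state of the replication SQL thread:
SHOW PROCESSLIST;
```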
The suggested workaround is: add a primary key to the table.
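Applied to the hypothetical table above, the workaround is a one-liner (adding a short surrogate key so the slave can locate each changed row by index):

```sql
-- Give the table a short auto-increment surrogate primary key so the
-- slave does an index lookup per row instead of a full table scan.
ALTER TABLE session_log
  ADD COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;
```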
But some users complained:
in my case, my only decent primary key is a surrogate key - and that's untenable because of the locking and lost concurrency (even with lock_mode = 2). Even if I solved that, I'd have to use the surrogate in partitioning - which more or less defeats the purpose of partitioning by lopsiding the partitions.
and others claim:
Adding an "otherwise usable (i.e. to improve query times)" PK is not really an option for them since there are no short unique columns.
A long composite key is also not an option because:
- In InnoDB tables, having a long PRIMARY KEY wastes a lot of space.
- In InnoDB, the records in nonclustered indexes (also called secondary indexes) contain the primary key columns for the row that are not in the secondary index. InnoDB uses this primary key value to search for the row in the clustered index. If the primary key is long, the secondary indexes use more space, so it is advantageous to have a short primary key.
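The space argument can be illustrated with two hypothetical variants of the same table (the column sizes are made up to show the effect):

```sql
-- Long composite PK: every secondary index entry also carries all
-- three PK columns, potentially a couple of hundred bytes each.
CREATE TABLE orders_wide (
  customer_name VARCHAR(100),
  order_ref     VARCHAR(100),
  created_at    DATETIME,
  status        VARCHAR(20),
  PRIMARY KEY (customer_name, order_ref, created_at),
  KEY idx_status (status)
) ENGINE=InnoDB;

-- Short surrogate PK: each idx_status entry carries only the
-- 8-byte id as the row pointer into the clustered index.
CREATE TABLE orders_narrow (
  id            BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_name VARCHAR(100),
  order_ref     VARCHAR(100),
  created_at    DATETIME,
  status        VARCHAR(20),
  KEY idx_status (status)
) ENGINE=InnoDB;
```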
And then comes a first suggestion for solving the issue:
So, we can create a normal short/auto-increment PK, but this is more or less the same as having the internal/hidden InnoDB PK (which does not seem to be used properly for RBR replication purposes).
As mentioned before, possibly the internal/hidden InnoDB PK can be used to resolve this bug.
Shortly after, we get an important piece of information and the learning of the day:
there's nothing that makes the InnoDB internal key consistent between a master and a slave or before and after backup and restore. Row with internal ID 1 can have completely different end user data values on different servers, so it's useless for the purpose being considered here, unfortunately.
Nor is there any prohibition on a slave having a unique key, which will be promoted to PK, even if there is no unique key on the master. They can even have different PKs and there can be good application reasons for doing that. Though we could require at least a unique key on all slaves that matches a master's PK without it hurting unduly.
It is possible to recommend at least _a_ key (not necessarily unique) on the slave and have replication try key-based lookups to narrow down the number of rows that must be examined. That combined with batch processing should cut the pain a lot because we can reasonably ask for at least some non-unique but at least reasonably selective key to use. But this is only recommended, not required. If people want no key, we should let them have no key and be slow.
Then we get some further information on why moving back to SBR is a bad idea:
It is my opinion that switching to SBR has way too many trade-offs (even if only for one table) to call it an acceptable workaround. The main crux of this argument is that just about the only time you run into this bug is when you have tables with a massive amount of rows - which is exactly where you start paying heavy penalties for SBR (locking).
And a new potential problem arises:
As far as how the server should know which key to use - am I correct in assuming that it will use the optimizer to determine the index, and you are asking what would happen if the optimizer picked the wrong one?
Another suggestion for improvement:
Batching looks promising, at least it would reduce the number of scans. But it would still be very painful if no key could be used. While using a key would be very painful if a high percentage of the rows in the table were being touched. So maybe some mixed solution that depends on the count of rows being touched might be best.
Around 2012, an implementation of this batching approach became available.
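If I remember correctly, this batching surfaced in MySQL 5.6 as the slave_rows_search_algorithms variable, whose HASH_SCAN value lets the applier hash the rows of an event and scan the table once per event instead of once per row (check your version's documentation before relying on this):

```sql
-- Prefer index lookups, and fall back to hashing a whole event's
-- rows (one table scan per event, not per row) when no key exists.
SET GLOBAL slave_rows_search_algorithms = 'INDEX_SCAN,HASH_SCAN';
```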
And since 2021 you can force a Primary Key in MySQL 8.0 with sql_require_primary_key.
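A quick sketch of how that option behaves once enabled:

```sql
-- Reject any new or altered table that lacks a primary key:
SET PERSIST sql_require_primary_key = ON;

-- This now fails with an error because the table has no primary key:
CREATE TABLE t (a INT) ENGINE=InnoDB;
```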
How MariaDB solves the problem can be found here: Row-based Replication With No Primary Key.
- Shinguz's blog