MySQL Tech-Feed (en)

Playing with MariaDB Vector for initial AI tests

Shinguz - Wed, 2024-08-28 10:51

Artificial intelligence (AI) and vector databases are on everyone's lips these days. Since MariaDB will soon be coming onto the market with vector database functionality, as a database consultant I thought it was about time I got to grips with the subject so that I have at least a hint of what it's all about...

Since I'm not so much of a theoretician but rather like to do something practical, I've built a small "AI" prototype that anyone can build very quickly and easily on their laptop (without a GPU)...

I also took the liberty of stealing the graphs from the MariaDB Foundation presentation (see sources at the end).

Downloading the MariaDB database with vector functionality

There are no MariaDB packages with vector functionality yet, but the source code is already available, so you can quickly build the binaries yourself. This took just under an hour on my old box. Extract the source tarball, build, and package the resulting binaries into a binary tarball:

# tar xf mariadb-11.6.0_vector.tar.gz
# cd mariadb-11.6.0_vector/
# cmake .
# make
# make package

The MariaDB database then only needs to be started.

The model

To show the concept of tokenisation, I decided to build an AI for URLs. And to show the concept of different models and their potential for improvement, I built a very stupid model in PHP that simply decomposes a URL.

The question that this model should be able to answer is: "Give me similar URLs to the following URL."

The corresponding table looks like this:

DROP TABLE IF EXISTS `urls`;   -- TRUNCATE TABLE is NOT sufficient!!!

CREATE TABLE `urls` (
  `id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
, `url` varchar(1024) DEFAULT NULL
, `title` varchar(2000) DEFAULT NULL
, `embedding` blob NOT NULL
, VECTOR KEY `embedding` (`embedding`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
;
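Answering the question above then boils down to a nearest-neighbour query against this table. The generic shape, using the same VEC_DISTANCE() and VEC_FromText() functions that are used further down in this post (the vector literal here is only a placeholder for the embedding of the query URL), looks roughly like this:

SELECT id, url, title
  FROM urls
 ORDER BY VEC_DISTANCE(embedding, VEC_FromText('[0.2, 0.01, 0, 0, 0, 0.001, 0, 0]'))
 LIMIT 3
;

The VECTOR KEY on the embedding column is there so that MariaDB can answer such queries without having to compare the query vector against every single row.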

The model fromdual_llm_v1 can be downloaded here.

This diagram from the MariaDB Foundation shows roughly how the whole thing works:

Training the AI

The database is then trained: the URL is taken as given, and the title can be read out using an HTML scraper, for example. The training data consists of the following 8 URLs and their titles (see the INSERT statements below).


The vectors are then generated using our model:

(./fromdual_llm_v1.php https://mariadb.org/download/?t=mariadb&p=mariadb&r=11.6.0+Vector&os=source
 ./fromdual_llm_v1.php https://mariadb.com/kb/en/creating-the-mariadb-binary-tarball/
 ./fromdual_llm_v1.php https://mariadb.org/wp-content/uploads/2024/02/MariaDB-Vector.pdf
 ./fromdual_llm_v1.php https://mariadb.com/resources/blog/mariadb-vector-preview-is-out/
 ./fromdual_llm_v1.php https://mariadb.org/projects/mariadb-vector/
 ./fromdual_llm_v1.php https://metacpan.org/pod/Perl::Tokenizer
 ./fromdual_llm_v1.php https://www.qwak.com/post/utilizing-llms-with-embedding-stores
 ./fromdual_llm_v1.php https://github.com/qwak-ai/qwak-examples/tree/main/qa_bot_falcon_chroma) | grep '^\['

[0.2, 0.0107421875, 0, 0, 0, 0.0006103515625, 0.00054931640625, 0]
[0.2, 0.0107421875, 0, 0, 0, 0.00262451171875, 0, 0]
[0.2, 0.0107421875, 0, 0, 0, 0.0028076171875, 0, 0]
[0.2, 0.0107421875, 0, 0, 0, 0.0028076171875, 0, 0]
[0.2, 0.0107421875, 0, 0, 0, 0.00152587890625, 0, 0]
[0.2, 0.01171875, 0, 0, 0, 0.001220703125, 0, 0]
[0.2, 0.01171875, 0, 0, 0, 0.0025634765625, 0, 0]
[0.2, 0.009765625, 0, 0, 0, 0.00323486328125, 0, 0]

The database is now fed (trained) with these vectors:

INSERT INTO `urls` (id, url, title, embedding) VALUES (NULL, 'https://mariadb.org/download/?t=mariadb&p=mariadb&r=11.6.0+Vector&os=source', 'Download MariaDB Server', VEC_FromText('[0.2, 0.0107421875, 0, 0, 0, 0.00262451171875, 0, 0]'));
INSERT INTO `urls` (id, url, title, embedding) VALUES (NULL, 'https://mariadb.com/kb/en/creating-the-mariadb-binary-tarball/', 'Creating the MariaDB Binary Tarball', VEC_FromText('[0.2, 0.0107421875, 0, 0, 0, 0.0006103515625, 0.00054931640625, 0]'));
INSERT INTO `urls` (id, url, title, embedding) VALUES (NULL, 'https://mariadb.org/wp-content/uploads/2024/02/MariaDB-Vector.pdf', 'MariaDB Vector', VEC_FromText('[0.2, 0.0107421875, 0, 0, 0, 0.0028076171875, 0, 0]'));
INSERT INTO `urls` (id, url, title, embedding) VALUES (NULL, 'https://mariadb.com/resources/blog/mariadb-vector-preview-is-out/', 'MariaDB Vector preview is out', VEC_FromText('[0.2, 0.0107421875, 0, 0, 0, 0.0028076171875, 0, 0]'));
INSERT INTO `urls` (id, url, title, embedding) VALUES (NULL, 'https://mariadb.org/projects/mariadb-vector/', 'MariaDB Vector', VEC_FromText('[0.2, 0.0107421875, 0, 0, 0, 0.00152587890625, 0, 0]'));
INSERT INTO `urls` (id, url, title, embedding) VALUES (NULL, 'https://metacpan.org/pod/Perl::Tokenizer', 'Perl::Tokenizer - A tiny Perl code tokenizer', VEC_FromText('[0.2, 0.01171875, 0, 0, 0, 0.001220703125, 0, 0]'));
INSERT INTO `urls` (id, url, title, embedding) VALUES (NULL, 'https://www.qwak.com/post/utilizing-llms-with-embedding-stores', 'Integrating Vector Databases with LLMs: A Hands-On Guide', VEC_FromText('[0.2, 0.01171875, 0, 0, 0, 0.0025634765625, 0, 0]'));
INSERT INTO `urls` (id, url, title, embedding) VALUES (NULL, 'https://github.com/qwak-ai/qwak-examples/tree/main/qa_bot_falcon_chroma', 'LLM Model Enhanced with Vector DB', VEC_FromText('[0.2, 0.009765625, 0, 0, 0, 0.00323486328125, 0, 0]'));

Here is an overview of what is now in the database:

SELECT id, url, title, VEC_ToText(embedding)
  FROM urls
;
+----+------------------------------------------------------------------------------+-----------------------------------------------------------+----------------------------------------------------------------------------+
| id | url                                                                          | title                                                     | VEC_ToText(embedding)                                                      |
+----+------------------------------------------------------------------------------+-----------------------------------------------------------+----------------------------------------------------------------------------+
|  1 | https://mariadb.org/download/?t=mariadb&p=mariadb&r=11.6.0+Vector&os=source  | Download MariaDB Server                                   | [0.200000,0.010742,0.000000,0.000000,0.000000,0.002625,0.000000,0.000000]  |
|  2 | https://mariadb.com/kb/en/creating-the-mariadb-binary-tarball/               | Creating the MariaDB Binary Tarball                       | [0.200000,0.010742,0.000000,0.000000,0.000000,0.000610,0.000549,0.000000]  |
|  3 | https://mariadb.org/wp-content/uploads/2024/02/MariaDB-Vector.pdf            | MariaDB Vector                                            | [0.200000,0.010742,0.000000,0.000000,0.000000,0.002808,0.000000,0.000000]  |
|  4 | https://mariadb.com/resources/blog/mariadb-vector-preview-is-out/            | MariaDB Vector preview is out                             | [0.200000,0.010742,0.000000,0.000000,0.000000,0.002808,0.000000,0.000000]  |
|  5 | https://mariadb.org/projects/mariadb-vector/                                 | MariaDB Vector                                            | [0.200000,0.010742,0.000000,0.000000,0.000000,0.001526,0.000000,0.000000]  |
|  6 | https://metacpan.org/pod/Perl::Tokenizer                                     | Perl::Tokenizer - A tiny Perl code tokenizer              | [0.200000,0.011719,0.000000,0.000000,0.000000,0.001221,0.000000,0.000000]  |
|  7 | https://www.qwak.com/post/utilizing-llms-with-embedding-stores               | Integrating Vector Databases with LLMs: A Hands-On Guide  | [0.200000,0.011719,0.000000,0.000000,0.000000,0.002563,0.000000,0.000000]  |
|  8 | https://github.com/qwak-ai/qwak-examples/tree/main/qa_bot_falcon_chroma      | LLM Model Enhanced with Vector DB                         | [0.200000,0.009766,0.000000,0.000000,0.000000,0.003235,0.000000,0.000000]  |
+----+------------------------------------------------------------------------------+-----------------------------------------------------------+----------------------------------------------------------------------------+

Search in the MariaDB vector database

Now comes the exciting part of the whole story: can we actually find similar entries in our MariaDB vector database of URLs?

How this works schematically can again be seen in the MariaDB Foundation diagram:

The first attempt is a perfect match:

./fromdual_llm_v1.php https://mariadb.org/projects/mariadb-vector/
[0.2, 0.0107421875, 0, 0, 0, 0.00152587890625, 0, 0]

SELECT id, url, title, VEC_ToText(embedding)
  FROM urls
 ORDER BY VEC_DISTANCE(embedding, VEC_FromText('[0.2, 0.0107421875, 0, 0, 0, 0.00152587890625, 0, 0]'))
 LIMIT 3
;
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+
| id | url                                                             | title                                          | VEC_ToText(embedding)                                                      |
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+
|  5 | https://mariadb.org/projects/mariadb-vector/                    | MariaDB Vector                                 | [0.200000,0.010742,0.000000,0.000000,0.000000,0.001526,0.000000,0.000000]  |
|  6 | https://metacpan.org/pod/Perl::Tokenizer                        | Perl::Tokenizer - A tiny Perl code tokenizer.  | [0.200000,0.011719,0.000000,0.000000,0.000000,0.001221,0.000000,0.000000]  |
|  2 | https://mariadb.com/kb/en/creating-the-mariadb-binary-tarball/  | Creating the MariaDB Binary Tarball            | [0.200000,0.010742,0.000000,0.000000,0.000000,0.000610,0.000549,0.000000]  |
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+

The first row matches 100%. Then the results get much worse relatively quickly...

The second attempt is a similar URL:

./fromdual_llm_v1.php https://mariadb.com/kb/en/e4201/
[0.2, 0.0107421875, 0, 0, 0, 0.00079345703125, 0, 0]

SELECT id, url, title, VEC_ToText(embedding)
  FROM urls
 ORDER BY VEC_DISTANCE(embedding, VEC_FromText('[0.2, 0.0107421875, 0, 0, 0, 0.00079345703125, 0, 0]'))
 LIMIT 3
;
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+
| id | url                                                             | title                                          | VEC_ToText(embedding)                                                      |
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+
|  2 | https://mariadb.com/kb/en/creating-the-mariadb-binary-tarball/  | Creating the MariaDB Binary Tarball            | [0.200000,0.010742,0.000000,0.000000,0.000000,0.000610,0.000549,0.000000]  |
|  5 | https://mariadb.org/projects/mariadb-vector/                    | MariaDB Vector                                 | [0.200000,0.010742,0.000000,0.000000,0.000000,0.001526,0.000000,0.000000]  |
|  6 | https://metacpan.org/pod/Perl::Tokenizer                        | Perl::Tokenizer - A tiny Perl code tokenizer.  | [0.200000,0.011719,0.000000,0.000000,0.000000,0.001221,0.000000,0.000000]  |
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+

Here I would expect only mariadb URLs among the first 3 hits. But this is not the case. So our model still has room for improvement here!

And another similar URL:

./fromdual_llm_v1.php https://mariadb.com/kb/en/vec_totext/
[0.2, 0.0107421875, 0, 0, 0, 0.0010986328125, 0, 0]

SELECT id, url, title, VEC_ToText(embedding)
  FROM urls
 ORDER BY VEC_DISTANCE(embedding, VEC_FromText('[0.2, 0.0107421875, 0, 0, 0, 0.0010986328125, 0, 0]'))
 LIMIT 3
;
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+
| id | url                                                             | title                                          | VEC_ToText(embedding)                                                      |
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+
|  5 | https://mariadb.org/projects/mariadb-vector/                    | MariaDB Vector                                 | [0.200000,0.010742,0.000000,0.000000,0.000000,0.001526,0.000000,0.000000]  |
|  2 | https://mariadb.com/kb/en/creating-the-mariadb-binary-tarball/  | Creating the MariaDB Binary Tarball            | [0.200000,0.010742,0.000000,0.000000,0.000000,0.000610,0.000549,0.000000]  |
|  6 | https://metacpan.org/pod/Perl::Tokenizer                        | Perl::Tokenizer - A tiny Perl code tokenizer.  | [0.200000,0.011719,0.000000,0.000000,0.000000,0.001221,0.000000,0.000000]  |
+----+-----------------------------------------------------------------+------------------------------------------------+----------------------------------------------------------------------------+

The same problem here: the hostname is not weighted strongly enough. One can/must probably play with the spread that mariadb.org and mariadb.com generate in the corresponding dimension.

And last but not least, a URL that does not appear in the data set at all:

./fromdual_llm_v1.php https://www.mongodb.com/blog/post/vector-search-llm-essentials-what-when-why
[0.2, 0.0146484375, 0, 0, 0, 0.00323486328125, 0, 0]

SELECT id, url, title, VEC_ToText(embedding)
  FROM urls
 ORDER BY VEC_DISTANCE(embedding, VEC_FromText('[0.2, 0.0146484375, 0, 0, 0, 0.00323486328125, 0, 0]'))
 LIMIT 5
;
+----+------------------------------------------------------------------------------+-----------------------------------------------------------+----------------------------------------------------------------------------+
| id | url                                                                          | title                                                     | VEC_ToText(embedding)                                                      |
+----+------------------------------------------------------------------------------+-----------------------------------------------------------+----------------------------------------------------------------------------+
|  7 | https://www.qwak.com/post/utilizing-llms-with-embedding-stores               | Integrating Vector Databases with LLMs: A Hands-On Guide  | [0.200000,0.011719,0.000000,0.000000,0.000000,0.002563,0.000000,0.000000]  |
|  6 | https://metacpan.org/pod/Perl::Tokenizer                                     | Perl::Tokenizer - A tiny Perl code tokenizer.             | [0.200000,0.011719,0.000000,0.000000,0.000000,0.001221,0.000000,0.000000]  |
|  3 | https://mariadb.org/wp-content/uploads/2024/02/MariaDB-Vector.pdf            | MariaDB Vector                                            | [0.200000,0.010742,0.000000,0.000000,0.000000,0.002808,0.000000,0.000000]  |
|  4 | https://mariadb.com/resources/blog/mariadb-vector-preview-is-out/            | MariaDB Vector preview is out                             | [0.200000,0.010742,0.000000,0.000000,0.000000,0.002808,0.000000,0.000000]  |
|  1 | https://mariadb.org/download/?t=mariadb&p=mariadb&r=11.6.0+Vector&os=source  | Download MariaDB Server                                   | [0.200000,0.010742,0.000000,0.000000,0.000000,0.002625,0.000000,0.000000]  |
+----+------------------------------------------------------------------------------+-----------------------------------------------------------+----------------------------------------------------------------------------+

Here the result seems completely arbitrary at first. But if you compare the query vector with the vectors of the results, the order does make sense... Are the dimensions of the vector evaluated from left to right? After all, what has to be determined is the distance between two points in 8-dimensional space...
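To take the guesswork out of this, you can also select the distance itself instead of only sorting by it. A small sketch with the same query vector as above (as far as I understand it, VEC_DISTANCE() computes the distance over all 8 dimensions at once, not dimension by dimension from left to right):

SELECT id, url
     , VEC_DISTANCE(embedding, VEC_FromText('[0.2, 0.0146484375, 0, 0, 0, 0.00323486328125, 0, 0]')) AS distance
  FROM urls
 ORDER BY distance
 LIMIT 5
;

That way you see immediately how close (or far) the hits really are, instead of only their ranking.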

Improvements in the model

The results of our AI are not yet particularly impressive. On the one hand, this is certainly due to the very limited amount of data, on the other hand, we have not yet modelled very important criteria in our model or have used completely nonsensical criteria.

Suggestions for improvement for a next model: The hostname could also be tokenised so that mariadb.com and mariadb.org are closer together.

The length of hostname, path, query and fragment is certainly not a particularly clever criterion for mapping the similarity of URLs. Much more intelligence would be needed in the model here. A function 1/CRC32(dim) might already provide slightly better results?

The title could be included, or at least the most important words (nouns, verbs) from the title.

The document type (MIME type) could be included: is a PDF more similar to another PDF than to a CSV file, a static HTML page or a dynamic PHP page?
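By the way, the 1/CRC32(dim) idea from above can be eyeballed directly in the database, since CRC32() is a built-in function. A purely illustrative check of what such a dimension would produce for different hostnames (not part of the model yet):

SELECT 1/CRC32('mariadb.org')  AS `mariadb.org`
     , 1/CRC32('mariadb.com')  AS `mariadb.com`
     , 1/CRC32('metacpan.org') AS `metacpan.org`
;

Whether the resulting values really bring similar hostnames closer together would of course have to be verified with real data.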

Points that stood out while playing

The number of dimensions in a vector seems to be fixed by the first INSERT. If you subsequently insert data with a different vector length, the following error appears:

INSERT INTO products (name, description, embedding) VALUES (
  'Coffee Machine'
, 'Built to make the best coffee you can imagine'
, VEC_FromText('[0.2, 0.013671875, 0, 0, 0, 6.103515625E-5, 0, 0]')
);
ERROR 1366 (22007): Incorrect vector value: '...' for column `test`.`products`.`embedding` at row 1

Changing the vector length is currently NOT possible with a TRUNCATE TABLE command. The table must be dropped (DROP TABLE) and created again (CREATE TABLE).

However, searching with a shorter vector is possible:

SELECT id, url, title, VEC_ToText(embedding)
  FROM urls
 ORDER BY VEC_DISTANCE(embedding, VEC_FromText('[0.2, 0.0107421875]'))
 LIMIT 3
;
+----+---------------------------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------+
| id | url                                                                       | title                                | VEC_ToText(embedding)                                                      |
+----+---------------------------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------+
|  8 | https://github.com/qwak-ai/qwak-examples/tree/main/qa_bot_falcon_chroma  | LLM Model Enhanced with Vector DB    | [0.200000,0.009766,0.000000,0.000000,0.000000,0.003235,0.000000,0.000000]  |
|  2 | https://mariadb.com/kb/en/creating-the-mariadb-binary-tarball/           | Creating the MariaDB Binary Tarball  | [0.200000,0.010742,0.000000,0.000000,0.000000,0.000610,0.000549,0.000000]  |
|  5 | https://mariadb.org/projects/mariadb-vector/                             | MariaDB Vector                       | [0.200000,0.010742,0.000000,0.000000,0.000000,0.001526,0.000000,0.000000]  |
+----+---------------------------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------+

I cannot (yet) judge how useful this result is.

Sources
Taxonomy upgrade extras: mariadb, ai, vector, artificial intelligence

FromDual Backup and Recovery Manager for MariaDB and MySQL 2.3.1 has been released

Shinguz - Mon, 2024-08-12 16:25

FromDual has the pleasure to announce the release of the new version 2.3.1 of its popular Backup and Recovery Manager for MariaDB and MySQL (brman).

The new FromDual Backup and Recovery Manager can be downloaded from here. The FromDual Repositories were updated. How to install and use the Backup and Recovery Manager is described in FromDual Backup and Recovery Manager (brman) installation guide.

In the inconceivable case that you find a bug in the FromDual Backup and Recovery Manager please report it to the FromDual Bugtracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 2.x to 2.3.1

shell> cd ${HOME}/product
shell> tar xf /download/brman-2.3.1.tar.gz
shell> rm -f brman
shell> ln -s brman-2.3.1 brman
Changes in FromDual Backup and Recovery Manager 2.3.1

This release is a new minor release. It contains mainly bug fixes. We have tried to maintain backward-compatibility with the 1.2, 2.0, 2.1 and 2.2 release series. But you should test the new release seriously!

You can verify your current FromDual Backup Manager version with the following command:

shell> fromdual_bman --version
shell> bman --version
shell> rman --version
General
  • Testing: LXC library updated.
  • Testing: Test preparation improved.
  • General: CHANGELOG updated.
  • General: rc made unique.
  • Documentation: Documentation improved.
  • Documentation: Create user documented.
  • Documentation: New release notes added to documentation.
  • Packaging: Distro Debian 10 removed.

FromDual Backup Manager (bman)
  • Utility mariabackup replaced by mariadb-backup.
  • Branch guessing improved.
  • Code refactored.
  • No realistic password is used any more.
  • Error message for missing client utilities made more distro specific.

FromDual Recovery Manager (rman)
  • Branch guessing improved.
  • Code refactored.
  • No realistic password is used any more.
  • Error message for missing client utilities made more distro specific.
  • Typo error fixed, tests refactored.
  • In some error cases all processes were killed instead of just the dummy database daemon. This is caught correctly now.
  • Error message improved to get more information in case of errors.
  • A physical schema backup led to a dump, which is wrong. This is now caught and aborted with an error.

Subscriptions for the commercial use of the FromDual Backup and Recovery Manager can be obtained from us.

Taxonomy upgrade extras: Backup, Restore, Recovery, pitr, brman, release, bman, rman, FromDual Backup and Recovery Manager

Partial physical database restore for MariaDB and MySQL

Shinguz - Tue, 2024-07-02 18:13
What is it about?

When describing backup and restore scenarios, a full backup and a full restore of the database instance (mariadbd/mysqld) are usually assumed. This means that the entire database instance, including all databases (schemas), is backed up and restored.

In practice, however, the situation is often different: an entire database instance is not to be restored, but only individual databases or even individual tables, because only these are broken.

In many cases, this can be done quite easily with the tools mariadb-dump/mariadb or mysqldump/mysql (logical backup). However, if the database or table is very large, the restore will no longer complete within a reasonable time (i.e. some minutes to a few hours).

This is exactly where the so-called partial physical restore comes into play. Partial stands for one or more tables (or an entire database); physical means that no individual SQL statements are executed, but the data files are copied back physically. In this scenario, very large amounts of data can be restored very quickly, provided the appropriate infrastructure is in place. Rule of thumb: on fat hardware, about 1 Tbyte per hour. In this way, database restores can be carried out very quickly.

MariaDB and MySQL already offer this functionality out of the box. The mechanism is reasonably practicable for individual tables (see Restore partial backup). For entire databases with possibly dozens or hundreds of tables, however, the on-board mechanism is very cumbersome and error-prone.
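For a single InnoDB table, this on-board mechanism essentially boils down to discarding the tablespace, copying the prepared .ibd (and .cfg) file from the backup into the datadir and importing the tablespace again. A rough sketch of the SQL side (the table name test.test is just an example; the file copy itself happens on OS level, as can also be seen in the rman output further down):

-- On the instance to be repaired (the table definition must already exist):
SET SESSION foreign_key_checks = 0;
ALTER TABLE `test`.`test` DISCARD TABLESPACE;

-- Now copy test.ibd and test.cfg from the prepared backup into the datadir
-- of the instance (OS level, e.g. with cp and chown to the mysql user).

ALTER TABLE `test`.`test` IMPORT TABLESPACE;

Doing this by hand for dozens or hundreds of tables is exactly the cumbersome and error-prone part mentioned above.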

Use case

This is exactly where the new functionality of the FromDual Backup and Recovery Manager (brman) v2.3.0 comes into play: it considerably simplifies the partial physical database restore.

A second scenario in which this new functionality can also be used is when moving a large database from one database instance to another (e.g. from Dev to Prod).

Preparations for the partial physical database restore

In order to be able to restore a database, a clean backup must of course first be available. This can either be created with the FromDual Backup Manager (bman):

PORT=3306
BACKUPNAME=bck_full_2024-07-01
BACKUPDIR=/tmp/bck

./brman/bin/bman --target=brman:secret@127.0.0.1:${PORT} --type=full --mode=physical --policy=daily --backupdir=${BACKUPDIR} --backup-name=${BACKUPNAME} --no-compress

or you can simply create the backup with the MariaDB (mariadb-backup) or MySQL on-board tools (xtrabackup):

PORT=3306
BACKUPNAME=bck_full_2024-07-01
BACKUPDIR=/tmp/bck
POLICY=daily

mariadb-backup --user=brman --password=secret --host=127.0.0.1 --port=${PORT} --backup --target-dir=${BACKUPDIR}/${POLICY}/${BACKUPNAME}
mariadb-backup --user=brman --password=secret --host=127.0.0.1 --port=${PORT} --prepare --target-dir=${BACKUPDIR}/${POLICY}/${BACKUPNAME}
Partial physical database restore

To perform a partial physical database restore, the database must be running, in contrast to a complete physical restore.

The partial physical database restore is then simple:

PORT=3306
DATADIR=/var/lib/mysql
BACKUPNAME=bck_full_2024-07-01
BACKUPDIR=/tmp/bck

./brman/bin/rman --target=brman:secret@127.0.0.1:${PORT} --type=schema --mode=physical --policy=daily --schema=test --log=/tmp/rman.log --backupdir=${BACKUPDIR} --datadir=${DATADIR} --backup-name=${BACKUPNAME}
...
Start restore at 2024-07-01 16:29:48
Backup with tool mariabackup version 10.11.8 (from path /home/mysql/product/mariadb-10.11/bin/mariabackup).
Parent: We are the parent. Our child is: 63712. Waiting for database daemon...
Child: We are the child: Starting database daemon...
Child: Change ownership of database files (/tmp/bck/daily/bck_full_2024-07-01) to mysql
Child: /home/mysql/product/mariadb-10.11/bin/mariadbd --no-defaults --user=mysql --basedir=/home/mysql/product/mariadb-10.11 --datadir=/tmp/bck/daily/bck_full_2024-07-01 --log-error=/tmp/my.err --port=3360 --socket=/tmp/my.sock --lower-case-table-names=0
Parent: Tables not InnoDB or sequences: 0
Parent: Tables with partitions: 0
Parent: Tables with full-text index: 0
Parent: InnoDB table `test` found to restore
Parent: Dump database test
Parent: /home/mysql/product/mariadb-10.11/bin/mariadb-dump --user=brman --host=127.0.0.1 --port=3360 --routines --events --triggers --no-data --skip-lock-tables --add-drop-database --databases test
Parent: Shutdown backup database.
Restore empty database test
Prepare and export tables: /home/mysql/product/mariadb-10.11/bin/mariabackup --user=brman --host=127.0.0.1 --port=3321 --prepare --export --databases=test --target-dir=/tmp/bck/daily/bck_full_2024-07-01
SET SESSION foreign_key_checks = 0
SET SESSION sql_log_bin = off
Restore table test
ALTER TABLE `test`.`test` DISCARD TABLESPACE
cp /tmp/bck/daily/bck_full_2024-07-01/test/test.cfg /home/mysql/database/mariadb-1011/data/test/test.cfg
cp /tmp/bck/daily/bck_full_2024-07-01/test/test.ibd /home/mysql/database/mariadb-1011/data/test/test.ibd
chown mysql: /home/mysql/database/mariadb-1011/data/test/test.cfg /home/mysql/database/mariadb-1011/data/test/test.ibd
ALTER TABLE `test`.`test` IMPORT TABLESPACE
rm /home/mysql/database/mariadb-1011/data/test/test.cfg
rm /tmp/bck/daily/bck_full_2024-07-01/test/test.cfg
----------------------------------------
WARNING: You should restart the database now! Otherwise possible future backups may fail.
See MDEV-34418 (https://jira.mariadb.org/browse/MDEV-34418).
----------------------------------------
Restore time was: 0d 0h 0' 2"
End restore at 2024-07-01 16:29:50 (rc=0)

For MariaDB, it is recommended to restart the database afterwards until the bug MDEV-34418: mariadb-backup fails on database which was partially restored with mariadb-backup is fixed. This step can be omitted for MySQL.

Restrictions

At present, the following restrictions still apply to the partial physical restore of databases with rman:

  • Only entire databases can be restored. Restoring individual tables is not yet implemented. Use the basic on-board tools for this.
  • Restoring partitioned tables is not yet implemented. Use the basic on-board tools for partitioned tables.
  • A subsequent point-in-time recovery of the database is not yet implemented and must be carried out manually.
  • A partial physical database restore for an entire Galera cluster is not yet implemented and must be performed manually. In this case, a restore to one Galera node and a subsequent synchronisation of the other nodes using SST is recommended.
  • With a physical partial database restore, a pseudo instance is started on the backup files. This pseudo instance requires a free port 3360.
  • The backup files must already be in a consistent state (--prepare).
  • With a partial physical database restore, a logical backup of the database is created without the data on the pseudo instance. This backup is restored to the instance to be repaired. This means that all objects (views, triggers, functions, procedures, events, etc.) that were created AFTER the complete physical backup are deleted before the partial physical database restore and are then no longer available.
  • The original database instance from which the backup was created and the instance on which the restore is performed must have the same setting for lower_case_table_names.
  • The backup, the database instance and the rman tool must all be located on the same machine.
  • The backup must currently still be available in uncompressed form.

Literature
Taxonomy upgrade extras: partial restore, Restore, database, schema

FromDual Backup and Recovery Manager for MariaDB and MySQL 2.3.0 has been released

Shinguz - Tue, 2024-06-25 15:12

FromDual has the pleasure to announce the release of the new version 2.3.0 of its popular Backup and Recovery Manager for MariaDB and MySQL (brman).

The new FromDual Backup and Recovery Manager can be downloaded from here. The FromDual Repositories were updated. How to install and use the Backup and Recovery Manager is described in FromDual Backup and Recovery Manager (brman) installation guide.

In the inconceivable case that you find a bug in the FromDual Backup and Recovery Manager please report it to the FromDual Bugtracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 2.x to 2.3.0

shell> cd ${HOME}/product
shell> tar xf /download/brman-2.3.0.tar.gz
shell> rm -f brman
shell> ln -s brman-2.3.0 brman
Changes in FromDual Backup and Recovery Manager 2.3.0

This release is a new major release. The most important new feature is the partial physical restore. We have tried to maintain backward-compatibility with the 1.2, 2.0, 2.1 and 2.2 release series. But you should test the new release seriously!

You can verify your current FromDual Backup Manager version with the following command:

shell> fromdual_bman --version
shell> bman --version
shell> rman --version
General
  • Packaging: Release el9 added and Debian package format increased from 9 to 10 because of Debian 12.
  • Documentation prepared for Asciidoctor.
  • Distribution-dependent things fixed.
  • Build script improved.
  • Typos fixed.
  • All print replaced by standard PHP echo commands.
  • Redhat replaced by Red Hat and CentOS by Rocky Linux.
  • Copyright year updated to 2024.
  • Library myEnv updated (getDistribution, lsb_release removed).

FromDual Backup Manager (bman)
  • Bug in sftp transfer for archiving files fixed.
  • Also create subdirectories only once if necessary in physical backup.
  • Archive location directory is only created once in physical backup. This avoids errors and makes the backup slightly faster.
  • In function createArchiveDestinationDirectory php-ssh2 calls were replaced by the more stable scp/sftp calls.
  • Bug in archive destination directory creation fixed and example improved.
  • Library LXClib added for testing.
  • Number of lines in dump parsing increased because of changes in MySQL 8.0.
  • Binlog backup debug output made nicer.
  • Retention time increased from 800 days to 1200 days (more than 3 years), tests added for this, newer DB versions added to tests.
  • Advice mentioning apt-get and yum replaced by apt and dnf.
  • Error was not returned correctly when bman aborts because of MyISAM tables. Fixed.
  • Syntax error fixed.
  • flock on fpmmm file added.
  • BINLOG MONITOR privilege check added for MariaDB 10.11.
  • Code clean-up.
  • Example enhanced.
  • Output made more clear for trouble shooting (doCleanup).

FromDual Recovery Manager (rman)
  • Various bugs fixed found during automated testing.
  • Some minor fixes about timing and repeatable testing, warnings and messages improved.
  • mariadb_backup_binlog_info case (MariaDB 11.4) added for physical full backup.
  • Code refactored and case when backup is compressed caught.
  • Function doPhysicalRestore renamed to doFullPhysicalRestore.
  • Split restore functions in their own files.
  • sql_log_bin is used for schema restore on partial physical restore.
  • New example for partial physical restore added.
  • New method for partial physical restore implemented.
  • Partial physical schema restore added.

Subscriptions for the commercial use of the FromDual Backup and Recovery Manager can be obtained from us.

Taxonomy upgrade extras: Backup, Restore, Recovery, pitr, brman, release, bman, rman, FromDual Backup and Recovery Manager

Shrinking the InnoDB system tablespace

Shinguz - Mon, 2024-06-17 14:45

One feature that really excited me in the new MariaDB 11.4 LTS release is the shrinking of the system tablespace (ibdata1). I have been eagerly waiting for this feature since around 2006 and now it has finally arrived with MariaDB 11.4.
Actually, this feature has been available since MariaDB 11.2 IR (June 2023).

Unfortunately, the announcement of this feature was a little too brief. The MariaDB release notes say succinctly:

The InnoDB system tablespace is now shrunk by reclaiming unused space at startup (MDEV-14795)

From the MariaDB 11.2.0 Release Notes.

The reasons why this file can grow immeasurably have actually been known for a long time and the measures against it are also clear (see literature). But again and again we see MariaDB users out in the field who didn't realise the problem or realised it too late and are now left with an ibdata1 file that is far too large...
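To see whether an instance is affected at all, you can check which user tables still reside in the system tablespace (space 0). A possible query (the Information Schema view is called INNODB_SYS_TABLES in MariaDB; the filter on internal SYS_ tables is only there to reduce noise):

-- Tables located in the InnoDB system tablespace (SPACE = 0)
SELECT name
  FROM information_schema.INNODB_SYS_TABLES
 WHERE space = 0
   AND name NOT LIKE 'SYS_%'
;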

How can the problem be provoked?

The problem can be provoked by creating a table in the system tablespace:

SQL> SET global innodb_file_per_table = off;

SQL> CREATE TABLE `test` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(128) DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

and then filling it with data:

SQL> INSERT INTO test SELECT NULL, 'Some data to provoke huge data growth in system tablespace', NOW() ;
SQL> INSERT INTO test SELECT NULL, 'Some data to provoke huge data growth in system tablespace', NOW() FROM test LIMIT 1000000 ;
...

While the table is being filled, you can observe how the file ibdata1 swells on the file system:

$ while [ 1 ] ; do ll -h ibdata1 ; sleep 5 ; done
-rw-rw---- 1 mysql mysql  12M Jun  2 13:57 ibdata1
-rw-rw---- 1 mysql mysql  76M Jun 12 13:57 ibdata1
-rw-rw---- 1 mysql mysql  76M Jun 12 13:57 ibdata1
-rw-rw---- 1 mysql mysql 140M Jun 12 13:58 ibdata1
-rw-rw---- 1 mysql mysql 204M Jun 12 13:58 ibdata1
-rw-rw---- 1 mysql mysql 268M Jun 12 13:58 ibdata1
-rw-rw---- 1 mysql mysql 332M Jun 12 13:59 ibdata1
-rw-rw---- 1 mysql mysql 396M Jun 12 13:59 ibdata1
-rw-rw---- 1 mysql mysql 460M Jun 12 13:59 ibdata1
-rw-rw---- 1 mysql mysql 524M Jun 12 13:59 ibdata1
-rw-rw---- 1 mysql mysql 588M Jun 12 13:59 ibdata1
-rw-rw---- 1 mysql mysql 652M Jun 12 13:59 ibdata1
-rw-rw---- 1 mysql mysql 716M Jun 12 13:59 ibdata1
-rw-rw---- 1 mysql mysql 780M Jun 12 14:00 ibdata1
-rw-rw---- 1 mysql mysql 844M Jun 12 14:00 ibdata1
-rw-rw---- 1 mysql mysql 908M Jun 12 14:00 ibdata1
-rw-rw---- 1 mysql mysql 972M Jun 12 14:00 ibdata1
-rw-rw---- 1 mysql mysql 1.1G Jun 12 14:00 ibdata1
-rw-rw---- 1 mysql mysql 1.2G Jun 12 14:00 ibdata1
-rw-rw---- 1 mysql mysql 1.3G Jun 12 14:00 ibdata1
-rw-rw---- 1 mysql mysql 1.4G Jun 12 14:00 ibdata1

If the ibdata1 file is large enough, you can move the table from the system tablespace to a dedicated tablespace:

SQL> SET global innodb_file_per_table = on;

SQL> ALTER TABLE test.test FORCE;
Query OK, 0 rows affected (33.764 sec)

And you can see how the new file is created:

$ ll -h ibdata1 test/*
-rw-rw---- 1 mysql mysql 1.4G Jun 12 14:01 ibdata1
-rw-rw---- 1 mysql mysql 1.1K Jun 12 14:01 test/#sql-alter-dca30-12.frm
-rw-rw---- 1 mysql mysql 696M Jun 12 14:01 test/#sql-alter-dca30-12.ibd
-rw-rw---- 1 mysql mysql 1.1K Jun 12 13:56 test/test.frm

-rw-rw---- 1 mysql mysql 1.4G Jun 12 14:01 ibdata1
-rw-rw---- 1 mysql mysql 1.1K Jun 12 14:01 test/test.frm
-rw-rw---- 1 mysql mysql 1.4G Jun 12 14:02 test/test.ibd

So now we have the data only once, but twice as much disk space is used.

And how can you make the system tablespace smaller again?

Unfortunately, this information is somewhat hidden and has to be gathered from the documentation and the MariaDB Jira issues (see literature):

SQL> SET GLOBAL innodb_fast_shutdown=0;
SQL> SHUTDOWN;

When shutting down, you can see the corresponding entries in the MariaDB error log:

[Note] bin/mariadbd (initiated by: root[root] @ localhost []): Normal shutdown
[Note] InnoDB: FTS optimize thread exiting.
[Note] InnoDB: Truncating system tablespace from 90880 to 768 pages
[Note] InnoDB: System tablespace truncated successfully
[Note] InnoDB: Starting shutdown...
[Note] InnoDB: Dumping buffer pool(s) to /home/mysql/database/mariadb-114/data/ib_buffer_pool
[Note] InnoDB: Restricted to 2016 pages due to innodb_buf_pool_dump_pct=25
[Note] InnoDB: Buffer pool(s) dump completed at 240612 14:11:11
[Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
[Note] InnoDB: Shutdown completed; log sequence number 4011132308; transaction id 139
[Note] bin/mariadbd: Shutdown complete

And if you look at the file ibdata1 on disc afterwards, it is as small as it was at the beginning of the experiment:

$ ll ibdata1* -h
-rw-rw---- 1 mysql mysql 12M Jun 12 14:11 ibdata1
Literature

See also:


Taxonomy upgrade extras: innodb, tablespace, ibdata1


FromDual Ops Center 1.2.2 for MariaDB and MySQL databases has been released

Shinguz - Mon, 2024-04-29 17:41

FromDual is pleased to announce the release of the new version 1.2.2 of the popular FromDual Ops Center focmm, a Graphical User Interface (GUI) for MariaDB, MySQL and compatible databases.

The FromDual Ops Center for MariaDB and MySQL (focmm) helps DBAs and system administrators to better manage their MariaDB and MySQL databases and Galera Cluster farms. Ops Center makes life easier for DBAs and Admins!

The main purpose of the Ops Center is to help you with your daily MariaDB and MySQL operations. More information about FromDual Ops Center can be found in the General Information section.

Download now

The new FromDual Ops Center for MariaDB and MySQL (focmm) can be downloaded from our download page. How to install and use focmm is documented in the Ops Center User Guide.

In the unlikely event that you find a bug in the FromDual Ops Center for MariaDB and MySQL please report it to the FromDual bug tracker or just send us an email.

Any feedback, comments and testimonials are also welcome! Please send them to feedback@fromdual.com.

Installing Ops Center 1.2.2

Information on how to install FromDual Ops Center can be found in the Ops Center User Guide.

Upgrading from 0.9.x to 1.2.2

The upgrade from 0.9.x to 1.2.2 should be automatic. Please make a backup of your Ops Center database instance before upgrading! See also Upgrading.

Important: In some cases the directory focmm/tmp/start_jobs.lock is missing. In this case jobs will not start. Please check the log file under focmm/log/start_jobs.log and create the folder accordingly. There should also be a file called pid in this folder. Create this file if it does not exist.

Downgrading from 1.2.2

Downgrading from 1.2.2 to 1.2.1 should work.

Changes in Ops Center 1.2.2

Machine
  • Machine information is automatically gathered after first successful check now.
  • Machine function calls optimized.
  • Ssh key check added for machine.
  • Some tests around machine library added.
  • Function createPublicKey + page refactored.
  • Copy of ssh key and ssh command implemented.
  • Skip machines which are down during gathering resource costs.
  • Error message in pricing when adding a new machine was fixed.

Instance
  • Instance operations are now faster when instance is running but not reachable.
  • Node check improved in cluster and instance, works now also correctly on multi instance environments.
  • PID file is gathered now.
  • Stop instance on instance operations improved if a Galera cluster node.
  • Instance is now checked immediately after start or restart and checks are set to failed after instance stop. So the instance state is more appropriate and in time now.
  • Link added to read_only variable comment.
  • Instance is shown orange when ping check was not run yet. This is fixed now.
  • Error log gathering improved in instance operations.
  • If mysql_upgrade_info file is missing error is suppressed now.
  • Gather instance information is done now right after the first successful check.
  • Check database_ping was removed from general instance tests to avoid noise in the database error log.
  • Check instance_ping_check removed from instanceTest to avoid error log messages on a regular base, wondering what kind of impact this has...
  • innodb_log_writer_threads rule added to instance configuration.
  • Target connect function error logging improved.
  • Continue button is handled correctly now when stopping, starting or restarting an instance.
  • Instance operations location of error log is shown now.
  • Refactored instance operations code for moving Galera buttons out.
  • Function refreshInstance refactored because we need this information for Galera Cluster.
  • Unknown array_keys error message fixed in instance.
  • Foreign MariaDB/MySQL repositories should now be considered as well.
  • Optimized instance handling.
  • Instance error log handling can now handle systemd/journalctl.
  • Instance hardening when creating instance improved.
  • Create instance improved.
  • Create instance bug fixed.
  • Instance create on Rocky 9, fixed.
  • Create instance for Rocky 8 improved.
  • Check for restart also removed, so also restart works now.
  • Check removed to allow a running but not reachable instance to be restarted, restart and start button added.
  • Non existing configuration was not handled correctly, fixed.
  • Edit instance placeholder added and default instance name changed from mysqld to mariadbd.
  • Create instance output made nicer in case of failure.

Cluster
  • Cluster overview signal lights are also working now for Galera.
  • Cluster operations returns faster now if machine is not reachable.
  • Cluster show checks non important information removed.
  • Cluster checks in menu are now shown correctly.
  • grastate.dat async check added.
  • Cluster check made async.
  • Instance state added to cluster operations and check view.
  • M/S cluster tests made more robust against strange situations.
  • Some bugs in M/S cluster fixed.
  • focmm user passwords should sync in galera.
  • Bootstrap should distribute focmm user on all nodes.
  • Bootstrap on Debian fixed again.
  • Error message for missing grastate.dat file is suppressed now.
  • Deploy configuration button is now on a new line.
  • 2 little bugs in M/S cluster operations removed.
  • Instance operations improved with Galera clusters.
  • Soft bootstrap implemented.
  • Cluster node bootstrap, start stop fixed.
  • Function bootstrap implemented.
  • Async rolling cluster restart added.
  • Problem with rolling restart fixed.
  • Cluster check and operations are shown red if not OK or failed (same behaviour as before!).
  • Rolling cluster restart and other buttons added to cluster operations.
  • Variable wsrep_node_name is set to instance name per default.
  • Galera cluster config deployment made smoother and fixed bug.
  • Sort order of cluster nodes and title of table clarified.
  • Page start/stop notices added to cluster.php.
  • Indention problem in function getClusterChecks fixed.

Load Balancer
  • Terms load-balancer and loadbalancer replaced by load balancer.
  • User radmin added for ProxySQL monitoring.
  • Socket default for HAproxy changed to the new standards.

Virtual IP (VIP)/Floating IP
  • Status grey image does not exist in vip_show_checks, fixed.

Tools
  • Crontab: Missing library in check_instances.php added.
  • Crontab: html tags removed in mail.
  • Crontab: Error messages and error handling in crontab and start jobs improved, lock file directory is created automatically now.

Configuration
  • No changes.
Database-as-a-Service (DBaaS)
  • No changes.

Building and Packaging
  • Build moved to MyEnv project.
  • Missing Rocky Linux branch added.
  • Package redhat-lsb-core does not exist on Rocky 9 any more, spec file adapted.
  • Package build test fixed.

Themes / UI
  • jquery updated from 3.6.3 to 3.7.1

General
  • Modification for license key automation added.
  • Package installation ssh-keygen is only called if key does not exist. So upgrade should work without error/warning.
  • Some tests fixed, bug in createPublicKey fixed.
  • Function listRemoteDirectory does not write needlessly error messages any more.
  • HTML tag placeholder added for add function; add object and save object made more robust against errors.
  • Variable tx_isolation, which was deprecated in MySQL 5.7 and MariaDB 11.2, replaced by transaction_isolation.
  • Code clean-up.
  • Some menu items opened a new tab which was not intended. Fixed.
  • Field naming made consistent.
  • Error handling and error messages improved.
  • Copyright year updated from 2023 to 2024.
  • Message downgraded from ERROR to DEBUG.
  • Function runRemoteCommand will return an error code again when going_back_to functionality implemented.
  • Function renderDropDown newline was removed.
  • Function renderSubmitButton accepts a title now.
  • Function runRemoteCommand can suppress error logging now.
  • Command apt-get replaced by apt.
  • Distribution information refactored.
  • Function whoami cleaned-up.
  • PHP 8.0 function calls removed again because of Debian 10/11 support.
  • User interaction made unique among all 5 different objects.
  • Redundant DEBUG information removed.
  • Configuration file was removed again when not needed any more (clean-up).
  • Command yum replaced by dnf, also apt-get partly.
  • Command which replaced by type -p to make it Rocky 9 compliant.
  • Some more lsb_release stuff removed.
  • myEnv.inc library updated, lsb_release removed.
  • Dead code message added.

Repository
  • No changes.

Documentation
  • Version changed from 1.2.0 to 1.2.1 and TLS conflict on Ubuntu 22.04 fixed.
  • Build automatized.
  • Documentation for M/S updated.
  • All text taken over from CMS.
  • Load balancer documentation added.

Taxonomy upgrade extras: Operations, release, FromDual Ops Center, ops center, dbaas, focmm

FromDual Ops Center 1.2.2 for MariaDB and MySQL databases has been released

Shinguz - Mon, 2024-04-29 17:41

FromDual is pleased to announce the release of the new version 1.2.2 of the popular FromDual Ops Center focmm, a Graphical User Interface (GUI) for MariaDB, MySQL and compatible databases.

The FromDual Ops Center for MariaDB and MySQL (focmm) helps DBAs and system administrators to better manage their MariaDB and MySQL databases and Galera Cluster farms. Ops Center makes life easier for DBAs and Admins!

The main purpose of the Ops Center is to help you with your daily MariaDB and MySQL operations. More information about FromDual Ops Center can be found in the General Information section.

Download now

The new FromDual Ops Center for MariaDB and MySQL (focmm) can be downloaded from our download page. How to install and use focmm is documented in the Ops Center User Guide.

In the unlikely event that you find a bug in the FromDual Ops Center for MariaDB and MySQL please report it to the FromDual bug tracker or just send us an email.

Any feedback, comments and testimonials are also welcome! Please send them to feedback@fromdual.com.

Installing Ops Center 1.2.2

Information on how to install FromDual Ops Center can be found in the Ops Center User Guide.

Upgrading from 0.9.x to 1.2.2

The upgrade from 0.9.x to 1.2.2 should be automatic. Please make a backup of your Ops Center database instance before upgrading! See also Upgrading.

Important: In some cases the directory focmm/tmp/start_jobs.lock is missing. In this case jobs will not start. Please check the log file under focmm/log/start_jobs.log and create the folder accordingly. There should also be a file called pid in this folder. Create this file if it does not exist.

Downgrading from 1.2.2

Downgrading from 1.2.2 to 1.2.1 should work.

Changes in Ops Center 1.2.2 Machine
  • Machine information is automatically gathered after first successful check now.
  • Machine function calls optimized.
  • Ssh key check added for machine.
  • Some tests around machine library added.
  • Function createPublicKey + page refactored.
  • Copy of ssh key and ssh command implemented.
  • Skip machines which are down during gathering resource costs.
  • Error message in pricing when adding a new machine was fixed.

Instance
  • Instance operations are now faster when instance is running but not reachable.
  • Node check improved in cluster and instance, works now also correctly on multi instance environments.
  • PID file is gathered now.
  • Stop instance on instance operations improved if a Galera cluster node.
  • Instance is now checked immediately after start or restart and checks are set to failed after instance stop. So the instance state is more appropriate and in time now.
  • Link added to read_only variable comment.
  • Instance is shown orange when ping check was not run yet. This is fixed now.
  • Error log gathering improved in instance operations.
  • If mysql_upgrade_info file is missing error is suppressed now.
  • Gather instance information is done now right after the first successful check.
  • Check database_ping was removed from general instance tests to avoid noise in the database error log.
  • Check instance_ping_check removed from instanceTest to avoid error log messages on a regular base, wondering what kind of impact this has...
  • innodb_log_writer_threads rule added to instance configuration.
  • Target connect function error logging improved.
  • Continue button is handled correctly now when stopping, starting or restarting an instance.
  • Instance operations location of error log is shown now.
  • Refactored instance operations code for moving Galera buttons out.
  • Function refreshInstance refactored because we need this information for Galera Cluster.
  • Unknown array_keys error message fixed in instance.
  • Foreign MariaDB/MySQL repositories should now be considered as well.
  • Optimized instance handling.
  • Instance error log handling can now handle systemd/journalctl.
  • Instance hardening when creating instance improved.
  • Create instance improved.
  • Create instance bug fixed.
  • Instance create on Rocky 9, fixed.
  • Create instance for Rocky 8 improved.
  • Check for restart also removed, so also restart works now.
  • Check removed to allow a running but not reachable instance to be restarted, restart and start button added.
  • Non existing configuration was not handled correctly, fixed.
  • Edit instance placeholder added and default instance name changed from mysqld to mariadbd.
  • Create instance output made nicer in case of failure.

Cluster
  • Cluster overview signal lights are also working now for Galera.
  • Cluster operations returns faster now if machine is not reachable.
  • Cluster show checks non important information removed.
  • Cluster checks in menu are now shown correctly.
  • grastate.dat async check added.
  • Cluster check made async.
  • Instance state added to cluster operations and check view.
  • M/S cluster tests made more robust against strange situations.
  • Some bugs in M/S cluster fixed.
  • focmm user passwords should sync in galera.
  • Bootstrap should distribute focmm user on all nodes.
  • Bootstrap on Debian fixed again.
  • Error message for missing grastate.dat file is suppressed now.
  • Deploy configuration button is now on a new line.
  • 2 little bugs in M/S cluster operations removed.
  • Instance operations improved with Galera clusters.
  • Soft bootstrap implemented.
  • Cluster node bootstrap, start stop fixed.
  • Function bootstrap implemented.
  • Async rolling cluster restart added.
  • Problem with rolling restart fixed.
  • Cluster check and operations is read if not OK. Failed, same behaviour as before!
  • Rolling cluster restart and other buttons added to cluster operations.
  • Variable wsrep_node_name is set to instance name per default.
  • Galera cluster config deployment made smoother and fixed bug.
  • Sort order of cluster nodes and title of table clarified.
  • Page start/stop notices added to cluster.php.
  • Indentation problem in function getClusterChecks fixed.

Load Balancer
  • Terms load-balancer and loadbalancer replaced by load balancer.
  • User radmin added for ProxySQL monitoring.
  • Socket default for HAProxy changed to the new standards.

Virtual IP (VIP)/Floating IP
  • Status grey image does not exist in vip_show_checks, fixed.

Tools
  • Crontab: Missing library in check_instances.php added.
  • Crontab: html tags removed in mail.
  • Crontab: Error messages and error handling in crontab and start jobs improved, lock file directory is created automatically now.

Configuration
  • No changes.

Database-as-a-Service (DBaaS)
  • No changes.

Building and Packaging
  • Build moved to MyEnv project.
  • Missing Rocky Linux branch added.
  • Package redhat-lsb-core does not exist on Rocky 9 any more, spec file adapted.
  • Package build test fixed.

Themes / UI
  • jQuery updated from 3.6.3 to 3.7.1.

General
  • Modification for license key automation added.
  • Package installation: ssh-keygen is only called if the key does not exist, so upgrades should work without errors/warnings.
  • Some tests fixed, bug in createPublicKey fixed.
  • Function listRemoteDirectory does not needlessly write error messages any more.
  • HTML tag placeholder added for the add function; add object and save object made more robust against errors.
  • Variable tx_isolation, which was deprecated in MariaDB 11.2 and MySQL 5.7, replaced by transaction_isolation.
  • Code clean-up.
  • Some menu items opened a new tab which was not intended. Fixed.
  • Field naming made consistent.
  • Error handling and error messages improved.
  • Copyright year updated from 2023 to 2024.
  • Message downgraded from ERROR to DEBUG.
  • Function runRemoteCommand returns an error code again; going_back_to functionality implemented.
  • Function renderDropDown: newline was removed.
  • Function renderSubmitButton accepts a title now.
  • Function runRemoteCommand can suppress error logging now.
  • Command apt-get replaced by apt.
  • Distribution information refactored.
  • Function whoami cleaned-up.
  • PHP 8.0 function calls removed again because of Debian 10/11 support.
  • User interaction made unique among all 5 different objects.
  • Redundant DEBUG information removed.
  • Configuration file was removed again when not needed any more (clean-up).
  • Command yum replaced by dnf; apt-get also partly replaced.
  • Command which replaced by type -p to make it Rocky 9 compliant.
  • Some more lsb_release stuff removed.
  • myEnv.inc library updated, lsb_release removed.
  • Dead code message added.

Repository
  • No changes.

Documentation
  • Version changed from 1.2.0 to 1.2.1 and TLS conflict on Ubuntu 22.04 fixed.
  • Build automatized.
  • Documentation for M/S updated.
  • All text taken over from CMS.
  • Load balancer documentation added.

Taxonomy upgrade extras: Operations, release, FromDual Ops Center, ops center, dbaas, focmm

dbstat for MariaDB after one month of productive use

Shinguz - Fri, 2024-04-26 14:13
Review

After we introduced dbstat for MariaDB (and MySQL) a good 5 weeks ago, we naturally also rolled it out on our systems to test the behaviour in daily use (eat your own dog food).

This went quite well until we came up with the idea of activating dbstat on the passive dbstat node on our MariaDB active/passive master/master replication cluster (a similar situation would also occur with a Galera cluster). We realised that the design of dbstat still had potential. After this problem was fixed (v0.0.2 and v0.0.3) and the problem of how to activate events on master AND slave was solved (MDEV-33782: Event is always disabled on slave), everything seemed fine at first glance. Unfortunately, we did not realise that the data also had to be adjusted. As a result, our replication came to a complete stop over the Easter holidays, which then led to another problem when catching up (MDEV-33923: MariaDB parallel replication causes Foreign Key errors).

After this minor incident was also resolved, dbstat has been running flawlessly on our MariaDB master/master replication cluster ever since... The product dbstat is open source (GPLv2) and can be downloaded from GitHub.

One month later

Databases should NOT grow over time but only with the number of {customers, products, etc.} once the desired equilibrium (steady state) is reached. In our dbstat installation, we have set this equilibrium state to 30 days. So it is now time that the size of dbstat stabilises and the database stops growing...
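Such a 30-day equilibrium is typically enforced by a time-based purge of old rows. Here is a minimal sketch only: the real body of the dbstat purge_global_status event is not shown in this article, table and column names simply follow the queries used further down:

DELETE FROM `global_status`
 WHERE `ts` < NOW() - INTERVAL 30 DAY
 LIMIT 10000;  -- purge in chunks so the clean-up does not create one huge transaction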

It would also be interesting to understand what practical use dbstat has. That is why we have now set to work and are trying to analyse the results of dbstat.

Here is an overview of the 11 currently running database events:

SQL> SELECT db, name, definer, CONCAT(interval_value, ' ', interval_field) AS 'interval' , last_executed, ends, status FROM mysql.event ORDER BY db, name ASC ; +--------+-------------------------+------------------+----------+---------------------+------+---------+ | db | name | definer | interval | last_executed | ends | status | +--------+-------------------------+------------------+----------+---------------------+------+---------+ | dbstat | gather_global_status | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:14 | NULL | ENABLED | | dbstat | gather_global_variables | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:32 | NULL | ENABLED | | dbstat | gather_metadata_lock | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:47 | NULL | ENABLED | | dbstat | gather_processlist | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:28 | NULL | ENABLED | | dbstat | gather_table_size | dbstat@localhost | 1 DAY | 2024-04-24 00:04:00 | NULL | ENABLED | | dbstat | gather_trx_and_lck | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:35 | NULL | ENABLED | | dbstat | purge_global_status | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:08 | NULL | ENABLED | | dbstat | purge_metadata_lock | dbstat@localhost | 5 MINUTE | 2024-04-24 07:44:37 | NULL | ENABLED | | dbstat | purge_processlist | dbstat@localhost | 1 MINUTE | 2024-04-24 07:43:58 | NULL | ENABLED | | dbstat | purge_table_size | dbstat@localhost | 5 MINUTE | 2024-04-24 07:40:04 | NULL | ENABLED | | dbstat | purge_trx_and_lck | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:45 | NULL | ENABLED | +--------+-------------------------+------------------+----------+---------------------+------+---------+
Size of the tables

Firstly, the growth of dbstat itself is interesting. But of course this evaluation can also be carried out for any other database, table or catalogue (coming in MariaDB 11.7?):

SQL> SET SESSION sql_mode='STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,only_full_group_by'; SQL> SET @machine_name = @@hostname; SQL> SELECT `table_schema`, SUBSTR(`ts`, 1, 10) AS date , ROUND(SUM(`data_length`)/1024/1024, 1) AS data_mb , ROUND(SUM(`index_length`)/1024/1024, 1) AS index_mb , ROUND(SUM(`data_free`)/1024/1024, 1) AS free_mb , ROUND((SUM(`data_length`) + SUM(`index_length`) + SUM(`data_free`))/1024/1024, 1) AS total_mb , ROUND(SUM(`table_rows`)/1000/1000, 1) AS rows_m FROM `table_size` WHERE `machine_name` = @machine_name AND `table_catalog` = 'def' AND `table_schema` = 'dbstat' GROUP BY `table_catalog`, `table_schema`, `date` ORDER BY `table_catalog`, `table_schema`, `date` ASC ; +--------------+------------+---------+----------+---------+----------+--------+ | table_schema | date | data_mb | index_mb | free_mb | total_mb | rows_m | +--------------+------------+---------+----------+---------+----------+--------+ | dbstat | 2024-03-26 | 762.8 | 1128.6 | 18.0 | 1909.4 | 10.9 | | dbstat | 2024-03-27 | 835.8 | 1241.6 | 17.0 | 2094.4 | 11.1 | | dbstat | 2024-03-28 | 837.8 | 1241.6 | 14.0 | 2093.4 | 11.8 | | dbstat | 2024-03-29 | 960.7 | 1443.6 | 18.0 | 2422.4 | 14.2 | | dbstat | 2024-03-30 | 960.7 | 1443.6 | 17.0 | 2421.4 | 15.0 | | dbstat | 2024-03-31 | 1057.7 | 1604.6 | 20.0 | 2682.4 | 16.9 | | dbstat | 2024-04-01 | 1057.7 | 1602.6 | 21.0 | 2681.4 | 17.6 | | dbstat | 2024-04-02 | 1172.7 | 1797.6 | 22.0 | 2992.3 | 17.8 | | dbstat | 2024-04-03 | 1442.8 | 2333.7 | 12.0 | 3788.5 | 22.8 | | dbstat | 2024-04-04 | 1649.8 | 2723.7 | 13.0 | 4386.5 | 24.4 | | dbstat | 2024-04-05 | 1649.8 | 2722.7 | 14.0 | 4386.5 | 26.0 | | dbstat | 2024-04-06 | 1821.8 | 3034.8 | 13.0 | 4869.6 | 24.6 | | dbstat | 2024-04-07 | 1821.8 | 3034.8 | 14.0 | 4870.6 | 26.2 | | dbstat | 2024-04-08 | 1989.9 | 3344.8 | 12.0 | 5346.6 | 29.9 | | dbstat | 2024-04-09 | 1990.9 | 3343.8 | 14.0 | 5348.6 | 31.5 | | dbstat | 2024-04-10 | 2193.9 | 3712.8 | 13.0 | 5919.7 | 31.6 | | dbstat | 2024-04-11 | 2193.9 | 3712.8 | 15.0 | 5921.7 | 31.1 | | dbstat | 2024-04-12 | 2405.8 | 4119.1 | 12.0 | 6537.0 | 34.9 | | dbstat | 2024-04-13 | 2405.8 | 4119.1 | 14.0 | 6538.9 | 35.7 | | dbstat | 2024-04-14 | 2480.8 | 4278.9 | 15.0 | 6774.8 | 36.2 | | dbstat | 2024-04-15 | 2560.8 | 4443.7 | 12.0 | 7016.5 | 37.5 | | dbstat | 2024-04-16 | 2560.8 | 4443.7 | 12.0 | 7016.5 | 38.2 | | dbstat | 2024-04-17 | 2640.8 | 4610.6 | 18.0 | 7269.4 | 38.5 | | dbstat | 2024-04-18 | 2640.9 | 4611.6 | 14.0 | 7266.5 | 39.7 | | dbstat | 2024-04-19 | 2743.9 | 4826.5 | 14.0 | 7584.3 | 36.9 | | dbstat | 2024-04-20 | 2826.9 | 4995.5 | 14.0 | 7836.4 | 38.3 | | dbstat | 2024-04-21 | 2830.9 | 4997.4 | 18.0 | 7846.3 | 39.2 | | dbstat | 2024-04-22 | 2919.9 | 5177.4 | 14.0 | 8111.3 | 43.2 | | dbstat | 2024-04-23 | 2923.0 | 5177.3 | 16.0 | 8116.3 | 44.1 | | dbstat | 2024-04-24 | 3020.0 | 5376.3 | 16.0 | 8412.3 | 41.0 | | dbstat | 2024-04-25 | 3024.0 | 5377.3 | 17.0 | 8418.3 | 40.9 | +--------------+------------+---------+----------+---------+----------+--------+

If you take the disc space in the O/S for comparison:

# du -shc *.ibd 8.6G global_status.ibd 308K global_variables.ibd 692K metadata_lock.ibd 97M processlist.ibd 18M table_size.ibd 212K trx_and_lck.ibd 8.7G total

you can see that the values from the database are approximately correct: 8418 Mbyte according to table_size versus 8.7 Gbyte (about 8900 Mbyte) on disk, i.e. an error of roughly 5%...

Important: The database dbstat reaches a size of approx. 9 Gbyte after approx. one month on a not particularly large database system.

You can also see that the size of the database is only just stabilising:

If you want to know more precisely which tables are responsible for which part of the data volume, you can also zoom in or drill down into the data:

SQL> SELECT `table_name`, SUBSTR(`ts`, 1, 10) AS date , ROUND(`data_length`/1024/1024, 1) AS data_mb , ROUND(`index_length`/1024/1024, 1) AS index_mb , ROUND(`data_free`/1024/1024, 1) AS free_mb , ROUND((`data_length` + `index_length` + `data_free`)/1024/1024, 1) AS total_mb , ROUND((`data_length` + `index_length` + `data_free`)/1024/1024/8418.26*100, 1) AS pct , ROUND(`table_rows`/1000/1000, 1) AS rows_m FROM `table_size` WHERE `machine_name` = @machine_name AND `table_catalog` = 'def' AND `table_schema` = 'dbstat' AND SUBSTR(`ts`, 1, 10) = CURRENT_DATE() ORDER BY rows_m DESC ; +------------------+------------+---------+----------+---------+----------+------+--------+ | table_name | date | data_mb | index_mb | free_mb | total_mb | pct | rows_m | +------------------+------------+---------+----------+---------+----------+------+--------+ | global_status | 2024-04-25 | 2949.9 | 5356.9 | 5.0 | 8311.8 | 98.7 | 40.4 | | processlist | 2024-04-25 | 68.2 | 17.1 | 7.0 | 92.2 | 1.1 | 0.4 | | global_variables | 2024-04-25 | 0.1 | 0.1 | 0.0 | 0.2 | 0.0 | 0.0 | | metadata_lock | 2024-04-25 | 0.4 | 0.2 | 0.0 | 0.6 | 0.0 | 0.0 | | table_size | 2024-04-25 | 5.4 | 3.1 | 5.0 | 13.5 | 0.2 | 0.0 | | trx_and_lck | 2024-04-25 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +------------------+------------+---------+----------+---------+----------+------+--------+

Note: Please excuse the non-use of the window function!
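For the curious: the hard-coded total of 8418.26 Mbyte in the pct column could be replaced by a window function. A sketch only, assuming MariaDB 10.2 or newer:

SELECT `table_name`
     , ROUND((`data_length` + `index_length` + `data_free`)/1024/1024, 1) AS total_mb
     , ROUND((`data_length` + `index_length` + `data_free`)
           / SUM(`data_length` + `index_length` + `data_free`) OVER () * 100, 1) AS pct
  FROM `table_size`
 WHERE `machine_name` = @machine_name
   AND `table_catalog` = 'def'
   AND `table_schema` = 'dbstat'
   AND SUBSTR(`ts`, 1, 10) = CURRENT_DATE()
 ORDER BY total_mb DESC
;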

The only real driver for the data volume of this database is the global_status table. This is to be expected (see the quantity structure of dbstat).

SQL> SELECT SUBSTR(ts, 1, 10) AS date, table_rows/1000/1000 AS k_rows , ROUND(data_length/1024/1024, 1) AS data_mb, ROUND(index_length/1024/1024, 1) AS index_mb, ROUND(data_free/1024/1024, 1) AS free_mb , ROUND((data_length + index_length + data_free)/1024/1024, 1) AS total_mb FROM table_size WHERE `machine_name` = @machine_name AND `table_catalog` = 'def' AND `table_schema` = 'dbstat' AND table_name = 'global_status' AND ts > DATE_SUB(CURRENT_DATE, INTERVAL 10 DAY) ; +------------+-------------+---------+----------+---------+----------+ | date | k_rows | data_mb | index_mb | free_mb | total_mb | +------------+-------------+---------+----------+---------+----------+ | 2024-04-15 | 37.13876300 | 2512.9 | 4433.0 | 4.0 | 6949.9 | | 2024-04-16 | 37.94217200 | 2512.9 | 4433.0 | 4.0 | 6949.9 | + 0M | 2024-04-17 | 38.19867500 | 2592.9 | 4600.0 | 7.0 | 7199.9 | + 250M | 2024-04-18 | 39.39108500 | 2592.9 | 4600.0 | 5.0 | 7197.9 | - 2M | 2024-04-19 | 36.52539600 | 2691.9 | 4813.0 | 5.0 | 7509.8 | + 312M | 2024-04-20 | 37.99073500 | 2770.9 | 4980.9 | 6.0 | 7757.8 | + 248M | 2024-04-21 | 38.79420200 | 2770.9 | 4980.9 | 7.0 | 7758.8 | + 1M | 2024-04-22 | 42.82606200 | 2855.9 | 5158.9 | 6.0 | 8020.8 | + 263M | 2024-04-23 | 43.62953000 | 2855.9 | 5158.9 | 7.0 | 8021.8 | + 1M | 2024-04-24 | 40.54342200 | 2949.9 | 5356.9 | 7.0 | 8313.8 | + 292M | 2024-04-25 | 40.43067700 | 2949.9 | 5356.9 | 5.0 | 8311.8 | - 2M +------------+-------------+---------+----------+---------+----------+

Note: Sorry, I should really familiarise myself with the window functions...
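The manually annotated daily deltas in the last column could also be calculated with a window function. Again a sketch only, assuming MariaDB 10.2 or newer:

SELECT SUBSTR(`ts`, 1, 10) AS date
     , ROUND((`data_length` + `index_length` + `data_free`)/1024/1024, 1) AS total_mb
     , ROUND(( (`data_length` + `index_length` + `data_free`)
             - LAG(`data_length` + `index_length` + `data_free`) OVER (ORDER BY `ts`) )/1024/1024, 1) AS delta_mb
  FROM `table_size`
 WHERE `machine_name` = @machine_name
   AND `table_catalog` = 'def'
   AND `table_schema` = 'dbstat'
   AND `table_name` = 'global_status'
   AND `ts` > DATE_SUB(CURRENT_DATE, INTERVAL 10 DAY)
 ORDER BY `ts` ASC
;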

If we analyse the data a bit more closely, we see that the number of rows has slowly stabilised over the last 4 days (note: table_rows is calculated, probably from the number of blocks and the average row length, and is not an exact value). However, the "amount of data" has continued to increase until yesterday, which is probably due to the fragmentation of the tables and indexes...
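If this fragmentation ever became a real problem, the affected table could be rebuilt. This is a general InnoDB housekeeping note, not something we actually did here:

-- Rebuilds the table and its indexes and reclaims the fragmented space.
-- Caution: takes time and I/O on a table of several Gbyte.
OPTIMIZE TABLE `dbstat`.`global_status`;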

The primary key of the global_status table was chosen to optimise data locality:

PRIMARY KEY (`machine_name`,`variable_name`,`ts`),
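With this column order, all rows of one status variable of one machine are stored physically close together, so a typical time-series read only touches a few pages. A sketch of such a read (the variable name is just an example; a variable_value column analogous to the global_variables table is assumed):

SELECT `ts`, `variable_value`
  FROM `global_status`
 WHERE `machine_name` = @machine_name
   AND `variable_name` = 'Innodb_row_lock_waits'
   AND `ts` > NOW() - INTERVAL 1 DAY
 ORDER BY `ts` ASC
;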

The situation should calm down in the next few days. In 2 to 4 weeks we will have to check the situation again.

Summary: I would say that this feature fulfils the requirements and helps to understand the data growth.

List of processes

Since we do not have any serious load issues in our databases, this feature is not that interesting in our case. For example, we can see what a (persistent) connection has done:

SQL> SELECT connection_id, ts, command, time, state, SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64) FROM processlist WHERE machine_name = @machine_name AND command != 'Sleep' AND connection_id = @connection_id AND state NOT IN ( 'Waiting for next activation' , 'Master has sent all binlog to slave; waiting for more updates' , 'Waiting for master to send event' , 'Slave has read all relay log; waiting for more updates' ) ORDER BY ts ASC ; +---------------+---------------------+---------+-------+----------------+----------------------------------------------------------------------+ | connection_id | ts | command | time | state | SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64) | +---------------+---------------------+---------+-------+----------------+----------------------------------------------------------------------+ | 18 | 2024-04-17 12:30:28 | Query | 0.029 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-17 14:58:28 | Query | 0.009 | Writing to net | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-18 06:24:28 | Query | 0.003 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-18 11:34:28 | Query | 0.030 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-18 16:39:28 | Query | 0.006 | Sending data | select itemid,functionid,name,parameter,triggerid from functions | | 18 | 2024-04-18 19:12:28 | Query | 0.014 | Sending data | select triggerid,description,expression,error,priority,type,valu | | 18 | 2024-04-18 21:49:28 | Query | 0.004 | Writing to net | select i.itemid,i.hostid,i.templateid from items i inner join ho | | 18 | 2024-04-19 00:21:28 | Query | 0.032 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-19 02:59:28 | Query | 0.017 | Writing to net | select triggerid,description,expression,error,priority,type,valu | | 18 | 2024-04-19 05:39:28 | Query | 0.052 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-19 08:19:28 | Query | 0.000 | Statistics | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-19 13:26:28 | Query | 0.075 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-19 15:57:28 | Query | 0.027 | Writing to net | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-19 18:33:28 | Query | 0.010 | Sending data | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-19 21:10:28 | Query | 0.008 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-19 23:50:28 | Query | 0.067 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 02:28:28 | Query | 0.008 | Sending data | select triggerid,description,expression,error,priority,type,valu | | 18 | 2024-04-20 05:08:28 | Query | 0.052 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 07:44:28 | Query | 0.123 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 10:21:28 | Query | 0.144 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 12:55:28 | Query | 0.004 | Sending data | select i.itemid,i.hostid,i.templateid from items i where i.flags | | 18 | 2024-04-20 15:35:28 | Query | 0.092 | Sending 
data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 18:12:28 | Query | 0.041 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 20:47:28 | Query | 0.113 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 23:25:28 | Query | 0.101 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 02:03:28 | Query | 0.120 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 04:42:28 | Query | 0.099 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 07:18:28 | Query | 0.015 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 12:32:28 | Query | 0.018 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 15:06:28 | Query | 0.091 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 20:16:28 | Query | 0.012 | Sending data | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-22 06:44:28 | Query | 0.161 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 09:21:28 | Query | 0.000 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 11:54:28 | Query | 0.020 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 14:23:28 | Query | 0.067 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 16:59:28 | Query | 0.128 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 22:05:28 | Query | 0.078 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 00:38:28 | Query | 0.084 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 03:15:28 | Query | 0.098 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 05:52:28 | Query | 0.000 | starting | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 08:27:28 | Query | 0.011 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-23 10:58:28 | Query | 0.000 | Sending data | select i.itemid,i.hostid,i.templateid from items i inner join ho | | 18 | 2024-04-23 13:31:28 | Query | 0.110 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 16:01:28 | Query | 0.023 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 18:35:28 | Query | 0.095 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 21:10:28 | Query | 0.017 | Writing to net | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-23 23:44:28 | Query | 0.014 | Sending data | select triggerid,description,expression,error,priority,type,valu | | 18 | 2024-04-24 02:21:28 | Query | 0.024 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-24 07:33:28 | Query | 0.046 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | 
+---------------+---------------------+---------+-------+----------------+----------------------------------------------------------------------+

It is also important to note that entries only appear in this report when the thread has done SOMETHING (we have filtered out the Sleep state). It is also interesting that we do not see this (persistent) connection before 17 April, but at the moment I have NO explanation for this from an operational point of view (restart etc.). Probably the application (Zabbix) would have to explain this.

Global variables

The information in the global_variables table is also interesting:

SQL> SELECT variable_name, ts, variable_value FROM global_variables WHERE machine_name = @machine_name AND variable_name IN ( SELECT variable_name FROM global_variables WHERE machine_name = @machine_name GROUP BY variable_name HAVING COUNT(*) > 1 ) ORDER BY ts, variable_name ; +---------------------------+---------------------+----------------+ | variable_name | ts | variable_value | +---------------------------+---------------------+----------------+ | auto_increment_increment | 2024-03-09 22:10:42 | 1 | | auto_increment_offset | 2024-03-09 22:10:42 | 1 | | read_only | 2024-03-09 22:10:42 | OFF | | slave_parallel_max_queued | 2024-03-09 22:10:42 | 131072 | | slave_parallel_threads | 2024-03-09 22:10:42 | 0 | | slave_parallel_workers | 2024-03-09 22:10:42 | 0 | | slave_skip_errors | 2024-03-09 22:10:42 | OFF | | system_time_zone | 2024-03-09 22:10:42 | CET | | read_only | 2024-03-27 09:42:50 | ON | | slave_skip_errors | 2024-03-27 12:33:13 | 1032 | | slave_skip_errors | 2024-03-27 12:35:13 | OFF | | slave_skip_errors | 2024-03-27 12:42:13 | 1032 | | slave_skip_errors | 2024-03-27 12:50:13 | OFF | | slave_parallel_threads | 2024-04-02 10:17:32 | 8 | | slave_parallel_workers | 2024-04-02 10:17:32 | 8 | | slave_parallel_max_queued | 2024-04-02 10:22:32 | 1048576 | | slave_parallel_max_queued | 2024-04-02 10:23:32 | 4194304 | | slave_parallel_max_queued | 2024-04-02 10:25:32 | 16777216 | | slave_parallel_threads | 2024-04-02 10:25:32 | 16 | | slave_parallel_workers | 2024-04-02 10:25:32 | 16 | | slave_parallel_threads | 2024-04-02 10:28:32 | 32 | | slave_parallel_workers | 2024-04-02 10:28:32 | 32 | | auto_increment_increment | 2024-04-02 10:39:32 | 2 | | auto_increment_offset | 2024-04-02 10:39:32 | 2 | | slave_parallel_max_queued | 2024-04-02 10:57:32 | 131072 | | slave_parallel_threads | 2024-04-02 10:57:32 | 0 | | slave_parallel_workers | 2024-04-02 10:57:32 | 0 | | system_time_zone | 2024-04-02 10:57:32 | CEST | | slave_parallel_max_queued | 2024-04-16 14:06:32 | 16777216 | | slave_parallel_threads | 2024-04-16 14:06:32 | 8 | | slave_parallel_workers | 2024-04-16 14:06:32 | 8 | | slave_parallel_max_queued | 2024-04-16 14:26:32 | 131072 | | slave_parallel_threads | 2024-04-16 14:26:32 | 0 | | slave_parallel_workers | 2024-04-16 14:26:32 | 0 | | slave_parallel_max_queued | 2024-04-17 09:03:32 | 16777216 | | slave_parallel_threads | 2024-04-17 09:03:32 | 16 | | slave_parallel_workers | 2024-04-17 09:03:32 | 16 | | slave_parallel_max_queued | 2024-04-24 08:26:32 | 131072 | | slave_parallel_threads | 2024-04-24 08:26:32 | 0 | | slave_parallel_workers | 2024-04-24 08:26:32 | 0 | | read_only | 2024-04-24 08:42:32 | OFF | +---------------------------+---------------------+----------------+

Here you can see very precisely when and what was done to the database:

  • On 9 March, dbstat was installed for the first time.
  • Then on 27 March (before Easter) there seem to have been problems with the replication (here the new version of dbstat was installed, which allows simultaneous collection on master and slave. This led to replication errors, which were partially rectified).
  • On 2 April (after Easter) we then tried to catch up with parallel replication. You can also see that AUTO_INCREMENT_OFFSET and AUTO_INCREMENT_INCREMENT have been changed. Here we have corrected an error in the database configuration...
  • You can also see that the time zone has changed from CET to CEST (summer time!) Why only on 2 April is not entirely clear to me. (Maybe because it came via replication?)
  • Then on 16 and 17 April we tried to reproduce a "bug" in the parallel replication. Apparently we did not reset the value: only after the restart on 24 April (usual fortnightly maintenance window) was the value reset again.
  • On 24 April, you can also see that the database has now assumed the role of the active master (read_only = off). A graceful switchover has therefore taken place...

Conclusion: A very useful feature to see when something has been changed. Although I have followed all these operations closely, I am still amazed at the informative value of this feature. I would like to see it installed in all databases...

Metadata Lock and InnoDB Transaction Lock

Unfortunately, due to the low traffic on our databases, we do not see too much exciting stuff here.

Here are the metadata locks that we have "caught" on the master in the last 24 hours:

+---------------+---------------------+--------+-----------------+--------------+---------------+-----------------+----------------------------------------------------------------------+ | connection_id | ts | user | host | table_schema | table_name | state | SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64) | +---------------+---------------------+--------+-----------------+--------------+---------------+-----------------+----------------------------------------------------------------------+ | 18 | 2024-04-23 14:16:47 | zabbix | localhost:51252 | zabbix | triggers | Writing to net | select triggerid,description,expression,error,priority,type,valu | | 1325025 | 2024-04-23 16:01:47 | zabbix | localhost:50150 | | | init for update | delete from history_text where itemid=85477 and clock<1678167661 | | 1325025 | 2024-04-23 16:01:47 | zabbix | localhost:50150 | zabbix | history_text | init for update | delete from history_text where itemid=85477 and clock<1678167661 | | 1365229 | 2024-04-24 02:13:47 | root | localhost:38096 | dbstat | global_status | Writing to net | SELECT /*!40001 SQL_NO_CACHE */ `machine_name`, `variable_name`, | | 18 | 2024-04-24 03:10:47 | zabbix | localhost:51252 | zabbix | item_tag | Writing to net | select itemtagid,itemid,tag,value from item_tag | | 1368524 | 2024-04-24 04:41:47 | zabbix | localhost:38112 | | | | NULL | | 1368524 | 2024-04-24 04:41:47 | zabbix | localhost:38112 | zabbix | history_uint | | NULL | | 18 | 2024-04-24 05:46:47 | zabbix | localhost:51252 | zabbix | item_tag | Sending data | select itemtagid,itemid,tag,value from item_tag | +---------------+---------------------+--------+-----------------+--------------+---------------+-----------------+----------------------------------------------------------------------+
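The query behind this report is not shown above. Here is a sketch that could produce output of this shape, assuming the dbstat metadata_lock table carries the columns visible in the header plus a machine_name column like the other dbstat tables:

SELECT connection_id, ts, user, host, table_schema, table_name, state
     , SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64)
  FROM metadata_lock
 WHERE machine_name = @machine_name
   AND ts > NOW() - INTERVAL 24 HOUR
 ORDER BY ts ASC
;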

We have not found any InnoDB locks in the last 24 hours, and this has now been the case for 7 days in a row.

It would be interesting to see a system where more is happening...

Global status

If a normal database monitoring such as the FromDual Performance Monitor for MariaDB and MySQL (fpmmm) with Zabbix is used, this feature is not absolutely necessary. However, most of our customers do not have any useful monitoring in use. Therefore, this feature would be very useful for post-mortem analyses...

For example InnoDB Row Lock Waits, minute-granular over the last 30 days (analogous to sar from sysstat):

Here you can see that the database was restarted on 10 April between 08:37 and 08:41. You could also find this out in another way, but unfortunately this is often not possible for various reasons (error log rotated away, etc.).

The trend break around 2 April is also interesting. At this time we were experimenting with parallel replication. It should not have been a failover (see GLOBAL VARIABLES, above).

Although parallel replication was later deactivated again, there were more locks. A similar situation occurred around 16/17 April: here too we played around with parallel replication, which seems to have had an effect on the locking behaviour.
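The minute-granular raw values behind such a graph can be pulled straight from the global_status table. A sketch only, assuming the counter is stored in a variable_value column and using MariaDB 10.2 or newer for the window function; status counters are cumulative, so the per-minute delta is taken:

SELECT ts
     , variable_value - LAG(variable_value) OVER (ORDER BY ts) AS row_lock_waits_per_minute
  FROM global_status
 WHERE machine_name = @machine_name
   AND variable_name = 'Innodb_row_lock_waits'
   AND ts > NOW() - INTERVAL 30 DAY
 ORDER BY ts ASC
;  -- negative deltas indicate a server restart (counter reset)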

Even with this feature, there are many ways to analyse the database. Unfortunately, our database is relatively boring: Mainly monotonous traffic (which is plentiful due to the monitoring) and very little exceptional traffic.

Remark: This text has been translated with the assistance of DeepL.

Taxonomy upgrade extras: performance, monitoring, performance monitoring, metadata lock, locking, performance_schema

dbstat for MariaDB after one month of productive use

Shinguz - Fri, 2024-04-26 14:13
Table of contents
Review

After we introduced dbstat for MariaDB (and MySQL) a good 5 weeks ago, we naturally also rolled it out on our systems to test the behaviour in daily use (eat your own dog food).

This went quite well until we came up with the idea of activating dbstat on the passive dbstat node on our MariaDB active/passive master/master replication cluster (a similar situation would also occur with a Galera cluster). We realised that the design of dbstat still had potential. After this problem was fixed (v0.0.2 and v0.0.3) and the problem of how to activate events on master AND slave was solved (MDEV-33782: Event is always disabled on slave), everything seemed fine at first glance. Unfortunately, we did not realise that the data also had to be adjusted. As a result, our replication came to a complete stop over the Easter holidays, which then led to another problem when catching up (MDEV-33923: MariaDB parallel replication causes Foreign Key errors).

After this minor incident was also resolved, dbstat has been running flawlessly on our MariaDB master/master replication cluster ever since... The product dbstat is open source (GPLv2) and can be downloaded from GitHub.

One month later

Databases should NOT grow over time but only over the number of {customers, products, etc.} once the desired equilibrium (steady state) is reached. In our dbstat installation, we have set this equilibrium state to 30 days. So it is now time that the size of dbstat stabilises and the database stops growing...

It would also be interesting to understand what practical use dbstat has. That is why we have now set to work and are trying to analyse the results of dbstat.

Here is an overview of the 11 current running database events:

SQL> SELECT db, name, definer, CONCAT(interval_value, ' ', interval_field) AS 'interval' , last_executed, ends, status FROM mysql.event ORDER BY db, name ASC ; +--------+-------------------------+------------------+----------+---------------------+------+---------+ | db | name | definer | interval | last_executed | ends | status | +--------+-------------------------+------------------+----------+---------------------+------+---------+ | dbstat | gather_global_status | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:14 | NULL | ENABLED | | dbstat | gather_global_variables | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:32 | NULL | ENABLED | | dbstat | gather_metadata_lock | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:47 | NULL | ENABLED | | dbstat | gather_processlist | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:28 | NULL | ENABLED | | dbstat | gather_table_size | dbstat@localhost | 1 DAY | 2024-04-24 00:04:00 | NULL | ENABLED | | dbstat | gather_trx_and_lck | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:35 | NULL | ENABLED | | dbstat | purge_global_status | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:08 | NULL | ENABLED | | dbstat | purge_metadata_lock | dbstat@localhost | 5 MINUTE | 2024-04-24 07:44:37 | NULL | ENABLED | | dbstat | purge_processlist | dbstat@localhost | 1 MINUTE | 2024-04-24 07:43:58 | NULL | ENABLED | | dbstat | purge_table_size | dbstat@localhost | 5 MINUTE | 2024-04-24 07:40:04 | NULL | ENABLED | | dbstat | purge_trx_and_lck | dbstat@localhost | 1 MINUTE | 2024-04-24 07:44:45 | NULL | ENABLED | +--------+-------------------------+------------------+----------+---------------------+------+---------+
Size of the tables

Firstly, the growth of dbstat itself is interesting. But of course this evaluation can also be carried out for any other database, table or catalogue (coming in MariaDB 11.7?):

SQL> SET SESSION sql_mode='STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,only_full_group_by'; SQL> SET @machine_name = @@hostname; SQL> SELECT `table_schema`, SUBSTR(`ts`, 1, 10) AS date , ROUND(SUM(`data_length`)/1024/1024, 1) AS data_mb , ROUND(SUM(`index_length`)/1024/1024, 1) AS index_mb , ROUND(SUM(`data_free`)/1024/1024, 1) AS free_mb , ROUND((SUM(`data_length`) + SUM(`index_length`) + SUM(`data_free`))/1024/1024, 1) AS total_mb , ROUND(SUM(`table_rows`)/1000/1000, 1) AS rows_m FROM `table_size` WHERE `machine_name` = @machine_name AND `table_catalog` = 'def' AND `table_schema` = 'dbstat' GROUP BY `table_catalog`, `table_schema`, `date` ORDER BY `table_catalog`, `table_schema`, `date` ASC ; +--------------+------------+---------+----------+---------+----------+--------+ | table_schema | date | data_mb | index_mb | free_mb | total_mb | rows_m | +--------------+------------+---------+----------+---------+----------+--------+ | dbstat | 2024-03-26 | 762.8 | 1128.6 | 18.0 | 1909.4 | 10.9 | | dbstat | 2024-03-27 | 835.8 | 1241.6 | 17.0 | 2094.4 | 11.1 | | dbstat | 2024-03-28 | 837.8 | 1241.6 | 14.0 | 2093.4 | 11.8 | | dbstat | 2024-03-29 | 960.7 | 1443.6 | 18.0 | 2422.4 | 14.2 | | dbstat | 2024-03-30 | 960.7 | 1443.6 | 17.0 | 2421.4 | 15.0 | | dbstat | 2024-03-31 | 1057.7 | 1604.6 | 20.0 | 2682.4 | 16.9 | | dbstat | 2024-04-01 | 1057.7 | 1602.6 | 21.0 | 2681.4 | 17.6 | | dbstat | 2024-04-02 | 1172.7 | 1797.6 | 22.0 | 2992.3 | 17.8 | | dbstat | 2024-04-03 | 1442.8 | 2333.7 | 12.0 | 3788.5 | 22.8 | | dbstat | 2024-04-04 | 1649.8 | 2723.7 | 13.0 | 4386.5 | 24.4 | | dbstat | 2024-04-05 | 1649.8 | 2722.7 | 14.0 | 4386.5 | 26.0 | | dbstat | 2024-04-06 | 1821.8 | 3034.8 | 13.0 | 4869.6 | 24.6 | | dbstat | 2024-04-07 | 1821.8 | 3034.8 | 14.0 | 4870.6 | 26.2 | | dbstat | 2024-04-08 | 1989.9 | 3344.8 | 12.0 | 5346.6 | 29.9 | | dbstat | 2024-04-09 | 1990.9 | 3343.8 | 14.0 | 5348.6 | 31.5 | | dbstat | 2024-04-10 | 2193.9 | 3712.8 | 13.0 | 5919.7 | 31.6 | | dbstat | 2024-04-11 | 2193.9 | 3712.8 | 15.0 | 5921.7 | 31.1 | | dbstat | 2024-04-12 | 2405.8 | 4119.1 | 12.0 | 6537.0 | 34.9 | | dbstat | 2024-04-13 | 2405.8 | 4119.1 | 14.0 | 6538.9 | 35.7 | | dbstat | 2024-04-14 | 2480.8 | 4278.9 | 15.0 | 6774.8 | 36.2 | | dbstat | 2024-04-15 | 2560.8 | 4443.7 | 12.0 | 7016.5 | 37.5 | | dbstat | 2024-04-16 | 2560.8 | 4443.7 | 12.0 | 7016.5 | 38.2 | | dbstat | 2024-04-17 | 2640.8 | 4610.6 | 18.0 | 7269.4 | 38.5 | | dbstat | 2024-04-18 | 2640.9 | 4611.6 | 14.0 | 7266.5 | 39.7 | | dbstat | 2024-04-19 | 2743.9 | 4826.5 | 14.0 | 7584.3 | 36.9 | | dbstat | 2024-04-20 | 2826.9 | 4995.5 | 14.0 | 7836.4 | 38.3 | | dbstat | 2024-04-21 | 2830.9 | 4997.4 | 18.0 | 7846.3 | 39.2 | | dbstat | 2024-04-22 | 2919.9 | 5177.4 | 14.0 | 8111.3 | 43.2 | | dbstat | 2024-04-23 | 2923.0 | 5177.3 | 16.0 | 8116.3 | 44.1 | | dbstat | 2024-04-24 | 3020.0 | 5376.3 | 16.0 | 8412.3 | 41.0 | | dbstat | 2024-04-25 | 3024.0 | 5377.3 | 17.0 | 8418.3 | 40.9 | +--------------+------------+---------+----------+---------+----------+--------+

If you take the disc space in the O/S for comparison:

# du -shc *.ibd 8.6G global_status.ibd 308K global_variables.ibd 692K metadata_lock.ibd 97M processlist.ibd 18M table_size.ibd 212K trx_and_lck.ibd 8.7G total

you can see that the values from the database are approximately correct (5% error)...

Important: The database dbstat reaches a size of approx. 9 Gbyte after approx. one month on a not particularly large database system.

You can also see that the size of the database is only just stabilising:

If you want to know more precisely which tables are responsible for which part of the data volume, you can also zoom in or drill down into the data:

SQL> SELECT `table_name`, SUBSTR(`ts`, 1, 10) AS date , ROUND(`data_length`/1024/1024, 1) AS data_mb , ROUND(`index_length`/1024/1024, 1) AS index_mb , ROUND(`data_free`/1024/1024, 1) AS free_mb , ROUND((`data_length` + `index_length` + `data_free`)/1024/1024, 1) AS total_mb , ROUND((`data_length` + `index_length` + `data_free`)/1024/1024/8418.26*100, 1) AS pct , ROUND(`table_rows`/1000/1000, 1) AS rows_m FROM `table_size` WHERE `machine_name` = @machine_name AND `table_catalog` = 'def' AND `table_schema` = 'dbstat' AND SUBSTR(`ts`, 1, 10) = CURRENT_DATE() ORDER BY rows_m DESC ; +------------------+------------+---------+----------+---------+----------+------+--------+ | table_name | date | data_mb | index_mb | free_mb | total_mb | pct | rows_m | +------------------+------------+---------+----------+---------+----------+------+--------+ | global_status | 2024-04-25 | 2949.9 | 5356.9 | 5.0 | 8311.8 | 98.7 | 40.4 | | processlist | 2024-04-25 | 68.2 | 17.1 | 7.0 | 92.2 | 1.1 | 0.4 | | global_variables | 2024-04-25 | 0.1 | 0.1 | 0.0 | 0.2 | 0.0 | 0.0 | | metadata_lock | 2024-04-25 | 0.4 | 0.2 | 0.0 | 0.6 | 0.0 | 0.0 | | table_size | 2024-04-25 | 5.4 | 3.1 | 5.0 | 13.5 | 0.2 | 0.0 | | trx_and_lck | 2024-04-25 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +------------------+------------+---------+----------+---------+----------+------+--------+

Note: Please excuse the non-use of the window function!

The only real driver for the data volume of this database is the global_status table. This is to be expected (see: see quantity structure of dbstat).

SQL> SELECT SUBSTR(ts, 1, 10) AS date, table_rows/1000/1000 AS k_rows , ROUND(data_length/1024/1024, 1) AS data_mb, ROUND(index_length/1024/1024, 1) AS index_mb, ROUND(data_free/1024/1024, 1) AS free_mb , ROUND((data_length + index_length + data_free)/1024/1024, 1) AS total_mb FROM table_size WHERE `machine_name` = @machine_name AND `table_catalog` = 'def' AND `table_schema` = 'dbstat' AND table_name = 'global_status' AND ts > DATE_SUB(CURRENT_DATE, INTERVAL 10 DAY) ; +------------+-------------+---------+----------+---------+----------+ | date | k_rows | data_mb | index_mb | free_mb | total_mb | +------------+-------------+---------+----------+---------+----------+ | 2024-04-15 | 37.13876300 | 2512.9 | 4433.0 | 4.0 | 6949.9 | | 2024-04-16 | 37.94217200 | 2512.9 | 4433.0 | 4.0 | 6949.9 | + 0M | 2024-04-17 | 38.19867500 | 2592.9 | 4600.0 | 7.0 | 7199.9 | + 250M | 2024-04-18 | 39.39108500 | 2592.9 | 4600.0 | 5.0 | 7197.9 | - 2M | 2024-04-19 | 36.52539600 | 2691.9 | 4813.0 | 5.0 | 7509.8 | + 312M | 2024-04-20 | 37.99073500 | 2770.9 | 4980.9 | 6.0 | 7757.8 | + 248M | 2024-04-21 | 38.79420200 | 2770.9 | 4980.9 | 7.0 | 7758.8 | + 1M | 2024-04-22 | 42.82606200 | 2855.9 | 5158.9 | 6.0 | 8020.8 | + 263M | 2024-04-23 | 43.62953000 | 2855.9 | 5158.9 | 7.0 | 8021.8 | + 1M | 2024-04-24 | 40.54342200 | 2949.9 | 5356.9 | 7.0 | 8313.8 | + 292M | 2024-04-25 | 40.43067700 | 2949.9 | 5356.9 | 5.0 | 8311.8 | - 2M +------------+-------------+---------+----------+---------+----------+

Note: Sorry, I should really familiarise myself with the window functions...

If we analyse the data a bit more closely, we see that the number of rows has slowly stabilised over the last 4 days (note: table_rows is calculated (from the number of blocks and the average row length?) and is not an exact value), but the "amount of data" has continued to increase until yesterday, which is probably due to the fragmentation of the tables and indexes...

The primary key of the global_status table was chosen to optimise the localisation of the data:

PRIMARY KEY (`machine_name`,`variable_name`,`ts`),

The situation should calm down in the next few days. In 2 to 4 weeks we will have to check the situation again.

Summary: I would say that this feature fulfils the requirements and helps to understand the data growth.

List of processes

Since we do not have any serious load issues in our databases, this feature is not that interesting in our case. For example, we can see what a (persistent) connection has done:

SQL> SELECT connection_id, ts, command, time, state, SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64) FROM processlist WHERE machine_name = @machine_name AND command != 'Sleep' AND connection_id = @connection_id AND state NOT IN ( 'Waiting for next activation' , 'Master has sent all binlog to slave; waiting for more updates' , 'Waiting for master to send event' , 'Slave has read all relay log; waiting for more updates' ) ORDER BY ts ASC ; +---------------+---------------------+---------+-------+----------------+----------------------------------------------------------------------+ | connection_id | ts | command | time | state | SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64) | +---------------+---------------------+---------+-------+----------------+----------------------------------------------------------------------+ | 18 | 2024-04-17 12:30:28 | Query | 0.029 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-17 14:58:28 | Query | 0.009 | Writing to net | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-18 06:24:28 | Query | 0.003 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-18 11:34:28 | Query | 0.030 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-18 16:39:28 | Query | 0.006 | Sending data | select itemid,functionid,name,parameter,triggerid from functions | | 18 | 2024-04-18 19:12:28 | Query | 0.014 | Sending data | select triggerid,description,expression,error,priority,type,valu | | 18 | 2024-04-18 21:49:28 | Query | 0.004 | Writing to net | select i.itemid,i.hostid,i.templateid from items i inner join ho | | 18 | 2024-04-19 00:21:28 | Query | 0.032 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-19 02:59:28 | Query | 0.017 | Writing to net | select triggerid,description,expression,error,priority,type,valu | | 18 | 2024-04-19 05:39:28 | Query | 0.052 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-19 08:19:28 | Query | 0.000 | Statistics | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-19 13:26:28 | Query | 0.075 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-19 15:57:28 | Query | 0.027 | Writing to net | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-19 18:33:28 | Query | 0.010 | Sending data | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-19 21:10:28 | Query | 0.008 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-19 23:50:28 | Query | 0.067 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 02:28:28 | Query | 0.008 | Sending data | select triggerid,description,expression,error,priority,type,valu | | 18 | 2024-04-20 05:08:28 | Query | 0.052 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 07:44:28 | Query | 0.123 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 10:21:28 | Query | 0.144 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 12:55:28 | Query | 0.004 | Sending data | select i.itemid,i.hostid,i.templateid from items i where i.flags | | 18 | 2024-04-20 15:35:28 | Query | 0.092 | Sending 
data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 18:12:28 | Query | 0.041 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 20:47:28 | Query | 0.113 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-20 23:25:28 | Query | 0.101 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 02:03:28 | Query | 0.120 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 04:42:28 | Query | 0.099 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 07:18:28 | Query | 0.015 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 12:32:28 | Query | 0.018 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 15:06:28 | Query | 0.091 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-21 20:16:28 | Query | 0.012 | Sending data | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-22 06:44:28 | Query | 0.161 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 09:21:28 | Query | 0.000 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 11:54:28 | Query | 0.020 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 14:23:28 | Query | 0.067 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 16:59:28 | Query | 0.128 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-22 22:05:28 | Query | 0.078 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 00:38:28 | Query | 0.084 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 03:15:28 | Query | 0.098 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 05:52:28 | Query | 0.000 | starting | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 08:27:28 | Query | 0.011 | Sending data | select pp.item_preprocid,pp.itemid,pp.type,pp.params,pp.step,h.h | | 18 | 2024-04-23 10:58:28 | Query | 0.000 | Sending data | select i.itemid,i.hostid,i.templateid from items i inner join ho | | 18 | 2024-04-23 13:31:28 | Query | 0.110 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 16:01:28 | Query | 0.023 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 18:35:28 | Query | 0.095 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-23 21:10:28 | Query | 0.017 | Writing to net | select itemtagid,itemid,tag,value from item_tag | | 18 | 2024-04-23 23:44:28 | Query | 0.014 | Sending data | select triggerid,description,expression,error,priority,type,valu | | 18 | 2024-04-24 02:21:28 | Query | 0.024 | Sending data | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | | 18 | 2024-04-24 07:33:28 | Query | 0.046 | Writing to net | select i.itemid,i.hostid,i.status,i.type,i.value_type,i.key_,i.s | 
+---------------+---------------------+---------+-------+----------------+----------------------------------------------------------------------+

It is also important that we only see the entries in this report when the thread has done SOMETHING (we have hidden state Sleep). It is also interesting that we do not see this (persistent) connection before 17 April, but at the moment I have NO explanation for this from an operational point of view (restart etc.). Probably the application (Zabbix) has to explain this.

Global variables

The information in the global_variables table is also interesting:

SQL> SELECT variable_name, ts, variable_value FROM global_variables WHERE machine_name = @machine_name AND variable_name IN ( SELECT variable_name FROM global_variables WHERE machine_name = @machine_name GROUP BY variable_name HAVING COUNT(*) > 1 ) ORDER BY ts, variable_name ; +---------------------------+---------------------+----------------+ | variable_name | ts | variable_value | +---------------------------+---------------------+----------------+ | auto_increment_increment | 2024-03-09 22:10:42 | 1 | | auto_increment_offset | 2024-03-09 22:10:42 | 1 | | read_only | 2024-03-09 22:10:42 | OFF | | slave_parallel_max_queued | 2024-03-09 22:10:42 | 131072 | | slave_parallel_threads | 2024-03-09 22:10:42 | 0 | | slave_parallel_workers | 2024-03-09 22:10:42 | 0 | | slave_skip_errors | 2024-03-09 22:10:42 | OFF | | system_time_zone | 2024-03-09 22:10:42 | CET | | read_only | 2024-03-27 09:42:50 | ON | | slave_skip_errors | 2024-03-27 12:33:13 | 1032 | | slave_skip_errors | 2024-03-27 12:35:13 | OFF | | slave_skip_errors | 2024-03-27 12:42:13 | 1032 | | slave_skip_errors | 2024-03-27 12:50:13 | OFF | | slave_parallel_threads | 2024-04-02 10:17:32 | 8 | | slave_parallel_workers | 2024-04-02 10:17:32 | 8 | | slave_parallel_max_queued | 2024-04-02 10:22:32 | 1048576 | | slave_parallel_max_queued | 2024-04-02 10:23:32 | 4194304 | | slave_parallel_max_queued | 2024-04-02 10:25:32 | 16777216 | | slave_parallel_threads | 2024-04-02 10:25:32 | 16 | | slave_parallel_workers | 2024-04-02 10:25:32 | 16 | | slave_parallel_threads | 2024-04-02 10:28:32 | 32 | | slave_parallel_workers | 2024-04-02 10:28:32 | 32 | | auto_increment_increment | 2024-04-02 10:39:32 | 2 | | auto_increment_offset | 2024-04-02 10:39:32 | 2 | | slave_parallel_max_queued | 2024-04-02 10:57:32 | 131072 | | slave_parallel_threads | 2024-04-02 10:57:32 | 0 | | slave_parallel_workers | 2024-04-02 10:57:32 | 0 | | system_time_zone | 2024-04-02 10:57:32 | CEST | | slave_parallel_max_queued | 2024-04-16 14:06:32 | 16777216 | | slave_parallel_threads | 2024-04-16 14:06:32 | 8 | | slave_parallel_workers | 2024-04-16 14:06:32 | 8 | | slave_parallel_max_queued | 2024-04-16 14:26:32 | 131072 | | slave_parallel_threads | 2024-04-16 14:26:32 | 0 | | slave_parallel_workers | 2024-04-16 14:26:32 | 0 | | slave_parallel_max_queued | 2024-04-17 09:03:32 | 16777216 | | slave_parallel_threads | 2024-04-17 09:03:32 | 16 | | slave_parallel_workers | 2024-04-17 09:03:32 | 16 | | slave_parallel_max_queued | 2024-04-24 08:26:32 | 131072 | | slave_parallel_threads | 2024-04-24 08:26:32 | 0 | | slave_parallel_workers | 2024-04-24 08:26:32 | 0 | | read_only | 2024-04-24 08:42:32 | OFF | +---------------------------+---------------------+----------------+

Here you can see very precisely when and what was done to the database:

  • On 9 March, dbstat was installed for the first time.
  • Then on 27 March (before Easter) there seem to have been problems with the replication (here the new version of dbstat was installed, which allows simultaneous collection on master and slave. This led to replication errors, which were partially rectified).
  • On 2 April (after Easter) we then tried to catch up with parallel replication. You can also see that AUTO_INCREMENT_OFFSET and AUTO_INCREMENT_INCREMENT have been changed. Here we have corrected an error in the database configuration...
  • You can also see that the time zone has changed from CET to CEST (summer time!) Why only on 2 April is not entirely clear to me. (Maybe because it came via replication?)
  • Then on 16 and 17 April we tried to reproduce a "bug" in the parallel replication. Apparently we did not reset the value. Because only after the restart on 24 April (usual fortnightly maintenance window) was the value reset again.
  • On 24 April, you can also see that the database has now assumed the role of the active master (read_only = off). A gracefull switchover has therefore taken place...

Conclusion: A very useful feature to see when something has been changed. Although I have followed all these operations closely, I am still amazed at the informative value of this feature. I would like to see it installed in all databases...

Metadata Lock and InnoDB Transaction Lock

Unfortunately, due to the low traffic on our databases, we do not see too much exciting stuff here.

Here are the metadata locks that we have "caught" on the master in the last 24 hours:

+---------------+---------------------+--------+-----------------+--------------+---------------+-----------------+----------------------------------------------------------------------+ | connection_id | ts | user | host | table_schema | table_name | state | SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64) | +---------------+---------------------+--------+-----------------+--------------+---------------+-----------------+----------------------------------------------------------------------+ | 18 | 2024-04-23 14:16:47 | zabbix | localhost:51252 | zabbix | triggers | Writing to net | select triggerid,description,expression,error,priority,type,valu | | 1325025 | 2024-04-23 16:01:47 | zabbix | localhost:50150 | | | init for update | delete from history_text where itemid=85477 and clock<1678167661 | | 1325025 | 2024-04-23 16:01:47 | zabbix | localhost:50150 | zabbix | history_text | init for update | delete from history_text where itemid=85477 and clock<1678167661 | | 1365229 | 2024-04-24 02:13:47 | root | localhost:38096 | dbstat | global_status | Writing to net | SELECT /*!40001 SQL_NO_CACHE */ `machine_name`, `variable_name`, | | 18 | 2024-04-24 03:10:47 | zabbix | localhost:51252 | zabbix | item_tag | Writing to net | select itemtagid,itemid,tag,value from item_tag | | 1368524 | 2024-04-24 04:41:47 | zabbix | localhost:38112 | | | | NULL | | 1368524 | 2024-04-24 04:41:47 | zabbix | localhost:38112 | zabbix | history_uint | | NULL | | 18 | 2024-04-24 05:46:47 | zabbix | localhost:51252 | zabbix | item_tag | Sending data | select itemtagid,itemid,tag,value from item_tag | +---------------+---------------------+--------+-----------------+--------------+---------------+-----------------+----------------------------------------------------------------------+

We have not found any InnoDB locks in the last 24 hours for 7 days in a row.

It would be interesting to see a system where more is happening...

Global status

If a normal database monitoring such as the FromDual Performance Monitor for MariaDB and MySQL (fpmmm) with Zabbix is used, this feature is not absolutely necessary. However, most of our customers do not have any useful monitoring in use. Therefore, this feature would be very useful for post-mortem analyses...

For example InnoDB Row Lock Waits, minute-granular over the last 30 days (analogue to sar from sysstat):

Here you can see that the database was restarted on 10 April between 08:37 and 08:41. You could also find this out in another way, but unfortunately this is often not possible for various reasons (error log rotated away, etc.).

The trend break around 2 April is also interesting. At this time we were experimenting with parallel replication. It should not have been a failover (see GLOBAL VARIABLES, above).

Although parallel replication was later deactivated again, there were more locks. A similar situation around the 16th/17th of April, here too we played around with parallel replication, which seems to have had an effect on the locking behaviour.

Even with this feature, there are many ways to analyse the database. Unfortunately, our database is relatively boring: Mainly monotonous traffic (which is plentiful due to the monitoring) and very little exceptional traffic.

Remark: This text has been translated with the assistance of DeepL.

Taxonomy upgrade extras: performance, monitoring, performance monitoring, metadata lock, locking, performance_schema

MariaDB's parallel replication to catch up

Shinguz - Tue, 2024-04-09 10:53

Due to an application error, our replication stopped for 5 days (over Easter). After the problem was solved, the replication was supposed to catch up, which turned out to be very slow. All the usual tricks (innodb_flush_log_at_trx_commit, sync_binlog, etc.) had already been exhausted. So we tried our hand at parallel replication of the MariaDB server.

Parallel replication is deactivated by default:

SQL> SHOW GLOBAL VARIABLES LIKE '%parallel%'; +-------------------------------+------------+ | Variable_name | Value | +-------------------------------+------------+ | slave_domain_parallel_threads | 0 | | slave_parallel_max_queued | 131072 | | slave_parallel_mode | optimistic | | slave_parallel_threads | 0 | | slave_parallel_workers | 0 | +-------------------------------+------------+

Parallel replication is activated by setting the server variable slave_parallel_threads:

SQL> SET GLOBAL slave_parallel_threads = 8; ERROR 1198 (HY000): This operation cannot be performed as you have a running slave ''; run STOP SLAVE '' first

However, this can only be done while replication is stopped:

SQL> STOP SLAVE; SQL> SET GLOBAL slave_parallel_threads = 8; SQL> START SLAVE;

Replication then caught up a little faster. However, as we were impatient, we tried to make it even faster. With the command:

SQL> SHOW SLAVE STATUS\G ... Slave_SQL_Running_State: Waiting for room in worker thread event queue ...

we found the following message. You would also see it using the SHOW PROCESSLIST command:

SQL> SHOW PROCESSLIST; +--------+-------------+- ... -+-----------+------+-----------------------------------------------+- ... -+ | Id | User | ... | Command | Time | State | ... | +--------+-------------+- ... -+-----------+------+-----------------------------------------------+- ... -+ ... ... ... | 212496 | system user | ... | Slave_SQL | 16 | Waiting for room in worker thread event queue | ... | +--------+-------------+- ... -+-----------+------+-----------------------------------------------+- ... -+

According to the documentation, it can help in this case to increase the slave_parallel_max_queued variable slightly (caution: OOM risk!).

SQL> STOP SLAVE; SQL> SET GLOBAL slave_parallel_max_queued = 1*1024*1024; SQL> SHOW GLOBAL VARIABLES LIKE '%parallel%'; +-------------------------------+------------+ | Variable_name | Value | +-------------------------------+------------+ | slave_domain_parallel_threads | 0 | | slave_parallel_max_queued | 1048576 | | slave_parallel_mode | optimistic | | slave_parallel_threads | 8 | | slave_parallel_workers | 8 | +-------------------------------+------------+ SQL> START SLAVE;

We played around with slave_parallel_threads in the range from 4 to 32 (on 8 vCores) and with slave_parallel_max_queued in the range from 128 KiB to 32 MiB.
Caution: Do not overdo it: 32 threads × 32 MiB = 1 GiB of RAM (OOM)!

To find out which values are optimal, you would have to test and measure more extensively. In any case, replication made up the 5-day backlog in about an hour, catching up somewhat faster towards the end than at the beginning, which was hopefully due to our configuration adjustments.

Depending on what DML statements are currently running, you can see that all threads can be used or that some threads have to wait for other threads:

SQL> SHOW PROCESSLIST; +--------+-----------------+--------------+--------+---------------------------------------------------------------+------------+ | Id | User | Command | Time | State | Info | +--------+-----------------+--------------+--------+---------------------------------------------------------------+------------+ | 2 | event_scheduler | Daemon | 506179 | Waiting for next activation | NULL | | 191154 | root | Query | 0 | starting | show pr... | | 208669 | replication | Binlog Dump | 297 | Master has sent all binlog to slave; waiting for more updates | NULL | | 212495 | system user | Slave_IO | 20 | Waiting for master to send event | NULL | | 212497 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212498 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212499 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212500 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212501 | system user | Slave_worker | 0 | Write_rows_log_event::write_row(-1) on table `history_uint` | insert ... | | 212502 | system user | Slave_worker | 0 | Write_rows_log_event::write_row(-1) on table `history_uint` | insert ... | | 212503 | system user | Slave_worker | 0 | Write_rows_log_event::write_row(-1) on table `history_str` | insert ... | | 212504 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212505 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212506 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212507 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212510 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212509 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212508 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212511 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212512 | system user | Slave_worker | 0 | Waiting for prior transaction to commit | NULL | | 212496 | system user | Slave_SQL | 16 | Waiting for room in worker thread event queue | NULL | +--------+-----------------+--------------+--------+---------------------------------------------------------------+------------+ SQL> SHOW PROCESSLIST; +--------+-----------------+--------------+--------+---------------------------------------------------------------+------------+ | Id | User | Command | Time | State | Info | +--------+-----------------+--------------+--------+---------------------------------------------------------------+------------+ | 2 | event_scheduler | Daemon | 506197 | Waiting for next activation | NULL | | 191154 | root | Query | 0 | starting | show pr... | | 208669 | replication | Binlog Dump | 315 | Master has sent all binlog to slave; waiting for more updates | NULL | | 212495 | system user | Slave_IO | 37 | Waiting for master to send event | NULL | | 212497 | system user | Slave_worker | 0 | Delete_rows_log_event::ha_delete_row(-1) on table `history` | delete ... | | 212498 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212499 | system user | Slave_worker | 0 | Delete_rows_log_event::ha_delete_row(-1) on table `history` | delete ... 
| | 212500 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212501 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212502 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212503 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212504 | system user | Slave_worker | 0 | Delete_rows_log_event::ha_delete_row(-1) on table `history` | delete ... | | 212505 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212506 | system user | Slave_worker | 0 | Delete_rows_log_event::ha_delete_row(-1) on table `history` | delete ... | | 212507 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212510 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212509 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212508 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212511 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212512 | system user | Slave_worker | 0 | Delete_rows_log_event::find_row(-1) on table `history` | delete ... | | 212496 | system user | Slave_SQL | 11 | Waiting for room in worker thread event queue | NULL | +--------+-----------------+--------------+--------+---------------------------------------------------------------+------------+

Our monitoring also showed us that the CPU load went up, the I/O system got more to do and more rows were modified...

What was also noticeable is that with parallel replication, Foreign Key errors suddenly occurred, a phenomenon that we had not observed before:

FromDual.maas2.prod2 - Warning: InnoDB Foreign Key error detected Trigger: InnoDB Foreign Key error detected Trigger status: PROBLEM Trigger severity: Warning Trigger URL: https://fromdual.com/innodb-foreign-key-error-detected Item values: 1 1. InnoDB new Foreign Key error (FromDual.maas2.prod2:FromDual.MySQL.innodb.ForeignKey_new): 1

With the command SHOW ENGINE INNODB STATUS\G you can inspect these accordingly or view them in the monitoring:

------------------------ LATEST FOREIGN KEY ERROR ------------------------ 2024-04-02 10:36:39 0x7f36088ff640 Transaction: TRANSACTION 7199599266, ACTIVE 0 sec inserting mysql tables in use 1, locked 1 6 lock struct(s), heap size 1128, 3 row lock(s), undo log entries 1 MariaDB thread id 228555, OS thread handle 139870048613952, query id 28453893 Write_rows_log_event::write_row(-1) on table `alerts` insert into alerts (alertid,actionid,eventid,userid,clock,mediatypeid,sendto,subject,message,status,error,esc_step,alerttype,acknowledgeid,parameters) values (203687,4,471733,3,1712044003,1,'xxx@fromdual.com','Zabbix server - High: Too many processes on Zabbix server','Trigger: Too many processes on Zabbix server Trigger status: PROBLEM Trigger severity: High Trigger URL: Item values: 309 1. Number of processes (Zabbix server:proc.num[]): 309',3,'',1,0,null,'{}') Foreign key constraint fails for table `zabbix`.`alerts`: , CONSTRAINT `c_alerts_2` FOREIGN KEY (`eventid`) REFERENCES `events` (`eventid`) ON DELETE CASCADE in parent table, in index alerts_3 tuple: DATA TUPLE: 2 fields; ... But in parent table `zabbix`.`events`, in index PRIMARY, the closest match we can find is record: PHYSICAL RECORD: n_fields 12; compact format; info bits 0 ...
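
With optimistic parallel replication, transactions may be applied speculatively and rolled back and retried on conflict, which could explain messages like the one above (our interpretation, not verified). If such side effects are a concern, one option worth testing is to fall back to the conservative parallel mode (a sketch; we have not checked whether this actually makes the messages disappear in our case):

SQL> STOP SLAVE;
SQL> SET GLOBAL slave_parallel_mode = 'conservative';
SQL> START SLAVE;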
Taxonomy upgrade extras: replication, mariadb, parallel, multi-threaded

Building MariaDB Server from the sources

Shinguz - Fri, 2024-04-05 08:47

Recently I had to test a new MariaDB feature that was developed at our request (MDEV-33782). To test this feature I had to build the MariaDB server myself from source, which I have not done for a long time. So a new challenge, especially with CMake...

I followed the MariaDB documentation Get, Build and Test Latest MariaDB the Lazy Way to build the server.

On Ubuntu 22.04 it did not work for me, for reasons unknown to me. So I cloned an Ubuntu 23.04 (Lunar Lobster) LXC container and built the MariaDB server in it.

To make the whole thing work, however, the package sources had to be added to the file /etc/apt/sources.list in the container first:

deb-src http://de.archive.ubuntu.com/ubuntu lunar main restricted universe multiverse deb-src http://de.archive.ubuntu.com/ubuntu lunar-updates main restricted universe multiverse deb-src http://de.archive.ubuntu.com/ubuntu lunar-security main restricted universe multiverse deb-src http://de.archive.ubuntu.com/ubuntu lunar-backports main restricted universe multiverse

Then we proceeded according to the instructions:

shell> apt install build-essential bison shell> apt build-dep mariadb-server

The corresponding branch was cloned:

shell> # git clone https://github.com/andremralves/server.git mariadb-MDEV-33782 shell> # git branch --all shell> git clone --branch MDEV-33782 --single-branch https://github.com/andremralves/server.git mariadb-MDEV-33782 shell> cd mariadb-MDEV-33782 shell> # git checkout 11.5

and then the server was built. This took about 20 minutes on my old machine. CMake initially ran into an error, which was solved by installing the corresponding package (MDEV-33815):

shell> apt install libgnutls28-dev shell> cmake . -DBUILD_CONFIG=mysql_release && make -j8

The tests were executed:

shell> cd mysql-test shell> ./mtr rpl.rpl_create_drop_event Logging: ./mtr rpl.rpl_create_drop_event VS config: vardir: /root/mariadb-MDEV-33782/mysql-test/var Checking leftover processes... Removing old var directory... Creating var directory '/root/mariadb-MDEV-33782/mysql-test/var'... Checking supported features... MariaDB Version 11.5.0-MariaDB - SSL connections supported - binaries built with wsrep patch Collecting tests... Installing system database... ============================================================================== TEST RESULT TIME (ms) or COMMENT -------------------------------------------------------------------------- worker[01] Using MTR_BUILD_THREAD 300, with reserved ports 16000..16019 worker[01] mysql-test-run: WARNING: running this script as _root_ will cause some tests to be skipped rpl.rpl_create_drop_event 'mix' [ pass ] 522 rpl.rpl_create_drop_event 'row' [ pass ] 525 rpl.rpl_create_drop_event 'stmt' [ pass ] 525 -------------------------------------------------------------------------- The servers were restarted 2 times Spent 1.572 of 14 seconds executing testcases Completed: All 3 tests were successful.

And then a binary tarball was built for further testing.

shell> make package Run CPack packaging tool... CPack: Create package using TGZ CPack: Install projects CPack: - Run preinstall target for: MariaDB CPack: - Install project: MariaDB [] CPack: Create package CPack: - package: /root/mariadb-MDEV-33782/mariadb-11.5.0-linux-x86_64.tar.gz generated.
Taxonomy upgrade extras: mariadb, build, compiling, sources, tarball

MaxScale configuration synchronisation

Shinguz - Thu, 2024-04-04 09:53
Table of contents
Overview

A feature that I recently discovered while browsing is the MaxScale configuration synchronisation functionality.

This is not primarily about a MariaDB replication cluster or a MariaDB Galera cluster, but about a cluster consisting of two or more MaxScale nodes. Or more precisely, the exchange of the configuration between these MaxScale nodes.

Pon Suresh Pandian has already written a blog article about this feature in 2022, which is even more detailed than this post here.

Preparations

An LXD container environment was prepared, consisting of 3 database containers (deb12-n1 (10.139.158.33), deb12-n2 (10.139.158.178), deb12-n3 (10.139.158.39)) and 2 MaxScale containers (deb12-mxs1 (10.139.158.66), deb12-mxs2 (10.139.158.174)). The database version is a MariaDB 10.11.6 from the Debian repository and MaxScale was downloaded in version 22.08.5 from the MariaDB plc website.

The database configuration looks similar for all 3 nodes:

# # /etc/mysql/mariadb.conf.d/99-fromdual.cnf # [server] server_id = 1 log_bin = deb12-n1-binlog binlog_format = row bind_address = * proxy_protocol_networks = ::1, 10.139.158.0/24, localhost gtid_strict_mode = on log_slave_updates = on skip_name_resolve = on

The MaxScale nodes were built as described in the article Sharding with MariaDB MaxScale.

The maxscale_admin user has exactly the same rights as described there; the maxscale_monitor user has the following rights:

RELOAD, SUPER, REPLICATION SLAVE, READ_ONLY ADMIN

See also here: Required Grants.
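
Expressed as SQL, the grants for this monitor user could look roughly as follows (a sketch; host pattern and password are placeholders in the style used elsewhere in this article, and READ_ONLY ADMIN requires MariaDB 10.5 or newer):

SQL> CREATE USER 'maxscale_monitor'@'%' IDENTIFIED BY 'secret';
SQL> GRANT RELOAD, SUPER, REPLICATION SLAVE, READ_ONLY ADMIN ON *.* TO 'maxscale_monitor'@'%';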

The MaxScale start configuration looks like this:

# # /etc/maxscale.cnf # [maxscale] threads = auto admin_gui = false [deb12-n1] type = server address = 10.139.158.33 port = 3306 proxy_protocol = true [deb12-n2] type = server address = 10.139.158.178 port = 3306 proxy_protocol = true [Replication-Monitor] type = monitor module = mariadbmon servers = deb12-n1,deb12-n2 user = maxscale_monitor password = secret monitor_interval = 500ms auto_failover = true auto_rejoin = true enforce_read_only_slaves = true replication_user = replication replication_password = secret cooperative_monitoring_locks = majority_of_running [WriteListener] type = listener service = WriteService port = 3306 [WriteService] type = service router = readwritesplit servers = deb12-n1,deb12-n2 user = maxscale_admin password = secret transaction_replay = true transaction_replay_timeout = 30s

Important: The configuration should look the same on all MaxScale nodes!

And then a few more checks were done to be sure that everything is correct:

shell> maxctrl list listeners ┌───────────────┬──────┬──────┬─────────┬──────────────┐ │ Name │ Port │ Host │ State │ Service │ ├───────────────┼──────┼──────┼─────────┼──────────────┤ │ WriteListener │ 3306 │ :: │ Running │ WriteService │ └───────────────┴──────┴──────┴─────────┴──────────────┘ shell> maxctrl list services ┌──────────────┬────────────────┬─────────────┬───────────────────┬────────────────────┐ │ Service │ Router │ Connections │ Total Connections │ Targets │ ├──────────────┼────────────────┼─────────────┼───────────────────┼────────────────────┤ │ WriteService │ readwritesplit │ 0 │ 0 │ deb12-n1, deb12-n2 │ └──────────────┴────────────────┴─────────────┴───────────────────┴────────────────────┘ shell> maxctrl list servers ┌──────────┬────────────────┬──────┬─────────────┬─────────────────┬────────┬─────────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┼─────────────────────┤ │ deb12-n1 │ 10.139.158.33 │ 3306 │ 0 │ Master, Running │ 0-1-19 │ Replication-Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┼─────────────────────┤ │ deb12-n2 │ 10.139.158.178 │ 3306 │ 0 │ Slave, Running │ 0-1-19 │ Replication-Monitor │ └──────────┴────────────────┴──────┴─────────────┴─────────────────┴────────┴─────────────────────┘ SQL> SELECT @@hostname, test.* FROM test.test; +------------+----+-----------+---------------------+ | @@hostname | id | data | ts | +------------+----+-----------+---------------------+ | deb12-n2 | 1 | Some data | 2024-03-26 09:40:21 | +------------+----+-----------+---------------------+ SQL> SELECT @@hostname, test.* FROM test.test FOR UPDATE; +------------+----+-----------+---------------------+ | @@hostname | id | data | ts | +------------+----+-----------+---------------------+ | deb12-n1 | 1 | Some data | 2024-03-26 09:40:21 | +------------+----+-----------+---------------------+

And another test whether MaxScale really executes the failover correctly:

shell> systemctl stop mariadb 2024-03-26 16:27:05 error : Monitor was unable to connect to server deb12-n2[10.139.158.178:3306] : 'Can't connect to server on '10.139.158.178' (115)' 2024-03-26 16:27:05 notice : Server changed state: deb12-n2[10.139.158.178:3306]: master_down. [Master, Running] -> [Down] 2024-03-26 16:27:05 warning: [mariadbmon] Primary has failed. If primary does not return in 4 monitor tick(s), failover begins. 2024-03-26 16:27:07 notice : [mariadbmon] Selecting a server to promote and replace 'deb12-n2'. Candidates are: 'deb12-n1'. 2024-03-26 16:27:07 notice : [mariadbmon] Selected 'deb12-n1'. 2024-03-26 16:27:07 notice : [mariadbmon] Performing automatic failover to replace failed primary 'deb12-n2'. 2024-03-26 16:27:07 notice : [mariadbmon] Failover 'deb12-n2' -> 'deb12-n1' performed. 2024-03-26 16:27:07 notice : Server changed state: deb12-n1[10.139.158.33:3306]: new_master. [Slave, Running] -> [Master, Running] shell> systemctl start mariadb 2024-03-26 16:28:03 notice : Server changed state: deb12-n2[10.139.158.178:3306]: server_up. [Down] -> [Running] 2024-03-26 16:28:03 notice : [mariadbmon] Directing standalone server 'deb12-n2' to replicate from 'deb12-n1'. 2024-03-26 16:28:03 notice : [mariadbmon] Replica connection from deb12-n2 to [10.139.158.33]:3306 created and started. 2024-03-26 16:28:03 notice : [mariadbmon] 1 server(s) redirected or rejoined the cluster. 2024-03-26 16:28:03 notice : Server changed state: deb12-n2[10.139.158.178:3306]: new_slave. [Running] -> [Slave, Running]

Which MaxScale node is currently responsible for monitoring and failover (cooperative_monitoring_locks) can be determined as follows:

shell> maxctrl show monitor Replication-Monitor | grep -e 'Diagnostics' -e '"primary"' -e 'lock_held' | uniq │ Monitor Diagnostics │ { │ │ │ "primary": true, │ │ │ "lock_held": true, │

It should be ensured that everything works properly up to this point. Otherwise there is no real point in the next steps.

Activate MaxScale configuration synchronisation

A separate database user with the following rights is required for configuration synchronisation:

SQL> CREATE USER 'maxscale_confsync'@'%' IDENTIFIED BY 'secret'; SQL> GRANT SELECT, INSERT, UPDATE, CREATE ON `mysql`.`maxscale_config` TO maxscale_confsync@'%';

MaxScale must then be configured accordingly (on both MaxScale nodes) so that configuration synchronisation is activated. This configuration takes place in the global MaxScale section:

# # /etc/maxscale.cnf # [maxscale] config_sync_cluster = Replication-Monitor config_sync_user = maxscale_confsync config_sync_password = secret

The MaxScale nodes are then restarted:

shell> systemctl restart maxscale

MaxScale configuration synchronisation can also be activated and deactivated dynamically:

shell> maxctrl show maxscale | grep config_sync │ │ "config_sync_cluster": null, │ │ │ "config_sync_db": "mysql", │ │ │ "config_sync_interval": "5000ms", │ │ │ "config_sync_password": null, │ │ │ "config_sync_timeout": "10000ms", │ │ │ "config_sync_user": null, │

Here it is important to keep to the correct order of the 3 commands, otherwise there will be an error:

shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_user='maxscale_confsync' shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_password='secret' shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_cluster='Replication-Monitor'
Change MaxScale parameters

As a first test, we focussed on the MaxScale monitor variable monitor_interval, which in this case even differs between the two MaxScale nodes:

shell> maxctrl show monitor Replication-Monitor | grep monitor_interval │ │ "monitor_interval": "750ms", shell> maxctrl show monitor Replication-Monitor | grep monitor_interval │ │ "monitor_interval": "1000ms",

The variable can now be set on a MaxScale node with the alter monitor command:

shell> MAXCTRL_WARNINGS=0 maxctrl alter monitor Replication-Monitor monitor_interval=500ms OK

which can be seen in the MaxScale error log:

2024-03-26 14:09:16 notice : (ConfigManager); Updating to configuration version 1

The value should then be propagated to the second MaxScale node within 5 seconds (config_sync_interval), which can be checked with the command above.

Add a new slave and make it known to MaxScale

A new slave (deb12-n3) is first created and added to the MariaDB replication cluster by hand. The slave is then made known to a MaxScale node:

shell> maxctrl create server deb12-n3 10.139.158.39 shell> MAXCTRL_WARNINGS=0 maxctrl link monitor Replication-Monitor deb12-n3 OK shell> MAXCTRL_WARNINGS=0 maxctrl link service WriteService deb12-n3 OK shell> maxctrl list servers ┌──────────┬────────────────┬──────┬─────────────┬─────────────────┬────────────┬─────────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n1 │ 10.139.158.33 │ 3306 │ 3 │ Slave, Running │ 0-2-479618 │ Replication-Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n2 │ 10.139.158.178 │ 3306 │ 3 │ Master, Running │ 0-2-479618 │ Replication-Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n3 │ 10.139.158.39 │ 3306 │ 1 │ Slave, Running │ 0-2-479618 │ Replication-Monitor │ └──────────┴────────────────┴──────┴─────────────┴─────────────────┴────────────┴─────────────────────┘
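
For completeness, the "by hand" part mentioned above could look roughly like this on deb12-n3 (a sketch using the replication credentials from the MaxScale configuration above; the GTID position is just an example and must be taken from the backup the new slave was provisioned from):

SQL> SET GLOBAL gtid_slave_pos = '0-2-479618';   -- example value, take it from the backup
SQL> CHANGE MASTER TO MASTER_HOST='10.139.158.178', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_USE_GTID=slave_pos;
SQL> START SLAVE;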
Remove an old slave and make this known to MaxScale

Before a slave can be deleted, it should first be removed from the replication cluster on one of the MaxScale nodes:

shell> maxctrl destroy server deb12-n1 --force OK shell> maxctrl list servers ┌──────────┬────────────────┬──────┬─────────────┬─────────────────┬────────────┬─────────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n2 │ 10.139.158.178 │ 3306 │ 3 │ Master, Running │ 0-2-493034 │ Replication-Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n3 │ 10.139.158.39 │ 3306 │ 1 │ Slave, Running │ 0-2-493032 │ Replication-Monitor │ └──────────┴────────────────┴──────┴─────────────┴─────────────────┴────────────┴─────────────────────┘

The slave can then be removed.

How is the configuration synchronised?

The configuration of the two MaxScale nodes is synchronised via the database, which I personally consider to be an unfortunate design decision, as a configuration change could potentially cause chaos if the master breaks or network problems occur between the database nodes...

The configuration is stored in the table mysql.maxscale_config, which looks like this:

CREATE TABLE `maxscale_config` ( `cluster` varchar(256) NOT NULL, `version` bigint(20) NOT NULL, `config` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL CHECK (json_valid(`config`)), `origin` varchar(254) NOT NULL, `nodes` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL CHECK (json_valid(`nodes`)), PRIMARY KEY (`cluster`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci

This table has approximately the following content:

SQL> SELECT cluster, version, CONCAT(SUBSTR(config, 1, 32), ' ... ', SUBSTR(config, -32)) AS config , origin, nodes FROM mysql.maxscale_config; +---------------------+---------+-----------------------------------------------------------------------+------------+------------------------------------------+ | cluster | version | config | origin | nodes | +---------------------+---------+-----------------------------------------------------------------------+------------+------------------------------------------+ | Replication-Monitor | 2 | {"config":[{"id":"deb12-n1","typ ... ter_name":"Replication-Monitor"} | deb12-mxs1 | {"deb12-mxs1": "OK", "deb12-mxs2": "OK"} | +---------------------+---------+-----------------------------------------------------------------------+------------+------------------------------------------+

A local copy is kept on each MaxScale node as a fallback:

shell> cut -b-32 /var/lib/maxscale/maxscale-config.json {"config":[{"id":"deb12-n2","typ
What happens in the event of a conflict?

See also: Error Handling in Configuration Synchronization

If the configuration is changed simultaneously (within config_sync_interval?) on two different MaxScale nodes, we receive the following error message:

Error: Server at http://127.0.0.1:8989 responded with 400 Bad Request to `PATCH monitors/Replication-Monitor` { "errors": [ { "detail": "Cannot start configuration change: Configuration conflict detected: version stored in the cluster (3) is not the same as the local version (2), MaxScale is out of sync." } ] }

The following command may help to recognise the problem in the event of major faults:

shell> maxctrl show maxscale | grep -A9 'Config Sync' │ Config Sync │ { │ │ │ "checksum": "0052fe6f775168bf00778abbe37775f6f642adc7", │ │ │ "nodes": { │ │ │ "deb12-mxs1": "OK", │ │ │ "deb12-mxs2": "OK" │ │ │ }, │ │ │ "origin": "deb12-mxs2", │ │ │ "status": "OK", │ │ │ "version": 3 │ │ │ } │
Tests

All tests were also carried out under load. The following tests ran in parallel:

  • insert_test.php
  • insert_test.sh
  • mixed_test.php
  • while [ true ] ; do mariadb -s --user=app --host=10.139.158.174 --port=3306 --password=secret --execute='SELECT @@hostname, COUNT(*) FROM test.test GROUP BY @@hostname' ; sleep 0.5 ; done
  • while [ true ] ; do mariadb -s --user=app --host=10.139.158.174 --port=3306 --password=secret --execute='SELECT @@hostname, COUNT(*) FROM test.test GROUP BY @@hostname FOR UPDATE' ; sleep 0.5 ; done

All tests ran flawlessly and without problems throughout all of these manipulations.

Deactivate MaxScale configuration synchronisation again

Execute the following command on both MaxScale nodes to end configuration synchronisation:

shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_cluster=''
Taxonomy upgrade extras: maxscale, configuration, cluster, load balancer

MaxScale configuration synchronisation

Shinguz - Thu, 2024-04-04 09:53
Table of contents
Overview

A feature that I recently discovered while browsing is the MaxScale configuration synchronisation functionality.

This is not primarily about a MariaDB replication cluster or a MariaDB Galera cluster, but about a cluster consisting of two or more MaxScale nodes. Or more precisely, the exchange of the configuration between these MaxScale nodes.

Pon Suresh Pandian has already written a blog article about this feature in 2022, which is even more detailed than this post here.

Preparations

An LXD container environment was prepared, consisting of 3 database containers (deb12-n1 (10.139.158.33), deb12-n2 (10.139.158.178), deb12-n3 (10.139.158.39)) and 2 MaxScale containers (deb12-mxs1 (10.139.158.66), deb12-mxs2 (10.139.158.174)). The database version is a MariaDB 10.11.6 from the Debian repository and MaxScale was downloaded in version 22.08.5 from the MariaDB plc website.

The database configuration looks similar for all 3 nodes:

# # /etc/mysql/mariadb.conf.d/99-fromdual.cnf # [server] server_id = 1 log_bin = deb12-n1-binlog binlog_format = row bind_address = * proxy_protocol_networks = ::1, 10.139.158.0/24, localhost gtid_strict_mode = on log_slave_updates = on skip_name_resolve = on

The MaxScale nodes were built as described in the article Sharding with MariaDB MaxScale.

The maxscale_admin user has exactly the same rights as described there, the maxscale_monitor user has the following rights:

RELOAD, SUPER, REPLICATION SLAVE, READ_ONLY ADMIN

See also here: Required Grants.

The MaxScale start configuration looks like this:

# # /etc/maxscale.cnf # [maxscale] threads = auto admin_gui = false [deb12-n1] type = server address = 10.139.158.33 port = 3306 proxy_protocol = true [deb12-n2] type = server address = 10.139.158.178 port = 3306 proxy_protocol = true [Replication-Monitor] type = monitor module = mariadbmon servers = deb12-n1,deb12-n2 user = maxscale_monitor password = secret monitor_interval = 500ms auto_failover = true auto_rejoin = true enforce_read_only_slaves = true replication_user = replication replication_password = secret cooperative_monitoring_locks = majority_of_running [WriteListener] type = listener service = WriteService port = 3306 [WriteService] type = service router = readwritesplit servers = deb12-n1,deb12-n2 user = maxscale_admin password = secret transaction_replay = true transaction_replay_timeout = 30s

Important: The configuration should look the same on all MaxScale nodes!

And then a few more checks were done to be sure that everything is correct:

shell> maxctrl list listeners ┌───────────────┬──────┬──────┬─────────┬──────────────┐ │ Name │ Port │ Host │ State │ Service │ ├───────────────┼──────┼──────┼─────────┼──────────────┤ │ WriteListener │ 3306 │ :: │ Running │ WriteService │ └───────────────┴──────┴──────┴─────────┴──────────────┘ shell> maxctrl list services ┌──────────────┬────────────────┬─────────────┬───────────────────┬────────────────────┐ │ Service │ Router │ Connections │ Total Connections │ Targets │ ├──────────────┼────────────────┼─────────────┼───────────────────┼────────────────────┤ │ WriteService │ readwritesplit │ 0 │ 0 │ deb12-n1, deb12-n2 │ └──────────────┴────────────────┴─────────────┴───────────────────┴────────────────────┘ shell> maxctrl list servers ┌──────────┬────────────────┬──────┬─────────────┬─────────────────┬────────┬─────────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┼─────────────────────┤ │ deb12-n1 │ 10.139.158.33 │ 3306 │ 0 │ Master, Running │ 0-1-19 │ Replication-Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┼─────────────────────┤ │ deb12-n2 │ 10.139.158.178 │ 3306 │ 0 │ Slave, Running │ 0-1-19 │ Replication-Monitor │ └──────────┴────────────────┴──────┴─────────────┴─────────────────┴────────┴─────────────────────┘ SQL> SELECT @@hostname, test.* FROM test.test; +------------+----+-----------+---------------------+ | @@hostname | id | data | ts | +------------+----+-----------+---------------------+ | deb12-n2 | 1 | Some data | 2024-03-26 09:40:21 | +------------+----+-----------+---------------------+ SQL> SELECT @@hostname, test.* FROM test.test FOR UPDATE; +------------+----+-----------+---------------------+ | @@hostname | id | data | ts | +------------+----+-----------+---------------------+ | deb12-n1 | 1 | Some data | 2024-03-26 09:40:21 | +------------+----+-----------+---------------------+

And another test whether MaxScale really executes the failover correctly:

shell> systemctl stop mariadb 2024-03-26 16:27:05 error : Monitor was unable to connect to server deb12-n2[10.139.158.178:3306] : 'Can't connect to server on '10.139.158.178' (115)' 2024-03-26 16:27:05 notice : Server changed state: deb12-n2[10.139.158.178:3306]: master_down. [Master, Running] -> [Down] 2024-03-26 16:27:05 warning: [mariadbmon] Primary has failed. If primary does not return in 4 monitor tick(s), failover begins. 2024-03-26 16:27:07 notice : [mariadbmon] Selecting a server to promote and replace 'deb12-n2'. Candidates are: 'deb12-n1'. 2024-03-26 16:27:07 notice : [mariadbmon] Selected 'deb12-n1'. 2024-03-26 16:27:07 notice : [mariadbmon] Performing automatic failover to replace failed primary 'deb12-n2'. 2024-03-26 16:27:07 notice : [mariadbmon] Failover 'deb12-n2' -> 'deb12-n1' performed. 2024-03-26 16:27:07 notice : Server changed state: deb12-n1[10.139.158.33:3306]: new_master. [Slave, Running] -> [Master, Running] shell> systemctl start mariadb 2024-03-26 16:28:03 notice : Server changed state: deb12-n2[10.139.158.178:3306]: server_up. [Down] -> [Running] 2024-03-26 16:28:03 notice : [mariadbmon] Directing standalone server 'deb12-n2' to replicate from 'deb12-n1'. 2024-03-26 16:28:03 notice : [mariadbmon] Replica connection from deb12-n2 to [10.139.158.33]:3306 created and started. 2024-03-26 16:28:03 notice : [mariadbmon] 1 server(s) redirected or rejoined the cluster. 2024-03-26 16:28:03 notice : Server changed state: deb12-n2[10.139.158.178:3306]: new_slave. [Running] -> [Slave, Running]

Which MaxScale node is currently responsible for monitoring and failover (cooperatve_monitoring) can be determined as follows:

shell> maxctrl show monitor Replication-Monitor | grep -e 'Diagnostics' -e '"primary"' -e 'lock_held' | uniq │ Monitor Diagnostics │ { │ │ │ "primary": true, │ │ │ "lock_held": true, │

It should be ensured that everything works properly up to this point. Otherwise there is no real point in the next steps.

Activate MaxScale configuration synchronisation

A separate database user with the following rights is required for configuration synchronisation:

SQL> CREATE USER 'maxscale_confsync'@'%' IDENTIFIED BY 'secret'; SQL> GRANT SELECT, INSERT, UPDATE, CREATE ON `mysql`.`maxscale_config` TO maxscale_confsync@'%';

MaxScale must then be configured accordingly (on both MaxScale nodes) so that configuration synchronisation is activated. This configuration takes place in the global MaxScale section:

# # /etc/maxscale.cnf # [maxscale] config_sync_cluster = Replication-Monitor config_sync_user = maxscale_confsync config_sync_password = secret

The MaxScale nodes are then restarted:

shell> systemctl restart maxscale

MaxScale configuration synchronisation can also be activated and deactivated dynamically:

shell> maxctrl show maxscale | grep config_sync │ │ "config_sync_cluster": null, │ │ │ "config_sync_db": "mysql", │ │ │ "config_sync_interval": "5000ms", │ │ │ "config_sync_password": null, │ │ │ "config_sync_timeout": "10000ms", │ │ │ "config_sync_user": null, │

Here it is important to keep to the correct order of the 3 commands, otherwise there will be an error:

shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_user='maxscale_confsync' shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_password='secret' shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_cluster='Replication-Monitor'
Change MaxScale parameters

As a first test, we have focussed on the MaxScale monitor variable monitor_interval, which in this case is even different on both MaxScale nodes:

shell> maxctrl show monitor Replication-Monitor | grep monitor_interval │ │ "monitor_interval": "750ms", shell> maxctrl show monitor Replication-Monitor | grep monitor_interval │ │ "monitor_interval": "1000ms",

The variable can now be set on a MaxScale node with the alter monitor command:

shell> MAXCTRL_WARNINGS=0 maxctrl alter monitor Replication-Monitor monitor_interval=500ms OK

which can be seen in the MaxScale error log:

2024-03-26 14:09:16 notice : (ConfigManager); Updating to configuration version 1

On the other hand, the value should be propagated to the second MaxScale node within 5 seconds (config_sync_interval), which can be checked with the above command.

Add new slave and make MaxScale known

A new slave (deb12-n3) is first created and added to the MariaDB replication cluster by hand. The slave is then made known to a MaxScale node:

shell> maxctrl create server deb12-n3 10.139.158.39 shell> MAXCTRL_WARNINGS=0 maxctrl link monitor Replication-Monitor deb12-n3 OK shell> MAXCTRL_WARNINGS=0 maxctrl link service WriteService deb12-n3 OK shell> maxctrl list servers ┌──────────┬────────────────┬──────┬─────────────┬─────────────────┬────────────┬─────────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n1 │ 10.139.158.33 │ 3306 │ 3 │ Slave, Running │ 0-2-479618 │ Replication-Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n2 │ 10.139.158.178 │ 3306 │ 3 │ Master, Running │ 0-2-479618 │ Replication-Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n3 │ 10.139.158.39 │ 3306 │ 1 │ Slave, Running │ 0-2-479618 │ Replication-Monitor │ └──────────┴────────────────┴──────┴─────────────┴─────────────────┴────────────┴─────────────────────┘
Remove old slave and make MaxScale known

Before a slave can be deleted, it should be removed from the replication cluster for a MaxScale node:

shell> maxctrl destroy server deb12-n1 --force OK shell> maxctrl list servers ┌──────────┬────────────────┬──────┬─────────────┬─────────────────┬────────────┬─────────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n2 │ 10.139.158.178 │ 3306 │ 3 │ Master, Running │ 0-2-493034 │ Replication-Monitor │ ├──────────┼────────────────┼──────┼─────────────┼─────────────────┼────────────┼─────────────────────┤ │ deb12-n3 │ 10.139.158.39 │ 3306 │ 1 │ Slave, Running │ 0-2-493032 │ Replication-Monitor │ └──────────┴────────────────┴──────┴─────────────┴─────────────────┴────────────┴─────────────────────┘

The slave can then be removed.

How is the configuration synchronised?

The configuration of the two MaxScale nodes is synchronised via the database, which I personally consider to be an unfortunate design decision, as a configuration change could potentially cause chaos if the master breaks or network problems occur between the database nodes...

The configuration is stored in the table mysql.maxscale_config, which looks like this:

CREATE TABLE `maxscale_config` ( `cluster` varchar(256) NOT NULL, `version` bigint(20) NOT NULL, `config` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL CHECK (json_valid(`config`)), `origin` varchar(254) NOT NULL, `nodes` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL CHECK (json_valid(`nodes`)), PRIMARY KEY (`cluster`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci

This table has approximately the following content:

SQL> SELECT cluster, version, CONCAT(SUBSTR(config, 1, 32), ' ... ', SUBSTR(config, -32)) AS config , origin, nodes FROM mysql.maxscale_config; +---------------------+---------+-----------------------------------------------------------------------+------------+------------------------------------------+ | cluster | version | config | origin | nodes | +---------------------+---------+-----------------------------------------------------------------------+------------+------------------------------------------+ | Replication-Monitor | 2 | {"config":[{"id":"deb12-n1","typ ... ter_name":"Replication-Monitor"} | deb12-mxs1 | {"deb12-mxs1": "OK", "deb12-mxs2": "OK"} | +---------------------+---------+-----------------------------------------------------------------------+------------+------------------------------------------+

A local copy is available on each node for security reasons:

shell> cut -b-32 /var/lib/maxscale/maxscale-config.json {"config":[{"id":"deb12-n2","typ
What happens in the event of a conflict?

See also: Error Handling in Configuration Synchronization

If the configuration is changed simultaneously (within config_sync_interval?) on two different MaxScale nodes, we receive the following error message:

Error: Server at http://127.0.0.1:8989 responded with 400 Bad Request to `PATCH monitors/Replication-Monitor` { "errors": [ { "detail": "Cannot start configuration change: Configuration conflict detected: version stored in the cluster (3) is not the same as the local version (2), MaxScale is out of sync." } ] }

The following command may help to recognise the problem in the event of major faults:

shell> maxctrl show maxscale | grep -A9 'Config Sync' │ Config Sync │ { │ │ │ "checksum": "0052fe6f775168bf00778abbe37775f6f642adc7", │ │ │ "nodes": { │ │ │ "deb12-mxs1": "OK", │ │ │ "deb12-mxs2": "OK" │ │ │ }, │ │ │ "origin": "deb12-mxs2", │ │ │ "status": "OK", │ │ │ "version": 3 │ │ │ } │
Tests

All tests were also carried out under load. The following tests ran in parallel:

  • insert_test.php
  • insert_test.sh
  • mixed_test.php
  • while [ true ] ; do mariadb -s --user=app --host=10.139.158.174 --port=3306 --password=secret --execute='SELECT @@hostname, COUNT(*) FROM test.test GROUP BY @@hostname' ; sleep 0.5 ; done
  • while [ true ] ; do mariadb -s --user=app --host=10.139.158.174 --port=3306 --password=secret --execute='SELECT @@hostname, COUNT(*) FROM test.test GROUP BY @@hostname FOR UPDATE' ; sleep 0.5 ; done

All tests ran flawlessly and without problems during all of the manipulations described above.
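For reference, a minimal sketch of what a script like insert_test.sh could look like (this is NOT the original test script; the columns of the test.test table are an assumption):

#!/bin/bash
#
# insert_test.sh -- minimal sketch, not the original test script
# Assumption: test.test has the columns data and ts
#
while true ; do
  mariadb --user=app --host=10.139.158.174 --port=3306 --password=secret \
    --execute="INSERT INTO test.test (data, ts) VALUES ('insert_test.sh', NOW())"
  sleep 0.01
done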

Deactivate MaxScale configuration synchronisation again

Execute the following command on both MaxScale nodes to end configuration synchronisation:

shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_cluster=''
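If the configuration synchronisation is to be switched on again later, config_sync_cluster simply has to be pointed back at the monitor (here Replication-Monitor, as used above):

shell> MAXCTRL_WARNINGS=0 maxctrl alter maxscale config_sync_cluster=Replication-Monitor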
Literature/sources

Sharding with MariaDB MaxScale

Shinguz - Tue, 2024-03-19 17:02
Overview

This feature should more or less work with MariaDB MaxScale 6.x.y, 22.08.x, 23.02.x, 23.08.x and 24.02.x. We have tested it with the latest MaxScale version 23.08.05, as we encountered problems with an older version (MXS-5026).

shell> maxscale --version MaxScale 23.08.5

We used MariaDB 10.11 as the database backend (shards).

Less than approx. 2% of all MariaDB installations known to us are what we technically understand as multi-tenant systems (each customer in its own database, also called a schema).

This MariaDB MaxScale feature is therefore used relatively rarely and there is an increased risk of encountering bugs that no-one has come across before!

This feature is called SchemaRouter in MariaDB MaxScale and is still declared as beta quality (MXS-5025):

maxctrl> show module schemarouter ┌─────────────┬────────────────────────────────────────────────┐ │ Module │ schemarouter │ ├─────────────┼────────────────────────────────────────────────┤ │ Type │ Router │ ├─────────────┼────────────────────────────────────────────────┤ │ Version │ V1.0.0 │ ├─────────────┼────────────────────────────────────────────────┤ │ Maturity │ Beta │ ├─────────────┼────────────────────────────────────────────────┤ │ Description │ A database sharding router for simple sharding │ ├─────────────┼────────────────────────────────────────────────┤ │ ...

The target topology should look like this: Each customer (client, tenant) is located in its own database (= schema). The databases are distributed across several MariaDB instances (shards). So that the application can access the database transparently, a pair of MaxScale load balancers is connected in front of it, which knows where the customer is located and forwards the traffic to the shard accordingly. To ensure that the MaxScale load balancers are designed for high availability, a virtual IP (VIP) is connected upstream, e.g. using Keepalived. If this is still too simple for you, you can design each individual shard as a master/slave or Galera cluster construct...


Preparation of the shards (MariaDB database instances)

The first problem we had with this PoC was with the test database. After deleting the test database on all shards, the problem disappeared. Alternatively, you can run mariadb-secure-installation, which you should do on production systems anyway, or you can use the MaxScale configuration parameters ignore_tables or ignore_tables_regex to allow identically named tables in different shards (MXS-5027).

See also: MaxScale Router Parameters.
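A minimal sketch of both variants (the shard ports 3363 to 3365 and the credentials are those used in this PoC; the service name Sharded-Service is the one configured further below):

# Variant 1: drop the test database on every shard
for port in 3363 3364 3365 ; do
  mariadb --user=root --password=secret --host=10.139.158.1 --port=${port} \
    --execute='DROP DATABASE IF EXISTS test'
done

# Variant 2: let the SchemaRouter ignore the test schema
# (the regex is an assumption, check the SchemaRouter documentation)
MAXCTRL_WARNINGS=0 maxctrl alter service Sharded-Service ignore_tables_regex='^test'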

Create test data

So that we have something to play with, we have created test data:

-- On shard 1: 2 customers SQL> CREATE DATABASE customer_0010; SQL> CREATE TABLE customer_0010.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0010.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0010.address VALUES (1, 'Customer 10 GmbH'); SQL> INSERT INTO customer_0010.sales VALUES (1, 'Apples', 5, 1.2, 6), (2, 'Pears', 2, 0.9, 1.8), (3, 'Bread', 1, 2.5, 2.5); SQL> CREATE DATABASE customer_0011; SQL> CREATE TABLE customer_0011.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0011.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0011.address VALUES (1, 'Customer 11 SE'); SQL> INSERT INTO customer_0011.sales VALUES (1, 'Oranges', 2, 1.7, 3.4), (2, 'Salad', 5, 1.2, 6); -- On shard 2: 3 customers SQL> CREATE DATABASE customer_0020; SQL> CREATE TABLE customer_0020.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0020.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0020.address VALUES (1, 'Customer 20 AG'); SQL> INSERT INTO customer_0020.sales VALUES (1, 'Oranges', 2, 1.7, 3.4), (2, 'Salad', 5, 1.2, 6); SQL> CREATE DATABASE customer_0021; SQL> CREATE TABLE customer_0021.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0021.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0021.address VALUES (1, 'Customer 21 GmbH'); SQL> INSERT INTO customer_0021.sales VALUES (1, 'Oranges', 2, 1.7, 3.4), (2, 'Salad', 5, 1.2, 6); SQL> CREATE DATABASE customer_0022; SQL> CREATE TABLE customer_0022.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0022.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0022.address VALUES (1, 'Customer 22 Gebr.'); SQL> INSERT INTO customer_0022.sales VALUES (1, 'Oranges', 2, 1.7, 3.4), (2, 'Salad', 5, 1.2, 6); -- On shard 3: 1 customer SQL> CREATE DATABASE customer_0030; SQL> CREATE TABLE customer_0030.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0030.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0030.address VALUES (1, 'Customer 30 GmbH'); SQL> INSERT INTO customer_0030.sales VALUES (1, 'Pickles', 2, 2.2, 4.4), (2, 'Salad', 1, 3.1, 3.1), (3, 'Pudding', 5, 2.2, 11.0), (4, 'Asparagus', 12, .3, 3.6);
Create roles and users

Since in a sharded system, in contrast to a Galera cluster for example, the individual database instances do not know anything about each other and do not communicate with each other, we have to create the roles and users or accounts individually on EACH shard.

MariaDB MaxScale needs a user for the SchemaRouter service and the monitor (on each shard).

As the name suggests, the monitor user is responsible for monitoring and the SchemaRouter service user is responsible for collecting the user account information from the sharding backends and forwarding the queries to the correct shard.

Since a redundant system typically works with at least two MaxScale routers and we wanted to prevent the privileges of the accounts from diverging, we work with roles for both the MaxScale users and the application users.

MaxScale Monitor User SQL> CREATE ROLE maxscale_monitor_role; SQL> GRANT SELECT ON mysql.user TO 'maxscale_monitor_role'; SQL> GRANT REPLICATION CLIENT ON *.* TO 'maxscale_monitor_role'; SQL> GRANT SLAVE MONITOR ON *.* TO 'maxscale_monitor_role'; SQL> GRANT FILE ON *.* TO 'maxscale_monitor_role'; SQL> GRANT CONNECTION ADMIN ON *.* TO 'maxscale_monitor_role'; SQL> SHOW GRANTS FOR maxscale_monitor_role; +-----------------------------------------------------------------------------------------------+ | Grants for maxscale_monitor_role | +-----------------------------------------------------------------------------------------------+ | GRANT FILE, BINLOG MONITOR, CONNECTION ADMIN, SLAVE MONITOR ON *.* TO `maxscale_monitor_role` | | GRANT SELECT ON `mysql`.`user` TO `maxscale_monitor_role` | +-----------------------------------------------------------------------------------------------+ SQL> CREATE USER maxscale_monitor@'10.139.158.210' IDENTIFIED BY 'secret'; SQL> CREATE USER maxscale_monitor@'10.139.158.211' IDENTIFIED BY 'secret'; SQL> GRANT maxscale_monitor_role TO maxscale_monitor@'10.139.158.210'; SQL> GRANT maxscale_monitor_role TO maxscale_monitor@'10.139.158.211'; SQL> SET DEFAULT ROLE maxscale_monitor_role FOR maxscale_monitor@'10.139.158.210'; SQL> SET DEFAULT ROLE maxscale_monitor_role FOR maxscale_monitor@'10.139.158.211'; SQL> SELECT user, host, is_role, default_role FROM mysql.user WHERE user LIKE 'maxscale_monitor%'; +-----------------------+----------------+---------+-----------------------+ | User | Host | is_role | default_role | +-----------------------+----------------+---------+-----------------------+ | maxscale_monitor_role | | Y | | | maxscale_monitor | 10.139.158.210 | N | maxscale_monitor_role | | maxscale_monitor | 10.139.158.211 | N | maxscale_monitor_role | +-----------------------+----------------+---------+-----------------------+ SQL> SHOW GRANTS FOR maxscale_monitor@'10.139.158.211'; +------------------------------------------------------------------------------------------------------------------------------+ | Grants for maxscale_monitor@10.139.158.211 | +------------------------------------------------------------------------------------------------------------------------------+ | GRANT `maxscale_monitor_role` TO `maxscale_monitor`@`10.139.158.211` | | GRANT USAGE ON *.* TO `maxscale_monitor`@`10.139.158.211` IDENTIFIED BY PASSWORD '*14E65567ABDB5135D0CFD9A70B3032C179A49EE7' | | SET DEFAULT ROLE `maxscale_monitor_role` FOR `maxscale_monitor`@`10.139.158.211` | +------------------------------------------------------------------------------------------------------------------------------+
MaxScale Admin User SQL> CREATE ROLE maxscale_admin_role; SQL> GRANT SHOW DATABASES ON *.* TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.user TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.db TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.tables_priv TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.columns_priv TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.proxies_priv TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.roles_mapping TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.procs_priv TO 'maxscale_admin_role'; SQL> SHOW GRANTS FOR maxscale_admin_role; +------------------------------------------------------------------+ | Grants for maxscale_admin_role | +------------------------------------------------------------------+ | GRANT USAGE ON *.* TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`user` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`roles_mapping` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`tables_priv` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`procs_priv` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`db` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`columns_priv` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`proxies_priv` TO `maxscale_admin_role` | +------------------------------------------------------------------+ SQL> CREATE USER maxscale_admin@'10.139.158.210' IDENTIFIED BY 'secret'; SQL> CREATE USER maxscale_admin@'10.139.158.211' IDENTIFIED BY 'secret'; SQL> GRANT maxscale_admin_role TO maxscale_admin@'10.139.158.210'; SQL> GRANT maxscale_admin_role TO maxscale_admin@'10.139.158.211'; SQL> SET DEFAULT ROLE maxscale_admin_role FOR maxscale_admin@'10.139.158.210'; SQL> SET DEFAULT ROLE maxscale_admin_role FOR maxscale_admin@'10.139.158.211'; SQL> SELECT user, host, is_role, default_role FROM mysql.user WHERE user LIKE 'maxscale_admin%'; +---------------------+----------------+---------+---------------------+ | User | Host | is_role | default_role | +---------------------+----------------+---------+---------------------+ | maxscale_admin_role | | Y | | | maxscale_admin | 10.139.158.210 | N | maxscale_admin_role | | maxscale_admin | 10.139.158.211 | N | maxscale_admin_role | +---------------------+----------------+---------+---------------------+ SQL> SHOW GRANTS FOR maxscale_admin@'10.139.158.211'; +----------------------------------------------------------------------------------------------------------------------------+ | Grants for maxscale_admin@10.139.158.211 | +----------------------------------------------------------------------------------------------------------------------------+ | GRANT `maxscale_admin_role` TO `maxscale_admin`@`10.139.158.211` | | GRANT USAGE ON *.* TO `maxscale_admin`@`10.139.158.211` IDENTIFIED BY PASSWORD '*14E65567ABDB5135D0CFD9A70B3032C179A49EE7' | | SET DEFAULT ROLE `maxscale_admin_role` FOR `maxscale_admin`@`10.139.158.211` | +----------------------------------------------------------------------------------------------------------------------------+

See also: SchemaRouter Configuration

Create application role and accounts

The application also requires a user, which we again create on every shard as follows:

SQL> CREATE ROLE app_role; SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON `customer_%`.* TO 'app_role'; SQL> GRANT SHOW DATABASES ON *.* TO 'app_role'; SQL> GRANT CREATE, DROP, ALTER ON *.* TO 'app_role'; -- For creating new tenant databases SQL> SHOW GRANTS FOR app_role; +----------------------------------------------------------------------+ | Grants for app_role | +----------------------------------------------------------------------+ | GRANT SHOW DATABASES ON *.* TO `app_role` | | GRANT SELECT, INSERT, UPDATE, DELETE ON `customer_%`.* TO `app_role` | +----------------------------------------------------------------------+ SQL> CREATE USER app@'10.139.158.%' IDENTIFIED BY 'secret'; SQL> GRANT app_role TO app@'10.139.158.%'; SQL> SET DEFAULT ROLE app_role FOR app@'10.139.158.%'; SQL> SELECT user, host, is_role, default_role FROM mysql.user WHERE user LIKE 'app%'; +----------+--------------+---------+--------------+ | User | Host | is_role | default_role | +----------+--------------+---------+--------------+ | app_role | | Y | | | app | 10.139.158.% | N | app_role | +----------+--------------+---------+--------------+ SQL> SHOW GRANTS FOR app@'10.139.158.%'; +---------------------------------------------------------------------------------------------------------------+ | Grants for app@10.139.158.% | +---------------------------------------------------------------------------------------------------------------+ | GRANT `app_role` TO `app`@`10.139.158.%` | | GRANT USAGE ON *.* TO `app`@`10.139.158.%` IDENTIFIED BY PASSWORD '*14E65567ABDB5135D0CFD9A70B3032C179A49EE7' | | SET DEFAULT ROLE `app_role` FOR `app`@`10.139.158.%` | +---------------------------------------------------------------------------------------------------------------+
Proxy protocol

Load balancers and proxies have the property that they replace the clients' IP addresses with their own IP address. On the one hand, this means that in the database you can no longer see where the client originally came from and, on the other hand, access authorisations can no longer be assigned per user and IP, as it is always the IP of the load balancer that is checked.

These two problems can be solved using the proxy protocol.

To do this, both the database and the load balancer, in this case MaxScale, must have the proxy protocol activated.

On the database side, the proxy protocol is activated as follows:

# # my.cnf # [mariadbd] proxy_protocol_networks = ::1, 10.139.158.0/24, localhost

and on the MaxScale side with:

# # /etc/maxscale.cnf # [shard] type = server proxy_protocol = true

You can check the two settings with:

SQL> SHOW GLOBAL VARIABLES LIKE 'proxy%'; +-------------------------+---------------------------------+ | Variable_name | Value | +-------------------------+---------------------------------+ | proxy_protocol_networks | ::1, 10.139.158.0/24, localhost | +-------------------------+---------------------------------+ shell> maxctrl show server shard1 | grep proxy │ │ "proxy_protocol": true, │
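To verify end to end that the original client IP really arrives at the shard, you can, as a sketch, connect through MaxScale and look at the host column of the processlist on the backend; with the proxy protocol active it should show the client's address rather than the MaxScale address:

shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 \
  --execute="SELECT id, user, host, db FROM information_schema.PROCESSLIST WHERE user = 'app'"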

Sources:


MaxScale SchemaRouter configuration

Next, we prepare the MaxScale configuration for sharding. The file recommended by MariaDB is /etc/maxscale.cnf. Whether it makes more sense to create a separate configuration file under /etc/maxscale.cnf.d/ or even to configure the entire MaxScale dynamically (/var/lib/maxscale/maxscale.cnf.d/*.cnf) remains to be seen in the long term. See also warnings below. The configuration file for this sharding PoC looks like this:

# # /etc/maxscale.cnf # [maxscale] threads = auto admin_gui = false [shard1] type = server address = 10.139.158.1 port = 3363 proxy_protocol = true [shard2] type=server address=10.139.158.1 port=3364 proxy_protocol = true [shard3] type = server address = 10.139.158.1 port = 3365 proxy_protocol = true [sharding monitor] type = monitor module = galeramon servers = shard1,shard2,shard3 user = maxscale_monitor password = secret monitor_interval = 1s [Sharded-Service-Listener] type = listener service = Sharded-Service protocol = MariaDBClient port = 3306 [Sharded-Service] type = service router = schemarouter servers = shard1,shard2,shard3 user = maxscale_admin password = secret auth_all_servers = true

Note: Recommendation of the MaxScale developer: "One workaround might be to actually use galeramon to monitor the nodes instead of mariadbmon."

Starting and stopping the MaxScale Load Balancer

MaxScale is started and stopped as usual via SystemD:

shell> systemctl restart maxscale shell> systemctl status maxscale ● maxscale.service - MariaDB MaxScale Database Proxy Loaded: loaded (/lib/systemd/system/maxscale.service; enabled; vendor preset: enabled) Drop-In: /run/systemd/system/service.d └─zzz-lxc-service.conf Active: active (running) since Tue 2024-02-27 09:52:57 UTC; 39s ago Process: 187 ExecStart=/usr/bin/maxscale (code=exited, status=0/SUCCESS) Main PID: 188 (maxscale) Tasks: 10 (limit: 18663) Memory: 4.6M CPU: 150ms CGroup: /system.slice/maxscale.service └─188 /usr/bin/maxscale systemd[1]: Starting MariaDB MaxScale Database Proxy... maxscale[188]: Module 'galeramon' loaded from '/usr/lib/x86_64-linux-gnu/maxscale/libgaleramon.so'. maxscale[188]: Module 'schemarouter' loaded from '/usr/lib/x86_64-linux-gnu/maxscale/libschemarouter.so'. maxscale[188]: Using up to 2.3GiB of memory for query classifier cache systemd[1]: Started MariaDB MaxScale Database Proxy.

If there were errors or warnings, you can see them in the MaxScale error log:

shell> grep -v notice /var/log/maxscale/maxscale.log 2024-02-13 16:47:22 MariaDB MaxScale is shut down. ---------------------------------------------------- MariaDB MaxScale /var/log/maxscale/maxscale.log Tue Feb 13 16:47:22 2024 ---------------------------------------------------------------------------- 2024-02-27 09:52:56 warning: Discarding journal file '/var/lib/maxscale/Sharding-Monitor_journal.json'. File is for module 'mariadbmon'. Current module is 'galeramon'. 2024-02-27 09:52:56 warning: [galeramon] Invalid 'wsrep_local_index' on server 'shard1': 18446744073709551615
Application tests
Simple application tests

shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --execute='show databases' +--------------------+ | Database | +--------------------+ | customer_0010 | | customer_0011 | | customer_0020 | | customer_0021 | | customer_0022 | | customer_0030 | | information_schema | | mysql | | performance_schema | | sys | +--------------------+
New command show shards

shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 customer_0030 --execute='show shards' | grep customer_00.* | sort | column -t customer_0010.address shard1 customer_0010.sales shard1 customer_0010. shard1 customer_0011.address shard1 customer_0011.sales shard1 customer_0011. shard1 customer_0020.address shard2 customer_0020.sales shard2 customer_0020. shard2 customer_0021.address shard2 customer_0021.sales shard2 customer_0021. shard2 customer_0022.address shard2 customer_0022.sales shard2 customer_0022. shard2 customer_0030.address shard3 customer_0030.sales shard3 customer_0030. shard3

New databases are not displayed immediately, but only once the cached data has been refreshed (refresh_interval, default 300 s / 5 min).

See also: Custom SQL commands
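If you do not want to wait for the refresh_interval, the database map cache can also be invalidated manually (described in more detail further below):

shell> maxctrl call command schemarouter invalidate Sharded-Service
OK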

More general test

As a reminder:

Shard   Port   Customer           State
#1      3363   customer_001<n>    Running
#2      3364   customer_002<n>    Running
#3      3365   customer_003<n>    Running
#4      3366   customer_004<n>    Running
shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3363 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --database=customer_0010 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3363 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --database=customer_0020 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3364 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --execute='use customer_0020; SELECT @@port' +--------+ | @@port | +--------+ | 3364 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 customer_0010 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3363 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 customer_0020 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3364 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 customer_0030 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3365 | +--------+
Less simple (backup) test shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction customer_0010 > /tmp/customer_0010.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction customer_0020 > /tmp/customer_0020.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction customer_0030 > /tmp/customer_0030.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction --databases customer_0011 > /tmp/customer_0011.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction --databases customer_0021 > /tmp/customer_0021.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction --databases customer_0030 > /tmp/customer_0030.sql shell> echo $? 0 shell> ll /tmp/customer_00*sql -rw-rw-r-- 1 oli oli 2738 Mar 18 12:07 /tmp/customer_0010.sql -rw-rw-r-- 1 oli oli 2904 Mar 18 12:08 /tmp/customer_0011.sql -rw-rw-r-- 1 oli oli 2712 Mar 18 12:08 /tmp/customer_0020.sql -rw-rw-r-- 1 oli oli 2906 Mar 18 12:08 /tmp/customer_0021.sql -rw-rw-r-- 1 oli oli 2964 Mar 18 12:08 /tmp/customer_0030.sql shell> tail -n 1 /tmp/customer_*.sql ==> /tmp/customer_0010.sql <== -- Dump completed on 2024-02-13 14:39:21 ==> /tmp/customer_0011.sql <== -- Dump completed on 2024-02-13 14:39:35 ==> /tmp/customer_0020.sql <== -- Dump completed on 2024-02-13 14:40:15 ==> /tmp/customer_0021.sql <== -- Dump completed on 2024-02-13 14:40:42 ==> /tmp/customer_0030.sql <== -- Dump completed on 2024-02-13 14:40:52 shell> cat /tmp/customer_00*sql | grep -A1 -i insert INSERT INTO `address` VALUES (1,'Customer 10 GmbH'); -- INSERT INTO `sales` VALUES (1,'Apples',5,1.20,6.00), -- INSERT INTO `address` VALUES (1,'Customer 11 SE'); -- INSERT INTO `sales` VALUES (1,'Oranges',2,1.70,3.40), -- INSERT INTO `address` VALUES (1,'Customer 20 AG'); -- INSERT INTO `sales` VALUES (1,'Oranges',2,1.70,3.40), -- INSERT INTO `address` VALUES (1,'Customer 21 GmbH'); -- INSERT INTO `sales` VALUES (1,'Oranges',2,1.70,3.40), -- INSERT INTO `address` VALUES (1,'Customer 30 GmbH'); -- INSERT INTO `sales` VALUES (1,'Pickles',2,2.20,4.40),

In MaxScale 23.08.4 there was a pretty bad bug: A return value of 0 but no data in the backup!!! See also the tickets: MXS-4966: mariadb-dump gets an error dumping schemas and MXS-4947: Tables in information_schema are treated as a normal tables. Symptoms of the bug look like this:

Error: Couldn't read status information for table address () Error: Couldn't read status information for table sales ()

We therefore strongly recommend upgrading to MaxScale 23.08.5!
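Because of this bug, relying on the exit code alone is not enough. A cheap plausibility check is to make sure that the dumps actually contain data, for example by counting the INSERT statements (a sketch):

shell> for f in /tmp/customer_00*.sql ; do echo -n "${f}: " ; grep -c -i 'INSERT INTO' ${f} ; done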

More complex application tests

We have created a somewhat more complex test (./sharding_test.php) that processes the following queries:

SET NAMES utf8mb4 SHOW DATABASES use customer_ START TRANSACTION; SELECT MIN(id) AS first, MAX(id) AS last FROM `sales` INSERT INTO sales (id, product, sales, amount, total_amount) VALUES (%d, '%s', %f, %f, %f) INSERT INTO sales (id, product, sales, sales, amount, total_amount) VALUES (%d, '%s', %f, %f, %f) UPDATE sales SET product = 'Prepare to delete' WHERE id = %d DELETE FROM sales WHERE id = %d COMMIT

This test ran flawlessly. The corresponding control query:

SQL> SELECT * FROM customer_0021.sales WHERE id >= (SELECT MAX(id) - 10 FROM customer_0021.sales);

Various load scenarios can also be tested with db_bench or the Acronis perfkit. For more information, see here.

Cross-shard tests

Sooner or later you will probably come up with the idea of running cross-shard queries. This will NOT work, which should not really come as a surprise: firstly, because it is not easy to implement, and secondly, because it is documented here:

"Note: As the sharding solution in MaxScale is relatively simple, cross-database queries between two or more shards are not supported."

Source: Simple Sharding with Two Servers

and

"USE db1 is routed to the server with db1. If the database is divided to multiple servers, only one server will get the command."

Source: SchemaRouter.

Here is a test with UNION:

SQL> use customer_0030 Database changed SQL> SELECT * FROM customer_0020.sales UNION SELECT * FROM customer_0030.sales; ERROR 1146 (42S02): Table 'customer_0020.sales' doesn't exist

And here is the counter-check from the other direction:

SQL> use customer_0020 Database changed SQL> SELECT * FROM customer_0020.sales UNION SELECT * FROM customer_0030.sales; ERROR 1146 (42S02): Table 'customer_0030.sales' doesn't exist

And here is the test with JOIN:

SQL> use customer_0020 SQL> SELECT * FROM customer_0020.sales a JOIN customer_0030.sales b ON a.id = b.id WHERE a.sales > 1 ; ERROR 1146 (42S02): Table 'customer_0030.sales' doesn't exist SQL> use customer_0030 SQL> SELECT * FROM customer_0020.sales a JOIN customer_0030.sales b ON a.id = b.id WHERE a.sales > 1 ; ERROR 1146 (42S02): Table 'customer_0020.sales' doesn't exist
Operation of a MaxScale sharding system

In this chapter we discuss some points that can be useful for the operation of a MariaDB MaxScale sharding system.

Do-on-all-shards

Since it can always happen that O/S or database operations have to be executed on all shards, it would certainly make sense to create a script that executes the same command on all shards in turn:

shell> ./do-on-all-shards.sh --sql='SHOW DATABASES'

A script of this type should greatly reduce the error rate during operation. Operations such as the re-sharding of a tenant, as described below, are also best scripted and executed centrally (MXS-5029).
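Such a script does not have to be complicated. A minimal sketch (NOT an official or finished script; the shard list, user and password are the ones used in this PoC) could look like this:

#!/bin/bash
#
# do-on-all-shards.sh -- minimal sketch
#
SHARDS="10.139.158.1:3363 10.139.158.1:3364 10.139.158.1:3365 10.139.158.1:3366"
USER=app
PASSWORD=secret

# accept the --sql='...' syntax from the example call above
SQL="${1#--sql=}"

for shard in ${SHARDS} ; do
  host=${shard%:*}
  port=${shard#*:}
  echo "### ${shard}"
  mariadb --user=${USER} --password=${PASSWORD} --host=${host} --port=${port} --execute="${SQL}"
done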

Invalidating the database map cache

The invalidate command can be used to invalidate the database map cache of the MariaDB MaxScale SchemaRouter. This allows us to quickly update the cache after adding or removing tenants.

shell> maxctrl call command schemarouter invalidate Sharded-Service OK

In contrast to the invalidate command, which updates the entries after the next refresh_interval, the clear command deletes the entries and a remap is executed immediately.
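The corresponding clear call, which is used again further below when removing and moving tenants, looks like this:

shell> maxctrl call command schemarouter clear Sharded-Service
OK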

If you want to invalidate the database map cache remotely with a REST API call, you can do this as follows:

shell> curl -i -X POST -u api_admin:secret http://10.139.158.211:8989/v1/maxscale/modules/schemarouter/clear?Sharded-Service HTTP/1.1 204 No Content Connection: close Date: Mon, 18 Mar 24 11:49:58 GMT X-Frame-Options: Deny X-XSS-Protection: 1 Referrer-Policy: same-origin Cache-Control: no-cache

Sources:


How to change SchemaRouter variables dynamically?

The refresh_interval specifies the lifetime of the entries in the SchemaRouter database map cache. The default value is 300 s (5 min). Refresh interval is therefore, in my opinion, an unfortunate term, as it does not define the interval between two mappings but the lifetime of the cache entries (lifetime? timeout?). As soon as an entry has been deleted, a new refresh of the "database map" is triggered on each shard. The query currently looks like this:

SELECT LOWER(t.table_schema), LOWER(t.table_name) FROM information_schema.tables t UNION ALL SELECT LOWER(s.schema_name), '' FROM information_schema.schemata s

It looks like a simple connect is enough to trigger the refresh of the database map.

The current value for refresh_interval can be queried as follows:

shell> maxctrl show service Sharded-Service | grep refresh_interval | awk -F'│' '{ print $3 }' "refresh_interval": "300000ms",

The following command helps to change the value dynamically:

shell> MAXCTRL_WARNINGS=0 maxctrl alter service Sharded-Service refresh_interval=10s OK

The value should not be set too small, as all other connections are stopped during the mapping process.

Sources:


Adding and removing a tenant

Adding a new tenant to a shard is not a major problem:

SQL> CREATE DATABASE customer_0029; SQL> use customer_0029 SQL> CREATE TABLE address LIKE customer_template.address; SQL> CREATE TABLE sales LIKE customer_template.sales; shell> maxctrl call command schemarouter invalidate Sharded-Service OK

Removing a tenant from a shard, on the other hand, is somewhat more complicated and must be done in consultation with the application:

SQL> DROP DATABASE customer_0011; shell> ./sharding_test.php .....ERROR: Table 'customer_0011.sales' doesn't exist...ERROR: Unknown database 'customer_0011'.ERROR: Unknown database 'customer_0011'......ERROR: Unknown database 'customer_0011'... shell> maxctrl call command schemarouter clear Sharded-Service OK

At least I have not come up with a cleverer variant yet. See also Moving a tenant below.

Moving a tenant

The combination of adding and removing would then be moving a tenant from one shard to another shard, also known as re-sharding. This also requires a concerted action to be planned together with the application.

If this is not possible, at least the time during which the application receives errors can be reduced... The following procedure can be used to move a tenant from shard 2 to shard 3:

SQL> use customer_0020; LOCK TABLES address READ, sales READ; -- On Shard 2, application will be blocked at best! shell> mariadb-dump --user=app --password=secret --host=10.139.158.1 --port=3364 --single-transaction --skip-add-locks --databases customer_0020 | mariadb --user=app --password=secret --host=10.139.158.1 --port=3365 # Copy tenant 20 from shard 2 to shard 3 SQL> DROP DATABASE customer_0020; -- Deleting tenant 20 does not work! ERROR 1192 (HY000): Can't execute the given command because you have active locked tables or an active transaction SQL> UNLOCK TABLES; DROP DATABASE customer_0020; # How to delete tenant 20. shell> maxctrl call command schemarouter clear Sharded-Service # Update MaxScale Database Map. Do it quickly!!!

Until the database map is refreshed, the following errors may occur:

error : (47621) [schemarouter] (Sharded-Service); 'customer_0020.' found on servers 'shard2','shard3' for user 'app'@'10.139.158.1'. error : (47621) [schemarouter] (Sharded-Service); 'customer_0020.address' found on servers 'shard2','shard3' for user 'app'@'10.139.158.1'. error : (47621) [schemarouter] (Sharded-Service); 'customer_0020.sales' found on servers 'shard2','shard3' for user 'app'@'10.139.158.1'. error : (47621) [schemarouter] (Sharded-Service); Duplicate tables found, closing session.

And on the application side too:

ERROR: Error: duplicate tables found on two different shards
Adding or removing a shard

Moving a tenant from one shard to another shard is a small re-sharding operation. It becomes somewhat more complex if you want to add new shards or remove old shards. Subsequently (after the addition or before the removal), a large re-sharding would then take place. The first step is to add a shard to the cluster:

shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬───────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 0 │ Running │ 0-3363-26014 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 0 │ Running │ 0-3364-240612 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 0 │ Running │ 0-3365-289873 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴───────────────┴──────────────────┘

The prepared shard is made known to MaxScale:

shell> maxctrl create server shard4 10.139.158.1 3366 OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 1 │ Running │ 0-3363-23676 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-52321 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-39751 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 0 │ Down │ │ │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

The new shard is then linked to the MaxScale Monitor and the service:

shell> MAXCTRL_WARNINGS=0 maxctrl link monitor Sharding monitor shard4 OK shell> MAXCTRL_WARNINGS=0 maxctrl link service Sharded service shard4 OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 1 │ Running │ 0-3363-24961 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-56215 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-45177 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-32 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

Whether this second step is also absolutely necessary was not investigated.

You can follow the entire process in the MariaDB MaxScale error log:

warning: Discarding journal file '/var/lib/maxscale/Sharding-Monitor_journal.json'. Servers described in the journal are different from the ones configured on the current monitor. warning: Saving runtime modifications to 'Sharding-Monitor' in '/var/lib/maxscale/maxscale.cnf.d/Sharding-Monitor.cnf'. The modified values will override the values found in the static configuration files. notice : shard4 sent version string '10.11.7-MariaDB-log'. Detected type: MariaDB, version: 10.11.7. notice : Server 'shard4' charset: latin1_swedish_ci notice : Server changed state: shard4[10.139.158.1:3366]: server_up. [Down] -> [Running] warning: Saving runtime modifications to 'Sharded-Service' in '/var/lib/maxscale/maxscale.cnf.d/Sharded-Service.cnf'. The modified values will override the values found in the static configuration files. notice : Added 'shard4' to 'Sharded-Service'

What we must not forget here is to also equip the new shard with the proxy protocol:

shell> maxctrl show server shard4 | grep proxy │ │ "proxy_protocol": false, │ MAXCTRL_WARNINGS=0 maxctrl alter server shard4 proxy_protocol=true OK

And now new tenants can be added to the new shard or old tenants can be moved to the new shard... In our setup, we want to move all tenants from shard 1 to shard 4 and also create a new tenant customer_0040 on shard 4. The individual steps required for this are listed above.
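Scripted, the whole move from shard 1 to shard 4 plus the new tenant could look roughly like this (a sketch only, without the LOCK TABLES step shown above; tenant names, ports and credentials are those of this PoC):

# Move all tenants from shard 1 (port 3363) to shard 4 (port 3366)
for db in customer_0010 customer_0011 ; do
  mariadb-dump --user=app --password=secret --host=10.139.158.1 --port=3363 \
    --single-transaction --skip-add-locks --databases ${db} | \
    mariadb --user=app --password=secret --host=10.139.158.1 --port=3366
  mariadb --user=app --password=secret --host=10.139.158.1 --port=3363 --execute="DROP DATABASE ${db}"
done

# Create the new tenant customer_0040 on shard 4
mariadb --user=app --password=secret --host=10.139.158.1 --port=3366 --execute="CREATE DATABASE customer_0040"

# Update the MaxScale database map immediately
maxctrl call command schemarouter clear Sharded-Service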

Once shard 1 has been emptied, it can be dismantled:

shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 1 │ Running │ 0-3363-25916 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-62887 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-54035 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-2247 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

A shard is deleted with the destroy server command. Before this works, however, a shard must be removed from the monitor and the service:

shell> MAXCTRL_WARNINGS=0 maxctrl unlink service Sharded service shard1 OK shell> MAXCTRL_WARNINGS=0 maxctrl unlink monitor Sharding monitor shard1 OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 0 │ Running │ │ │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-64394 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-56072 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-3267 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

Once the shard has been removed from the monitor and the service, it can then be deleted:

shell> maxctrl destroy server shard1 Warning: Object 'shard1' is defined in a static configuration file and cannot be permanently deleted. If MaxScale is restarted, the object will appear again. To hide these warnings, run: export MAXCTRL_WARNINGS=0 OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-65018 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-56886 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-3648 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

And you can follow the changes in the MaxScale Error Log:

notice : Removed 'shard1' from 'Sharded-Service' warning: Discarding journal file '/var/lib/maxscale/Sharding-Monitor_journal.json'. Servers described in the journal are different from the ones configured on the current monitor. notice : Destroyed server 'shard1' at 10.139.158.1:3363

Important: I was informed that with destroy server --force the unlink service and unlink monitor commands are automatically executed by MaxScale.

Source:


Customising the configuration files

During the shard operations described above we received some warnings:

Warning: Object 'shard1' is defined in a static configuration file and cannot be permanently deleted. If MaxScale is restarted, the object will appear again.

and

Warning: Saving runtime modifications to 'Sharding-Monitor' in '/var/lib/maxscale/maxscale.cnf.d/Sharding-Monitor.cnf'. The modified values will override the values found in the static configuration files.

The corresponding configuration files are automatically created by MaxScale when dynamic system changes are made:

shell> ll /var/lib/maxscale/maxscale.cnf.d/ /etc/maxscale.cnf -rw-r--r-- 1 root root 612 Feb 13 14:23 /etc/maxscale.cnf /var/lib/maxscale/maxscale.cnf.d/: total 12 -rw------- 1 maxscale maxscale 187 Feb 13 16:08 Sharding-Monitor.cnf -rw------- 1 maxscale maxscale 150 Feb 13 16:07 Sharded-Service.cnf -rw------- 1 maxscale maxscale 52 Feb 13 15:46 shard4.cnf cat /var/lib/maxscale/maxscale.cnf.d/* [Sharded-Service] debug=true refresh_interval=10000ms auth_all_servers=true log_debug=true password=secret router=schemarouter type=service user=maxscale_admin targets=shard2,shard3,shard4 [sharding monitor] module=galeramon monitor_interval=1000ms password=secret servers=shard2,shard3,shard4 type=monitor user=maxscale_monitor [shard4] address=10.139.158.1 port=3366 type=server

The configuration files still need to be consolidated accordingly. In a highly dynamic system, you should generally consider whether it would not be better to configure everything dynamically via commands in the first place...

Maintenance work on the shard

If a shard is to be taken offline for maintenance work, here in the example shard2, this can be done as follows:

shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-69817 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-63166 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-6902 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘ shell> maxctrl set server shard2 drain OK shell> maxctrl set server shard2 maintenance OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬──────────────────────┬───────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼──────────────────────┼───────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 0 │ Maintenance, Running │ 0-3364-240612 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼──────────────────────┼───────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 0 │ Running │ 0-3365-289873 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼──────────────────────┼───────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 0 │ Running │ 0-3366-119848 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴──────────────────────┴───────────────┴──────────────────┘

At this point, maintenance work can be carried out on the machine or the database...

Afterwards, BOTH statuses must be cleared again if both have been set (MXS-5028):

shell> maxctrl clear server shard2 maintenance OK shell> maxctrl clear server shard2 drain OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬───────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 0 │ Running │ 0-3364-240612 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 0 │ Running │ 0-3365-289873 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 0 │ Running │ 0-3366-119848 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴───────────────┴──────────────────┘

The difference between drain and maintenance is that with drain no new connections to the shard are allowed, while existing connections are allowed to finish. With maintenance, existing connections are terminated immediately and by force.

Observation of a MariaDB MaxScale sharding system

The MaxScale CLI client maxctrl can be used to query the status of the MariaDB MaxScale load balancer. There are numerous commands for this, mainly list and show:

shell> maxctrl show module schemarouter | head -n 12 ┌─────────────┬────────────────────────────────────────────────┐ │ Module │ schemarouter │ ├─────────────┼────────────────────────────────────────────────┤ │ Type │ Router │ ├─────────────┼────────────────────────────────────────────────┤ │ Version │ V1.0.0 │ ├─────────────┼────────────────────────────────────────────────┤ │ Maturity │ Beta │ ├─────────────┼────────────────────────────────────────────────┤ │ Description │ A database sharding router for simple sharding │ ├─────────────┼────────────────────────────────────────────────┤ │ Parameters │ ... │ shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬───────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 4 │ Running │ 0-3364-290859 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 4 │ Running │ 0-3365-322671 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 4 │ Running │ 0-3366-140018 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴───────────────┴──────────────────┘

The information for the Connections column is confusing because in this case we only have 1, 1 and 2 connections open on each shard in this sharding system.

However, if you look at the situation on the respective shard with SHOW PROCESSLIST, you can see that MaxScale also establishes a connection on EACH shard for each incoming connection. So the display above is actually technically correct, just not what you would expect:

SQL> SHOW PROCESSLIST; +--------+------------------+----------------------+---------------+---------+------+----------+-----------------------------------------------------------------+----------+ | Id | User | Host | db | Command | Time | State | Info | Progress | +--------+------------------+----------------------+---------------+---------+------+----------+-----------------------------------------------------------------+----------+ | 123 | root | localhost | customer_0021 | Query | 0 | starting | show processlist | 0.000 | | 68107 | maxscale_monitor | 10.139.158.211:35418 | NULL | Sleep | 0 | | NULL | 0.000 | | 113372 | app | 10.139.158.1:47548 | NULL | Sleep | 47 | | NULL | 0.000 | | 113538 | app | 10.139.158.1:49058 | NULL | Sleep | 41 | | NULL | 0.000 | | 113662 | app | 10.139.158.1:47072 | NULL | Sleep | 37 | | NULL | 0.000 | | 114789 | app | 10.139.158.1:39574 | customer_0022 | Query | 0 | Updating | UPDATE sales SET product = 'Prepare to delete' WHERE id = 15622 | 0.000 | +--------+------------------+----------------------+---------------+---------+------+----------+-----------------------------------------------------------------+----------+

This does not scale for large systems with hundreds or thousands of clients! Perhaps the MariaDB thread pool feature can help in this case.

According to the MaxScale developer, this is expected behaviour... (MXS-4977)
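If the number of backend connections really does become a problem, the MariaDB thread pool could be enabled on the shards. A sketch (the pool size is an assumption and must be tuned to the host):

#
# my.cnf
#
[mariadbd]
thread_handling  = pool-of-threads
thread_pool_size = 8   # assumption: roughly the number of cores on the shard host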

shell> maxctrl list services ┌─────────────────┬──────────────┬─────────────┬───────────────────┬────────────────────────┐ │ Service │ Router │ Connections │ Total Connections │ Targets │ ├─────────────────┼──────────────┼─────────────┼───────────────────┼────────────────────────┤ │ Sharded-Service │ schemarouter │ 4 │ 82776 │ shard2, shard3, shard4 │ └─────────────────┴──────────────┴─────────────┴───────────────────┴────────────────────────┘ shell> maxctrl list listeners ┌──────────────────────────┬──────┬──────┬─────────┬─────────────────┐ │ Name │ Port │ Host │ State │ Service │ ├──────────────────────────┼──────┼──────┼─────────┼─────────────────┤ │ Sharded-Service-Listener │ 3306 │ :: │ Running │ Sharded-Service │ └──────────────────────────┴──────┴──────┴─────────┴─────────────────┘ shell> maxctrl list monitors ┌──────────────────┬─────────┬────────────────────────┐ │ Monitor │ State │ Servers │ ├──────────────────┼─────────┼────────────────────────┤ │ Sharding-Monitor │ Running │ shard2, shard3, shard4 │ └──────────────────┴─────────┴────────────────────────┘ shell> maxctrl show server shard2 | head -n 20 ┌─────────────────────┬──────────────────────────────────────────────┐ │ Server │ shard2 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Source │ /etc/maxscale.cnf │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Address │ 10.139.158.1 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Port │ 3364 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ State │ Running │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Version │ 10.11.7-MariaDB-log │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Uptime │ 178960 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Last Event │ server_down │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Triggered At │ Sun, 04 Feb 2024 07:37:17 GMT │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Services │ Sharded-Service │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Monitors │ Sharding-Monitor │ ├─────────────────────┼──────────────────────────────────────────────┤ ... 
├─────────────────────┼──────────────────────────────────────────────┤ │ Current Connections │ 5 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Total Connections │ 27 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Max Connections │ 5 │ shell> maxctrl show service Sharded-Service ┌─────────────────────┬──────────────────────────────────────────────────────┐ │ Service │ Sharded-Service │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Source │ /var/lib/maxscale/maxscale.cnf.d/Sharded-Service.cnf │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Router │ schemarouter │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ State │ Started │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Started At │ 3/18/2024, 1:52:30 PM │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Users Loaded At │ 3/18/2024, 1:52:30 PM │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Current Connections │ 4 │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Total Connections │ 84590 │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Max Connections │ 5 │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Cluster │ │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Servers │ shard2 │ │ │ shard3 │ │ │ shard4 │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Services │ │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Filters │ │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Parameters │ { │ │ │ "auth_all_servers": true, │ │ │ "connection_keepalive": "300000ms", │ │ │ "debug": true, │ │ │ "disable_sescmd_history": false, │ │ │ "enable_root_user": false, │ │ │ "force_connection_keepalive": false, │ │ │ "idle_session_pool_time": "-1ms", │ │ │ "ignore_tables": [], │ │ │ "ignore_tables_regex": null, │ │ │ "localhost_match_wildcard_host": true, │ │ │ "log_auth_warnings": true, │ │ │ "log_debug": true, │ │ │ "log_info": false, │ │ │ "log_notice": false, │ │ │ "log_warning": false, │ │ │ "max_connections": 0, │ │ │ "max_sescmd_history": 50, │ │ │ "max_staleness": "150000ms", │ │ │ "multiplex_timeout": "60000ms", │ │ │ "net_write_timeout": "0ms", │ │ │ "password": "*****", │ │ │ "prune_sescmd_history": true, │ │ │ "rank": "primary", │ │ │ "refresh_databases": false, │ │ │ "refresh_interval": "10000ms", │ │ │ "retain_last_statements": -1, │ │ │ "router": "schemarouter", │ │ │ "session_trace": false, │ │ │ "strip_db_esc": true, │ │ │ "type": "service", │ │ │ "user": "maxscale_admin", │ │ │ "user_accounts_file": null, │ │ │ "user_accounts_file_usage": "add_when_load_ok", │ │ │ "version_string": null, │ │ │ "wait_timeout": "0ms" │ │ │ } │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Router Diagnostics │ { │ │ │ "average_session": 0.028822357131634554, │ │ │ "longest_sescmd_chain": 4, │ │ │ "longest_session": 50, │ │ │ "queries": 761134, │ │ │ "sescmd_percentage": 44.44342257736483, │ │ │ "shard_map_hits": 84356, │ │ │ "shard_map_misses": 5, │ │ │ "shard_map_stale": 229, │ │ │ "shard_map_updates": 216, │ │ │ "shortest_session": 0, │ │ │ "times_sescmd_limit_exceeded": 0 │ │ │ } │ 
└─────────────────────┴──────────────────────────────────────────────────────┘

See also MaxScale SchemaRouter Router diagnostics.

shell> maxctrl show monitor Sharding-Monitor ┌─────────────────────┬──────────────────────────────────────────────────────────┐ │ Monitor │ Sharding-Monitor │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Source │ /etc/maxscale.cnf │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Module │ galeramon │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ State │ Running │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Servers │ shard1 │ │ │ shard2 │ │ │ shard3 │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Parameters │ { │ │ │ "available_when_donor": false, │ │ │ "backend_connect_attempts": 1, │ │ │ "backend_connect_timeout": "3000ms", │ │ │ "backend_read_timeout": "3000ms", │ │ │ "backend_write_timeout": "3000ms", │ │ │ "disable_master_failback": false, │ │ │ "disable_master_role_setting": false, │ │ │ "disk_space_check_interval": "0ms", │ │ │ "disk_space_threshold": null, │ │ │ "events": "all,master_down,master_up,...,new_donor", │ │ │ "journal_max_age": "28800000ms", │ │ │ "module": "galeramon", │ │ │ "monitor_interval": "1000ms", │ │ │ "password": "*****", │ │ │ "root_node_as_master": false, │ │ │ "script": null, │ │ │ "script_timeout": "90000ms", │ │ │ "set_donor_nodes": false, │ │ │ "type": "monitor", │ │ │ "use_priority": false, │ │ │ "user": "maxscale_monitor" │ │ │ } │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Monitor Diagnostics │ { │ │ │ "disable_master_failback": false, │ │ │ "disable_master_role_setting": false, │ │ │ "root_node_as_master": false, │ │ │ "server_info": [ │ │ │ { │ │ │ "gtid_binlog_pos": "0-3363-26014", │ │ │ "gtid_current_pos": "0-3363-26014", │ │ │ "master_id": 0, │ │ │ "name": "shard1", │ │ │ "read_only": false, │ │ │ "server_id": 3363 │ │ │ }, │ │ │ { │ │ │ "gtid_binlog_pos": "0-3364-240612", │ │ │ "gtid_current_pos": "0-3364-240612", │ │ │ "master_id": 0, │ │ │ "name": "shard2", │ │ │ "read_only": false, │ │ │ "server_id": 3364 │ │ │ }, │ │ │ { │ │ │ "gtid_binlog_pos": "0-3365-289873", │ │ │ "gtid_current_pos": "0-3365-289873", │ │ │ "master_id": 0, │ │ │ "name": "shard3", │ │ │ "read_only": false, │ │ │ "server_id": 3365 │ │ │ } │ │ │ ], │ │ │ "set_donor_nodes": false, │ │ │ "use_priority": false │ │ │ } │ └─────────────────────┴──────────────────────────────────────────────────────────┘ shell> maxctrl list sessions; ┌───────┬──────┬──────────────┬───────────────────────┬───────┬─────────────────┬────────┬──────────────┐ │ Id │ User │ Host │ Connected │ Idle │ Service │ Memory │ I/O-Activity │ ├───────┼──────┼──────────────┼───────────────────────┼───────┼─────────────────┼────────┼──────────────┤ │ 87240 │ app │ 10.139.158.1 │ 3/18/2024, 2:33:54 PM │ 0 │ Sharded-Service │ 68644 │ 33 │ ├───────┼──────┼──────────────┼───────────────────────┼───────┼─────────────────┼────────┼──────────────┤ │ 72654 │ app │ 10.139.158.1 │ 3/18/2024, 2:25:27 PM │ 506.3 │ Sharded-Service │ 199328 │ 0 │ ├───────┼──────┼──────────────┼───────────────────────┼───────┼─────────────────┼────────┼──────────────┤ │ 72364 │ app │ 10.139.158.1 │ 3/18/2024, 2:25:18 PM │ 516 │ Sharded-Service │ 199328 │ 0 │ ├───────┼──────┼──────────────┼───────────────────────┼───────┼─────────────────┼────────┼──────────────┤ │ 72530 │ app │ 10.139.158.1 │ 3/18/2024, 2:25:23 PM │ 510.5 │ Sharded-Service │ 199328 │ 0 │ 
└───────┴──────┴──────────────┴───────────────────────┴───────┴─────────────────┴────────┴──────────────┘ shell> maxctrl show session 26 ┌───────────────────────┬───────────────────────────────────────┐ │ Id │ 26 │ ├───────────────────────┼───────────────────────────────────────┤ │ Service │ Sharded-Service │ ├───────────────────────┼───────────────────────────────────────┤ │ State │ Session started │ ├───────────────────────┼───────────────────────────────────────┤ │ User │ app │ ├───────────────────────┼───────────────────────────────────────┤ │ Host │ 10.139.158.1 │ ├───────────────────────┼───────────────────────────────────────┤ │ Port │ 42854 │ ├───────────────────────┼───────────────────────────────────────┤ │ Database │ │ ├───────────────────────┼───────────────────────────────────────┤ │ Connected │ 2/4/2024, 9:31:12 AM │ ├───────────────────────┼───────────────────────────────────────┤ │ Idle │ 610.4 │ ├───────────────────────┼───────────────────────────────────────┤ │ Parameters │ { │ │ │ "log_error": false, │ │ │ "log_info": false, │ │ │ "log_notice": false, │ │ │ "log_warning": false │ │ │ } │ ├───────────────────────┼───────────────────────────────────────┤ │ Client TLS Cipher │ │ ├───────────────────────┼───────────────────────────────────────┤ │ Connection attributes │ { │ │ │ "_client_name": "libmariadb", │ │ │ "_client_version": "3.3.8", │ │ │ "_os": "Linux", │ │ │ "_pid": "251037", │ │ │ "_platform": "x86_64", │ │ │ "_server_host": "10.139.158.211", │ │ │ "program_name": "mysql" │ │ │ } │ ├───────────────────────┼───────────────────────────────────────┤ │ Connections │ shard1 │ │ │ shard2 │ │ │ shard3 │ ├───────────────────────┼───────────────────────────────────────┤ │ Connection IDs │ 666 │ │ │ 139 │ │ │ 138 │ ├───────────────────────┼───────────────────────────────────────┤ │ Queries │ │ ├───────────────────────┼───────────────────────────────────────┤ │ Log │ │ ├───────────────────────┼───────────────────────────────────────┤ │ Memory │ { │ │ │ "connection_buffers": { │ │ │ "backends": { │ │ │ "shard1": { │ │ │ "misc": 678, │ │ │ "readq": 65536, │ │ │ "total": 66214, │ │ │ "writeq": 0 │ │ │ }, │ │ │ "shard2": { │ │ │ "misc": 662, │ │ │ "readq": 0, │ │ │ "total": 662, │ │ │ "writeq": 0 │ │ │ }, │ │ │ "shard3": { │ │ │ "misc": 678, │ │ │ "readq": 65536, │ │ │ "total": 66214, │ │ │ "writeq": 0 │ │ │ } │ │ │ }, │ │ │ "client": { │ │ │ "misc": 654, │ │ │ "readq": 65536, │ │ │ "total": 66190, │ │ │ "writeq": 0 │ │ │ }, │ │ │ "total": 199280 │ │ │ }, │ │ │ "exec_metadata": 0, │ │ │ "last_queries": 0, │ │ │ "sescmd_history": 48, │ │ │ "total": 199328, │ │ │ "variables": 0 │ │ │ } │ ├───────────────────────┼───────────────────────────────────────┤ │ I/O Activity │ 0 │ └───────────────────────┴───────────────────────────────────────┘ shell> maxctrl show listener Sharded-Service-Listener ┌────────────┬───────────────────────────────────────────┐ │ Name │ Sharded-Service-Listener │ ├────────────┼───────────────────────────────────────────┤ │ Source │ /etc/maxscale.cnf │ ├────────────┼───────────────────────────────────────────┤ │ Service │ Sharded-Service │ ├────────────┼───────────────────────────────────────────┤ │ Parameters │ { │ │ │ "MariaDBProtocol": { │ │ │ "allow_replication": true │ │ │ }, │ │ │ "address": "::", │ │ │ "authenticator": null, │ │ │ "authenticator_options": null, │ │ │ "connection_init_sql_file": null, │ │ │ "connection_metadata": [ │ │ │ "character_set_client=auto", │ │ │ "character_set_connection=auto", │ │ │ "character_set_results=auto", │ │ │ 
"max_allowed_packet=auto", │ │ │ "system_time_zone=auto", │ │ │ "time_zone=auto", │ │ │ "tx_isolation=auto" │ │ │ ], │ │ │ "port": 3306, │ │ │ "protocol": "MariaDBProtocol", │ │ │ "proxy_protocol_networks": null, │ │ │ "service": "Sharded-Service", │ │ │ "socket": null, │ │ │ "sql_mode": "default", │ │ │ "ssl": false, │ │ │ "ssl_ca": null, │ │ │ "ssl_cert": null, │ │ │ "ssl_cert_verify_depth": 9, │ │ │ "ssl_cipher": null, │ │ │ "ssl_crl": null, │ │ │ "ssl_key": null, │ │ │ "ssl_verify_peer_certificate": false, │ │ │ "ssl_verify_peer_host": false, │ │ │ "ssl_version": "MAX", │ │ │ "type": "listener", │ │ │ "user_mapping_file": null │ │ │ } │ └────────────┴───────────────────────────────────────────┘ shell> maxctrl show module schemarouter ┌─────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │ Module │ schemarouter │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Type │ Router │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Version │ V1.0.0 │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Maturity │ Beta │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Description │ A database sharding router for simple sharding │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Parameters │ [ │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Enable debug mode", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "debug", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": [], │ │ │ "description": "List of tables to ignore when checking for duplicates", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "ignore_tables", │ │ │ "type": "stringlist" │ │ │ }, │ │ │ { │ │ │ "description": "Regex of tables to ignore when checking for duplicates", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "ignore_tables_regex", │ │ │ "type": "regex" │ │ │ }, │ │ │ { │ │ │ "default_value": "150000ms", │ │ │ "description": "Maximum allowed staleness of a database map entry before clients block and wait for an update", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "max_staleness", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Refresh database mapping information", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "refresh_databases", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": "300000ms", │ │ │ "description": "How often to refresh the database mapping information", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "refresh_interval", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Retrieve users from all backend servers instead of only one", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "auth_all_servers", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": "300000ms", │ │ │ "description": "How ofted idle connections are pinged", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": 
"connection_keepalive", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "deprecated": true, │ │ │ "description": "Alias for 'wait_timeout'", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "connection_timeout", │ │ │ "type": "duration" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Disable session command history", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "disable_sescmd_history", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Allow the root user to connect to this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "enable_root_user", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Ping connections unconditionally", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "force_connection_keepalive", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": "-1ms", │ │ │ "description": "Put connections into pool after session has been idle for this long", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "idle_session_pool_time", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "default_value": true, │ │ │ "description": "Match localhost to wildcard host", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "localhost_match_wildcard_host", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": true, │ │ │ "description": "Log a warning when client authentication fails", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_auth_warnings", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Log debug messages for this service (debug builds only)", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_debug", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Log info messages for this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_info", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Log notice messages for this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_notice", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Log warning messages for this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_warning", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": 0, │ │ │ "description": "Maximum number of connections", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "max_connections", │ │ │ "type": "count" │ │ │ }, │ │ │ { │ │ │ "default_value": 50, │ │ │ "description": "Session command history size", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "max_sescmd_history", │ │ │ "type": "count" │ │ │ }, │ │ │ { │ │ │ "default_value": "60000ms", │ │ │ "description": "How long a session can wait for a connection to become available", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "multiplex_timeout", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "default_value": "0ms", │ │ │ "description": "Network write timeout", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "net_write_timeout", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "description": "Password for the user used to retrieve database users", │ │ │ "mandatory": true, │ │ │ 
"modifiable": true, │ │ │ "name": "password", │ │ │ "type": "password" │ │ │ }, │ │ │ { │ │ │ "default_value": true, │ │ │ "description": "Prune old session command history if the limit is exceeded", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "prune_sescmd_history", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": "primary", │ │ │ "description": "Service rank", │ │ │ "enum_values": [ │ │ │ "primary", │ │ │ "secondary" │ │ │ ], │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "rank", │ │ │ "type": "enum" │ │ │ }, │ │ │ { │ │ │ "default_value": -1, │ │ │ "description": "Number of statements kept in memory", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "retain_last_statements", │ │ │ "type": "int" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Enable session tracing for this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "session_trace", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "deprecated": true, │ │ │ "description": "Track session state using server responses", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "session_track_trx_state", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": true, │ │ │ "deprecated": true, │ │ │ "description": "Strip escape characters from database names", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "strip_db_esc", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "description": "Username used to retrieve database users", │ │ │ "mandatory": true, │ │ │ "modifiable": true, │ │ │ "name": "user", │ │ │ "type": "string" │ │ │ }, │ │ │ { │ │ │ "description": "Load additional users from a file", │ │ │ "mandatory": false, │ │ │ "modifiable": false, │ │ │ "name": "user_accounts_file", │ │ │ "type": "path" │ │ │ }, │ │ │ { │ │ │ "default_value": "add_when_load_ok", │ │ │ "description": "When and how the user accounts file is used", │ │ │ "enum_values": [ │ │ │ "add_when_load_ok", │ │ │ "file_only_always" │ │ │ ], │ │ │ "mandatory": false, │ │ │ "modifiable": false, │ │ │ "name": "user_accounts_file_usage", │ │ │ "type": "enum" │ │ │ }, │ │ │ { │ │ │ "description": "Custom version string to use", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "version_string", │ │ │ "type": "string" │ │ │ }, │ │ │ { │ │ │ "default_value": "0ms", │ │ │ "description": "Connection idle timeout", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "wait_timeout", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ } │ │ │ ] │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Commands │ [ │ │ │ { │ │ │ "attributes": { │ │ │ "arg_max": 1, │ │ │ "arg_min": 1, │ │ │ "description": "Clear schemarouter shard map cache", │ │ │ "method": "POST", │ │ │ "parameters": [ │ │ │ { │ │ │ "description": "The schemarouter service", │ │ │ "required": true, │ │ │ "type": "SERVICE" │ │ │ } │ │ │ ] │ │ │ }, │ │ │ "id": "clear", │ │ │ "links": { │ │ │ "self": "http://127.0.0.1:8989/v1/modules/schemarouter/clear/" │ │ │ }, │ │ │ "type": "module_command" │ │ │ }, │ │ │ { │ │ │ "attributes": { │ │ │ "arg_max": 1, │ │ │ "arg_min": 1, │ │ │ "description": "Invalidate schemarouter shard map cache", │ │ │ "method": "POST", │ │ │ "parameters": [ │ │ │ { │ │ │ "description": "The schemarouter service", │ │ │ "required": true, │ │ │ "type": "SERVICE" │ │ │ } │ │ │ ] │ │ │ }, │ │ │ "id": "invalidate", │ │ │ 
"links": { │ │ │ "self": "http://127.0.0.1:8989/v1/modules/schemarouter/invalidate/" │ │ │ }, │ │ │ "type": "module_command" │ │ │ } │ │ │ ] │ └─────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ shell> maxctrl show commands schemarouter ┌────────────┬────────────┬──────────────────────────┐ │ Command │ Parameters │ Descriptions │ ├────────────┼────────────┼──────────────────────────┤ │ clear │ SERVICE │ The schemarouter service │ ├────────────┼────────────┼──────────────────────────┤ │ invalidate │ SERVICE │ The schemarouter service │ └────────────┴────────────┴──────────────────────────┘ shell> maxctrl show dbusers Sharded-Service ┌───────────────────────┬────────────────┬───────────────────────┬───────┬───────┬────────┬───────┬──────┐ │ User │ Host │ Plugin │ TLS │ Super │ Global │ Proxy │ Role │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ PUBLIC │ │ │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ app │ 10.139.158.% │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ app_role │ │ │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ mariadb.sys │ localhost │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_admin │ 10.139.158.210 │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_admin │ 10.139.158.211 │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_admin_role │ │ │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_monitor │ 10.139.158.210 │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_monitor │ 10.139.158.211 │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_monitor_role │ │ │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ mysql │ localhost │ mysql_native_password │ false │ true │ true │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ root │ localhost │ mysql_native_password │ false │ true │ true │ false │ │ └───────────────────────┴────────────────┴───────────────────────┴───────┴───────┴────────┴───────┴──────┘ shell> maxctrl show commands mariadbmon ┌───────────────────────────┬─────────────────────────────┬───────────────────────────────────────────────────────────────────────────────┐ │ Command │ Parameters │ Descriptions │ 
├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ switchover │ MONITOR, [SERVER], [SERVER] │ Monitor name, New primary (optional), Current primary (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ switchover-force │ MONITOR, [SERVER], [SERVER] │ Monitor name, New primary (optional), Current primary (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-switchover │ MONITOR, [SERVER], [SERVER] │ Monitor name, New primary (optional), Current primary (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ failover │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-failover │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ rejoin │ MONITOR, SERVER │ Monitor name, Joining server │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-rejoin │ MONITOR, SERVER │ Monitor name, Joining server │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ reset-replication │ MONITOR, [SERVER] │ Monitor name, Primary server (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-reset-replication │ MONITOR, [SERVER] │ Monitor name, Primary server (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ release-locks │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-release-locks │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ fetch-cmd-result │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ cancel-cmd │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-add-node │ MONITOR, STRING, STRING │ Monitor name, Hostname/IP of node to add to ColumnStore cluster, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-remove-node │ MONITOR, STRING, STRING │ Monitor name, Hostname/IP of node to remove from ColumnStore cluster, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ cs-get-status │ MONITOR │ Monitor name │ 
├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-get-status │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-start-cluster │ MONITOR, STRING │ Monitor name, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-stop-cluster │ MONITOR, STRING │ Monitor name, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-set-readonly │ MONITOR, STRING │ Monitor name, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-set-readwrite │ MONITOR, STRING │ Monitor name, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-rebuild-server │ MONITOR, SERVER, [SERVER] │ Monitor name, Target server, Source server (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-create-backup │ MONITOR, SERVER, STRING │ Monitor name, Source server, Backup name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-restore-from-backup │ MONITOR, SERVER, STRING │ Monitor name, Target server, Backup name │ └───────────────────────────┴─────────────────────────────┴───────────────────────────────────────────────────────────────────────────────┘
Literature / Sources

Sharding with MariaDB MaxScale

Shinguz - Tue, 2024-03-19 17:02
Overview

This feature should more or less work with MariaDB MaxScale 6.x.y, 22.08.x, 23.02.x, 23.08.x and 24.02.x. We have tested it with the latest MaxScale version 23.08.5, as we encountered problems with an older version (MXS-5026).

shell> maxscale --version MaxScale 23.08.5

We used MariaDB 10.11 as the database backend (shards).

Less than approximately 2% of all MariaDB installations known to us are what we technically understand as multi-tenant systems (each customer in its own database, also called a schema).

This MariaDB MaxScale feature is therefore used relatively rarely and there is an increased risk of encountering bugs that no-one has come across before!

This feature is called SchemaRouter in MariaDB MaxScale and is still declared as beta quality (MXS-5025):

maxctrl> show module schemarouter ┌─────────────┬────────────────────────────────────────────────┐ │ Module │ schemarouter │ ├─────────────┼────────────────────────────────────────────────┤ │ Type │ Router │ ├─────────────┼────────────────────────────────────────────────┤ │ Version │ V1.0.0 │ ├─────────────┼────────────────────────────────────────────────┤ │ Maturity │ Beta │ ├─────────────┼────────────────────────────────────────────────┤ │ Description │ A database sharding router for simple sharding │ ├─────────────┼────────────────────────────────────────────────┤ │ ...

The target topology should look like this: Each customer (client, tenant) is located in its own database (= schema). The databases are distributed across several MariaDB instances (shards). So that the application can access the databases transparently, a pair of MaxScale load balancers is placed in front of them; MaxScale knows where each customer is located and forwards the traffic to the corresponding shard. To make the MaxScale load balancers highly available, a virtual IP (VIP) is placed upstream, e.g. using Keepalived. If this is still too simple for you, you can design each individual shard as a master/slave or Galera cluster construct...
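To give an idea of the VIP part, here is a minimal Keepalived sketch (not part of this PoC; the interface name, the virtual_router_id and the VIP 10.139.158.200 are assumptions, and the second MaxScale node would run the same block with state BACKUP and a lower priority):

# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance VI_MAXSCALE {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.139.158.200/24
    }
}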


Preparation of the shards (MariaDB database instances)

The first problem we had with this PoC was with the test database. Deleting the test database on all shards made the problem disappear. Alternatively, you can run mariadb-secure-installation, which you should do on production systems anyway, or you can use the MaxScale configuration parameters ignore_tables or ignore_tables_regex to allow the same tables in different shards (MXS-5027).
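If you go the ignore route instead, the parameter can also be set at runtime (a sketch only, not verified in this PoC; the regex '^test' is an assumption and has to match the conflicting test tables, see the schemarouter module parameters further down):

shell> MAXCTRL_WARNINGS=0 maxctrl alter service Sharded-Service ignore_tables_regex='^test'
OK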

See also: MaxScale Router Parameters.

Create test data

So that we have something to play with, we have created test data:

-- On shard 1: 2 customers SQL> CREATE DATABASE customer_0010; SQL> CREATE TABLE customer_0010.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0010.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0010.address VALUES (1, 'Customer 10 GmbH'); SQL> INSERT INTO customer_0010.sales VALUES (1, 'Apples', 5, 1.2, 6), (2, 'Pears', 2, 0.9, 1.8), (3, 'Bread', 1, 2.5, 2.5); SQL> CREATE DATABASE customer_0011; SQL> CREATE TABLE customer_0011.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0011.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0011.address VALUES (1, 'Customer 11 SE'); SQL> INSERT INTO customer_0011.sales VALUES (1, 'Oranges', 2, 1.7, 3.4), (2, 'Salad', 5, 1.2, 6); -- On shard 2: 3 customers SQL> CREATE DATABASE customer_0020; SQL> CREATE TABLE customer_0020.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0020.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0020.address VALUES (1, 'Customer 20 AG'); SQL> INSERT INTO customer_0020.sales VALUES (1, 'Oranges', 2, 1.7, 3.4), (2, 'Salad', 5, 1.2, 6); SQL> CREATE DATABASE customer_0021; SQL> CREATE TABLE customer_0021.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0021.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0021.address VALUES (1, 'Customer 21 GmbH'); SQL> INSERT INTO customer_0021.sales VALUES (1, 'Oranges', 2, 1.7, 3.4), (2, 'Salad', 5, 1.2, 6); SQL> CREATE DATABASE customer_0022; SQL> CREATE TABLE customer_0022.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0022.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0022.address VALUES (1, 'Customer 22 Gebr.'); SQL> INSERT INTO customer_0022.sales VALUES (1, 'Oranges', 2, 1.7, 3.4), (2, 'Salad', 5, 1.2, 6); -- On shard 3: 1 customer SQL> CREATE DATABASE customer_0030; SQL> CREATE TABLE customer_0030.address (id INT UNSIGNED, name VARCHAR(255)); SQL> CREATE TABLE customer_0030.sales (id INT UNSIGNED, product VARCHAR(255), sales TINYINT, amount DECIMAL(6, 2), total_amount DECIMAL(6, 2)); SQL> INSERT INTO customer_0030.address VALUES (1, 'Customer 30 GmbH'); SQL> INSERT INTO customer_0030.sales VALUES (1, 'Pickles', 2, 2.2, 4.4), (2, 'Salad', 1, 3.1, 3.1), (3, 'Pudding', 5, 2.2, 11.0), (4, 'Asparagus', 12, .3, 3.6);
Create roles and users

Since the individual database instances in a sharded system do not know anything about each other and do not communicate with each other (in contrast to a Galera cluster, for example), we have to create the roles and users or accounts individually on EACH shard.

MariaDB MaxScale needs a user for the SchemaRouter service and the monitor (on each shard).

As the name suggests, the monitor user is responsible for monitoring and the SchemaRouter service user is responsible for collecting the user account information from the sharding backends and forwarding the queries to the correct shard.

Since a redundant system typically works with at least two MaxScale routers and we wanted to prevent the privileges of the accounts from diverging, we work with roles for both the MaxScale users and the application users.

MaxScale Monitor User SQL> CREATE ROLE maxscale_monitor_role; SQL> GRANT SELECT ON mysql.user TO 'maxscale_monitor_role'; SQL> GRANT REPLICATION CLIENT ON *.* TO 'maxscale_monitor_role'; SQL> GRANT SLAVE MONITOR ON *.* TO 'maxscale_monitor_role'; SQL> GRANT FILE ON *.* TO 'maxscale_monitor_role'; SQL> GRANT CONNECTION ADMIN ON *.* TO 'maxscale_monitor_role'; SQL> SHOW GRANTS FOR maxscale_monitor_role; +-----------------------------------------------------------------------------------------------+ | Grants for maxscale_monitor_role | +-----------------------------------------------------------------------------------------------+ | GRANT FILE, BINLOG MONITOR, CONNECTION ADMIN, SLAVE MONITOR ON *.* TO `maxscale_monitor_role` | | GRANT SELECT ON `mysql`.`user` TO `maxscale_monitor_role` | +-----------------------------------------------------------------------------------------------+ SQL> CREATE USER maxscale_monitor@'10.139.158.210' IDENTIFIED BY 'secret'; SQL> CREATE USER maxscale_monitor@'10.139.158.211' IDENTIFIED BY 'secret'; SQL> GRANT maxscale_monitor_role TO maxscale_monitor@'10.139.158.210'; SQL> GRANT maxscale_monitor_role TO maxscale_monitor@'10.139.158.211'; SQL> SET DEFAULT ROLE maxscale_monitor_role FOR maxscale_monitor@'10.139.158.210'; SQL> SET DEFAULT ROLE maxscale_monitor_role FOR maxscale_monitor@'10.139.158.211'; SQL> SELECT user, host, is_role, default_role FROM mysql.user WHERE user LIKE 'maxscale_monitor%'; +-----------------------+----------------+---------+-----------------------+ | User | Host | is_role | default_role | +-----------------------+----------------+---------+-----------------------+ | maxscale_monitor_role | | Y | | | maxscale_monitor | 10.139.158.210 | N | maxscale_monitor_role | | maxscale_monitor | 10.139.158.211 | N | maxscale_monitor_role | +-----------------------+----------------+---------+-----------------------+ SQL> SHOW GRANTS FOR maxscale_monitor@'10.139.158.211'; +------------------------------------------------------------------------------------------------------------------------------+ | Grants for maxscale_monitor@10.139.158.211 | +------------------------------------------------------------------------------------------------------------------------------+ | GRANT `maxscale_monitor_role` TO `maxscale_monitor`@`10.139.158.211` | | GRANT USAGE ON *.* TO `maxscale_monitor`@`10.139.158.211` IDENTIFIED BY PASSWORD '*14E65567ABDB5135D0CFD9A70B3032C179A49EE7' | | SET DEFAULT ROLE `maxscale_monitor_role` FOR `maxscale_monitor`@`10.139.158.211` | +------------------------------------------------------------------------------------------------------------------------------+
MaxScale Admin User SQL> CREATE ROLE maxscale_admin_role; SQL> GRANT SHOW DATABASES ON *.* TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.user TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.db TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.tables_priv TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.columns_priv TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.proxies_priv TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.roles_mapping TO 'maxscale_admin_role'; SQL> GRANT SELECT ON mysql.procs_priv TO 'maxscale_admin_role'; SQL> SHOW GRANTS FOR maxscale_admin_role; +------------------------------------------------------------------+ | Grants for maxscale_admin_role | +------------------------------------------------------------------+ | GRANT USAGE ON *.* TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`user` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`roles_mapping` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`tables_priv` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`procs_priv` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`db` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`columns_priv` TO `maxscale_admin_role` | | GRANT SELECT ON `mysql`.`proxies_priv` TO `maxscale_admin_role` | +------------------------------------------------------------------+ SQL> CREATE USER maxscale_admin@'10.139.158.210' IDENTIFIED BY 'secret'; SQL> CREATE USER maxscale_admin@'10.139.158.211' IDENTIFIED BY 'secret'; SQL> GRANT maxscale_admin_role TO maxscale_admin@'10.139.158.210'; SQL> GRANT maxscale_admin_role TO maxscale_admin@'10.139.158.211'; SQL> SET DEFAULT ROLE maxscale_admin_role FOR maxscale_admin@'10.139.158.210'; SQL> SET DEFAULT ROLE maxscale_admin_role FOR maxscale_admin@'10.139.158.211'; SQL> SELECT user, host, is_role, default_role FROM mysql.user WHERE user LIKE 'maxscale_admin%'; +---------------------+----------------+---------+---------------------+ | User | Host | is_role | default_role | +---------------------+----------------+---------+---------------------+ | maxscale_admin_role | | Y | | | maxscale_admin | 10.139.158.210 | N | maxscale_admin_role | | maxscale_admin | 10.139.158.211 | N | maxscale_admin_role | +---------------------+----------------+---------+---------------------+ SQL> SHOW GRANTS FOR maxscale_admin@'10.139.158.211'; +----------------------------------------------------------------------------------------------------------------------------+ | Grants for maxscale_admin@10.139.158.211 | +----------------------------------------------------------------------------------------------------------------------------+ | GRANT `maxscale_admin_role` TO `maxscale_admin`@`10.139.158.211` | | GRANT USAGE ON *.* TO `maxscale_admin`@`10.139.158.211` IDENTIFIED BY PASSWORD '*14E65567ABDB5135D0CFD9A70B3032C179A49EE7' | | SET DEFAULT ROLE `maxscale_admin_role` FOR `maxscale_admin`@`10.139.158.211` | +----------------------------------------------------------------------------------------------------------------------------+

See also: SchemaRouter Configuration

Create application role and accounts

The application also requires a user, which we create on every shard as follows:

SQL> CREATE ROLE app_role; SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON `customer_%`.* TO 'app_role'; SQL> GRANT SHOW DATABASES ON *.* TO 'app_role'; SQL> GRANT CREATE, DROP, ALTER ON *.* TO 'app_role'; -- For creating new tenant databases SQL> SHOW GRANTS FOR app_role; +----------------------------------------------------------------------+ | Grants for app_role | +----------------------------------------------------------------------+ | GRANT SHOW DATABASES ON *.* TO `app_role` | | GRANT SELECT, INSERT, UPDATE, DELETE ON `customer_%`.* TO `app_role` | +----------------------------------------------------------------------+ SQL> CREATE USER app@'10.139.158.%' IDENTIFIED BY 'secret'; SQL> GRANT app_role TO app@'10.139.158.%'; SQL> SET DEFAULT ROLE app_role FOR app@'10.139.158.%'; SQL> SELECT user, host, is_role, default_role FROM mysql.user WHERE user LIKE 'app%'; +----------+--------------+---------+--------------+ | User | Host | is_role | default_role | +----------+--------------+---------+--------------+ | app_role | | Y | | | app | 10.139.158.% | N | app_role | +----------+--------------+---------+--------------+ SQL> SHOW GRANTS FOR app@'10.139.158.%'; +---------------------------------------------------------------------------------------------------------------+ | Grants for app@10.139.158.% | +---------------------------------------------------------------------------------------------------------------+ | GRANT `app_role` TO `app`@`10.139.158.%` | | GRANT USAGE ON *.* TO `app`@`10.139.158.%` IDENTIFIED BY PASSWORD '*14E65567ABDB5135D0CFD9A70B3032C179A49EE7' | | SET DEFAULT ROLE `app_role` FOR `app`@`10.139.158.%` | +---------------------------------------------------------------------------------------------------------------+
Proxy protocol

Load balancers and proxies have the property that they replace the client IP addresses with their own. On the one hand, this means that in the database you can no longer see where the client originally came from, and on the other hand, you can no longer assign access privileges to specific user/IP combinations, as it is always the IP address of the load balancer that is checked.

These two problems can be solved using the proxy protocol.

To do this, both the database and the load balancer, in this case MaxScale, must have the proxy protocol activated.

On the database side, the proxy protocol is activated as follows:

#
# my.cnf
#

[mariadbd]
proxy_protocol_networks = ::1, 10.139.158.0/24, localhost

and on the MaxScale side with:

#
# /etc/maxscale.cnf
#

[shard]
type = server
proxy_protocol = true

You can check the two settings with:

SQL> SHOW GLOBAL VARIABLES LIKE 'proxy%'; +-------------------------+---------------------------------+ | Variable_name | Value | +-------------------------+---------------------------------+ | proxy_protocol_networks | ::1, 10.139.158.0/24, localhost | +-------------------------+---------------------------------+ shell> maxctrl show server shard1 | grep proxy │ │ "proxy_protocol": true, │

Sources:


MaxScale SchemaRouter configuration

Next, we prepare the MaxScale configuration for sharding. The file recommended by MariaDB is /etc/maxscale.cnf. Whether it makes more sense to create a separate configuration file under /etc/maxscale.cnf.d/ or even to configure MaxScale entirely dynamically (/var/lib/maxscale/maxscale.cnf.d/*.cnf) remains to be seen in the long term. See also the warnings below. The configuration file for this sharding PoC looks like this:

#
# /etc/maxscale.cnf
#

[maxscale]
threads = auto
admin_gui = false

[shard1]
type = server
address = 10.139.158.1
port = 3363
proxy_protocol = true

[shard2]
type = server
address = 10.139.158.1
port = 3364
proxy_protocol = true

[shard3]
type = server
address = 10.139.158.1
port = 3365
proxy_protocol = true

[sharding monitor]
type = monitor
module = galeramon
servers = shard1,shard2,shard3
user = maxscale_monitor
password = secret
monitor_interval = 1s

[Sharded-Service-Listener]
type = listener
service = Sharded-Service
protocol = MariaDBClient
port = 3306

[Sharded-Service]
type = service
router = schemarouter
servers = shard1,shard2,shard3
user = maxscale_admin
password = secret
auth_all_servers = true

Note: Recommendation of the MaxScale developer: "One workaround might be to actually use galeramon to monitor the nodes instead of mariadbmon."

Starting and stopping the MaxScale Load Balancer

MaxScale is started and stopped as usual via SystemD:

shell> systemctl restart maxscale shell> systemctl status maxscale ● maxscale.service - MariaDB MaxScale Database Proxy Loaded: loaded (/lib/systemd/system/maxscale.service; enabled; vendor preset: enabled) Drop-In: /run/systemd/system/service.d └─zzz-lxc-service.conf Active: active (running) since Tue 2024-02-27 09:52:57 UTC; 39s ago Process: 187 ExecStart=/usr/bin/maxscale (code=exited, status=0/SUCCESS) Main PID: 188 (maxscale) Tasks: 10 (limit: 18663) Memory: 4.6M CPU: 150ms CGroup: /system.slice/maxscale.service └─188 /usr/bin/maxscale systemd[1]: Starting MariaDB MaxScale Database Proxy... maxscale[188]: Module 'galeramon' loaded from '/usr/lib/x86_64-linux-gnu/maxscale/libgaleramon.so'. maxscale[188]: Module 'schemarouter' loaded from '/usr/lib/x86_64-linux-gnu/maxscale/libschemarouter.so'. maxscale[188]: Using up to 2.3GiB of memory for query classifier cache systemd[1]: Started MariaDB MaxScale Database Proxy.

If there were errors or warnings, you can see them in the MaxScale error log:

shell> grep -v notice /var/log/maxscale/maxscale.log 2024-02-13 16:47:22 MariaDB MaxScale is shut down. ---------------------------------------------------- MariaDB MaxScale /var/log/maxscale/maxscale.log Tue Feb 13 16:47:22 2024 ---------------------------------------------------------------------------- 2024-02-27 09:52:56 warning: Discarding journal file '/var/lib/maxscale/Sharding-Monitor_journal.json'. File is for module 'mariadbmon'. Current module is 'galeramon'. 2024-02-27 09:52:56 warning: [galeramon] Invalid 'wsrep_local_index' on server 'shard1': 18446744073709551615
Application tests Simple application tests shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --execute='show databases' +--------------------+ | Database | +--------------------+ | customer_0010 | | customer_0011 | | customer_0020 | | customer_0021 | | customer_0022 | | customer_0030 | | information_schema | | mysql | | performance_schema | | sys | +--------------------+
New command show shards shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 customer_0030 --execute='show shards' | grep customer_00.* | sort | column -t customer_0010.address shard1 customer_0010.sales shard1 customer_0010. shard1 customer_0011.address shard1 customer_0011.sales shard1 customer_0011. shard1 customer_0020.address shard2 customer_0020.sales shard2 customer_0020. shard2 customer_0021.address shard2 customer_0021.sales shard2 customer_0021. shard2 customer_0022.address shard2 customer_0022.sales shard2 customer_0022. shard2 customer_0030.address shard3 customer_0030.sales shard3 customer_0030. shard3

New databases are not displayed immediately, but only once the cached data has been updated (after refresh_interval, 300 s / 5 min by default).
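If you do not want to wait for the cache entries to expire, the database map can be updated manually (see the chapter Invalidating the database map cache below):

shell> maxctrl call command schemarouter invalidate Sharded-Service
OK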

See also: Custom SQL commands

More general test

As a reminder:

Shard   Port   Customer          State
#1      3363   customer_001<n>   Running
#2      3364   customer_002<n>   Running
#3      3365   customer_003<n>   Running
#4      3366   customer_004<n>   Running
shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3363 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --database=customer_0010 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3363 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --database=customer_0020 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3364 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 --execute='use customer_0020; SELECT @@port' +--------+ | @@port | +--------+ | 3364 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 customer_0010 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3363 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 customer_0020 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3364 | +--------+ shell> mariadb --user=app --password=secret --host=10.139.158.211 --port=3306 customer_0030 --execute='SELECT @@port' +--------+ | @@port | +--------+ | 3365 | +--------+
Less simple (backup) test shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction customer_0010 > /tmp/customer_0010.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction customer_0020 > /tmp/customer_0020.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction customer_0030 > /tmp/customer_0030.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction --databases customer_0011 > /tmp/customer_0011.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction --databases customer_0021 > /tmp/customer_0021.sql shell> echo $? 0 shell> mariadb-dump --user=app --password=secret --host=10.139.158.211 --port=3306 --single-transaction --databases customer_0030 > /tmp/customer_0030.sql shell> echo $? 0 shell> ll /tmp/customer_00*sql -rw-rw-r-- 1 oli oli 2738 Mar 18 12:07 /tmp/customer_0010.sql -rw-rw-r-- 1 oli oli 2904 Mar 18 12:08 /tmp/customer_0011.sql -rw-rw-r-- 1 oli oli 2712 Mar 18 12:08 /tmp/customer_0020.sql -rw-rw-r-- 1 oli oli 2906 Mar 18 12:08 /tmp/customer_0021.sql -rw-rw-r-- 1 oli oli 2964 Mar 18 12:08 /tmp/customer_0030.sql shell> tail -n 1 /tmp/customer_*.sql ==> /tmp/customer_0010.sql <== -- Dump completed on 2024-02-13 14:39:21 ==> /tmp/customer_0011.sql <== -- Dump completed on 2024-02-13 14:39:35 ==> /tmp/customer_0020.sql <== -- Dump completed on 2024-02-13 14:40:15 ==> /tmp/customer_0021.sql <== -- Dump completed on 2024-02-13 14:40:42 ==> /tmp/customer_0030.sql <== -- Dump completed on 2024-02-13 14:40:52 shell> cat /tmp/customer_00*sql | grep -A1 -i insert INSERT INTO `address` VALUES (1,'Customer 10 GmbH'); -- INSERT INTO `sales` VALUES (1,'Apples',5,1.20,6.00), -- INSERT INTO `address` VALUES (1,'Customer 11 SE'); -- INSERT INTO `sales` VALUES (1,'Oranges',2,1.70,3.40), -- INSERT INTO `address` VALUES (1,'Customer 20 AG'); -- INSERT INTO `sales` VALUES (1,'Oranges',2,1.70,3.40), -- INSERT INTO `address` VALUES (1,'Customer 21 GmbH'); -- INSERT INTO `sales` VALUES (1,'Oranges',2,1.70,3.40), -- INSERT INTO `address` VALUES (1,'Customer 30 GmbH'); -- INSERT INTO `sales` VALUES (1,'Pickles',2,2.20,4.40),

In MaxScale 23.08.4 there was a pretty bad bug: A return value of 0 but no data in the backup!!! See also the tickets: MXS-4966: mariadb-dump gets an error dumping schemas and MXS-4947: Tables in information_schema are treated as a normal tables. Symptoms of the bug look like this:

Error: Couldn't read status information for table address () Error: Couldn't read status information for table sales ()

We therefore strongly recommend upgrading to MaxScale 23.08.5!

More complex application tests

We have created a somewhat more complex test (./sharding_test.php) that processes the following queries:

SET NAMES utf8mb4
SHOW DATABASES
use customer_
START TRANSACTION;
SELECT MIN(id) AS first, MAX(id) AS last FROM `sales`
INSERT INTO sales (id, product, sales, amount, total_amount) VALUES (%d, '%s', %f, %f, %f)
INSERT INTO sales (id, product, sales, amount, total_amount) VALUES (%d, '%s', %f, %f, %f)
UPDATE sales SET product = 'Prepare to delete' WHERE id = %d
DELETE FROM sales WHERE id = %d
COMMIT

This test ran flawlessly. The corresponding control query:

SQL> SELECT * FROM customer_0021.sales WHERE id >= (SELECT MAX(id) - 10 FROM customer_0021.sales);

Various load scenarios can also be tested with db_bench or the Acronis perfkit. For more information, see here.

Cross-shard tests

Sooner or later you will probably come up with the idea of running cross-shard queries. This does NOT work, which should not really be surprising: firstly, because it is not easy to implement, and secondly, because it is documented here:

"Note: As the sharding solution in MaxScale is relatively simple, cross-database queries between two or more shards are not supported."

Source: Simple Sharding with Two Servers

and

"USE db1 is routed to the server with db1. If the database is divided to multiple servers, only one server will get the command."

Source: SchemaRouter.

Here is a test with UNION:

SQL> use customer_0030 Database changed SQL> SELECT * FROM customer_0020.sales UNION SELECT * FROM customer_0030.sales; ERROR 1146 (42S02): Table 'customer_0020.sales' doesn't exist

And here is the counter-test from the other side:

SQL> use customer_0020 Database changed SQL> SELECT * FROM customer_0020.sales UNION SELECT * FROM customer_0030.sales; ERROR 1146 (42S02): Table 'customer_0030.sales' doesn't exist

And here is the test with JOIN:

SQL> use customer_0020 SQL> SELECT * FROM customer_0020.sales a JOIN customer_0030.sales b ON a.id = b.id WHERE a.sales > 1 ; ERROR 1146 (42S02): Table 'customer_0030.sales' doesn't exist SQL> use customer_0030 SQL> SELECT * FROM customer_0020.sales a JOIN customer_0030.sales b ON a.id = b.id WHERE a.sales > 1 ; ERROR 1146 (42S02): Table 'customer_0020.sales' doesn't exist
Operation of a MaxScale sharding system

In this chapter we discuss some points that can be useful for the operation of a MariaDB MaxScale sharding system.

Do-on-all-shards

Since it can always happen that O/S or database operations have to be executed on all shards, it would certainly make sense to create a script that executes the same command on all shards in turn:

shell> ./do-on-all-shards.sh --sql='SHOW DATABASES'
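Such a script does not ship with MaxScale. A minimal sketch, assuming the shard host/ports and the app account from this PoC, could look like this:

#!/bin/bash
#
# do-on-all-shards.sh -- run the same SQL statement on every shard in turn
#
SQL="${1#--sql=}"                  # strip the --sql= prefix from the first argument
HOST="10.139.158.1"
PORTS="3363 3364 3365"

for PORT in ${PORTS} ; do
    echo "=== Shard on port ${PORT} (${HOST}) ==="
    mariadb --user=app --password=secret --host=${HOST} --port=${PORT} --execute="${SQL}"
done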

A script of this type should greatly reduce the error rate during operation. Operations such as re-sharding a tenant, as described below, are also best scripted and executed centrally (MXS-5029).

Invalidating the database map cache

The invalidate command can be used to invalidate the database map cache of the MariaDB MaxScale SchemaRouter. This allows us to quickly update the cache after adding or removing tenants.

shell> maxctrl call command schemarouter invalidate Sharded-Service OK

In contrast to the invalidate command, which updates the entries after the next refresh_interval, the clear command deletes the entries and a remap is executed immediately.
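For completeness, the clear variant, which is also used further down, is called in the same way:

shell> maxctrl call command schemarouter clear Sharded-Service
OK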

If you want to invalidate or clear the database map cache remotely via the REST API, you can do this as follows (here with the clear command):

shell> curl -i -X POST -u api_admin:secret http://10.139.158.211:8989/v1/maxscale/modules/schemarouter/clear?Sharded-Service HTTP/1.1 204 No Content Connection: close Date: Mon, 18 Mar 24 11:49:58 GMT X-Frame-Options: Deny X-XSS-Protection: 1 Referrer-Policy: same-origin Cache-Control: no-cache
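Analogously, the invalidate command should be callable via the REST API as well (a sketch derived from the call above and the schemarouter module command listing; not verified in this PoC):

shell> curl -i -X POST -u api_admin:secret http://10.139.158.211:8989/v1/maxscale/modules/schemarouter/invalidate?Sharded-Service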

Sources:


How to change SchemaRouter variables dynamically?

The refresh_interval specifies the lifetime of the entries in the SchemaRouter database map cache. The default value is 300 s (5 min). Refresh interval is therefore, in my opinion, an unfortunate term, as it does not define the interval between two mappings but the lifetime of the cache entries (lifetime? timeout?). As soon as an entry has been deleted, a new refresh of the "database map" is triggered on each shard. The command currently looks like this:

SELECT LOWER(t.table_schema), LOWER(t.table_name) FROM information_schema.tables t UNION ALL SELECT LOWER(s.schema_name), '' FROM information_schema.schemata s

It looks like a simple connect is enough to trigger the refresh of the database map.

The current value for refresh_interval can be queried as follows:

shell> maxctrl show service Sharded-Service | grep refresh_interval | awk -F'│' '{ print $3 }' "refresh_interval": "300000ms",

The following command helps to change the value dynamically:

shell> MAXCTRL_WARNINGS=0 maxctrl alter service Sharded-Service refresh_interval=10s OK

The value should not be set too small, as all other connections are stopped during the mapping process.

Sources:


Adding and removing a tenant

Adding a new tenant to a shard is not a major problem:

SQL> CREATE DATABASE customer_0029;
SQL> use customer_0029
SQL> CREATE TABLE address LIKE customer_template.address;
SQL> CREATE TABLE sales LIKE customer_template.sales;

shell> maxctrl call command schemarouter invalidate Sharded-Service
OK

Removing a tenant from a shard, on the other hand, is somewhat more complicated and must be done in consultation with the application:

SQL> DROP DATABASE customer_0011;

shell> ./sharding_test.php
.....ERROR: Table 'customer_0011.sales' doesn't exist...ERROR: Unknown database 'customer_0011'.ERROR: Unknown database 'customer_0011'......ERROR: Unknown database 'customer_0011'...

shell> maxctrl call command schemarouter clear Sharded-Service
OK

At least I have not come up with a cleverer variant yet. See also Moving a tenant below.

Moving a tenant

Moving a tenant from one shard to another shard, also known as re-sharding, is then the combination of adding and removing. This again requires a concerted action planned together with the application.

If this is not possible, at least the time during which the application receives errors can be reduced... The following procedure can be used to move a tenant from shard 2 to shard 3:

SQL> use customer_0020; LOCK TABLES address READ, sales READ;   -- On shard 2, the application will be blocked at best!

shell> mariadb-dump --user=app --password=secret --host=10.139.158.1 --port=3364 --single-transaction --skip-add-locks --databases customer_0020 | mariadb --user=app --password=secret --host=10.139.158.1 --port=3365   # Copy tenant 20 from shard 2 to shard 3

SQL> DROP DATABASE customer_0020;   -- Deleting tenant 20 does not work!
ERROR 1192 (HY000): Can't execute the given command because you have active locked tables or an active transaction

SQL> UNLOCK TABLES; DROP DATABASE customer_0020;   -- This is how to delete tenant 20.

shell> maxctrl call command schemarouter clear Sharded-Service   # Update the MaxScale database map. Do it quickly!!!

Until the database map is refreshed, the following errors may occur:

error : (47621) [schemarouter] (Sharded-Service); 'customer_0020.' found on servers 'shard2','shard3' for user 'app'@'10.139.158.1'. error : (47621) [schemarouter] (Sharded-Service); 'customer_0020.address' found on servers 'shard2','shard3' for user 'app'@'10.139.158.1'. error : (47621) [schemarouter] (Sharded-Service); 'customer_0020.sales' found on servers 'shard2','shard3' for user 'app'@'10.139.158.1'. error : (47621) [schemarouter] (Sharded-Service); Duplicate tables found, closing session.

And on the application side too:

ERROR: Error: duplicate tables found on two different shards
Adding or removing a shard

Moving a tenant from one shard to another shard is a small re-sharding operation. It becomes somewhat more complex if you want to add new shards or remove old shards. Subsequently (after the addition or before the removal), a large re-sharding would then take place. The first step is to add a shard to the cluster:

shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬───────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 0 │ Running │ 0-3363-26014 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 0 │ Running │ 0-3364-240612 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 0 │ Running │ 0-3365-289873 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴───────────────┴──────────────────┘

The prepared shard is made known to MaxScale:

shell> maxctrl create server shard4 10.139.158.1 3366 OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 1 │ Running │ 0-3363-23676 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-52321 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-39751 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 0 │ Down │ │ │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

The new shard is then linked to the MaxScale Monitor and the service:

shell> MAXCTRL_WARNINGS=0 maxctrl link monitor Sharding monitor shard4 OK shell> MAXCTRL_WARNINGS=0 maxctrl link service Sharded service shard4 OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 1 │ Running │ 0-3363-24961 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-56215 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-45177 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-32 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

Whether this second step is also absolutely necessary was not investigated.

You can follow the entire process in the MariaDB MaxScale error log:

warning: Discarding journal file '/var/lib/maxscale/Sharding-Monitor_journal.json'. Servers described in the journal are different from the ones configured on the current monitor. warning: Saving runtime modifications to 'Sharding-Monitor' in '/var/lib/maxscale/maxscale.cnf.d/Sharding-Monitor.cnf'. The modified values will override the values found in the static configuration files. notice : shard4 sent version string '10.11.7-MariaDB-log'. Detected type: MariaDB, version: 10.11.7. notice : Server 'shard4' charset: latin1_swedish_ci notice : Server changed state: shard4[10.139.158.1:3366]: server_up. [Down] -> [Running] warning: Saving runtime modifications to 'Sharded-Service' in '/var/lib/maxscale/maxscale.cnf.d/Sharded-Service.cnf'. The modified values will override the values found in the static configuration files. notice : Added 'shard4' to 'Sharded-Service'

What we must not forget here is to also equip the new shard with the proxy protocol:

shell> maxctrl show server shard4 | grep proxy │ │ "proxy_protocol": false, │ MAXCTRL_WARNINGS=0 maxctrl alter server shard4 proxy_protocol=true OK

And now new tenants can be added to the new shard or old tenants can be moved to the new shard... In our setup, we want to move all tenants from shard 1 to shard 4 and also create a new tenant customer_0040 on shard 4. The individual steps required for this are listed above.
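As a purely illustrative sketch (assumption: one database per tenant, as in the error messages above, created directly on the new shard on port 3366; the application tables themselves are omitted), creating the new tenant could look like this:

shell> mariadb --user=app --password --host=10.139.158.1 --port=3366 \
         --execute='CREATE DATABASE customer_0040'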

Once shard 1 has been emptied, it can be dismantled:

shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 1 │ Running │ 0-3363-25916 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-62887 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-54035 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-2247 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

A shard is deleted with the destroy server command. Before this works, however, the shard must first be removed from the monitor and the service:

shell> MAXCTRL_WARNINGS=0 maxctrl unlink service Sharded service shard1 OK shell> MAXCTRL_WARNINGS=0 maxctrl unlink monitor Sharding monitor shard1 OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard1 │ 10.139.158.1 │ 3363 │ 0 │ Running │ │ │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-64394 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-56072 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-3267 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

Once the shard has been removed from the monitor and the service, it can then be deleted:

shell> maxctrl destroy server shard1 Warning: Object 'shard1' is defined in a static configuration file and cannot be permanently deleted. If MaxScale is restarted, the object will appear again. To hide these warnings, run: export MAXCTRL_WARNINGS=0 OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-65018 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-56886 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-3648 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘

And you can follow the changes in the MaxScale Error Log:

notice : Removed 'shard1' from 'Sharded-Service' warning: Discarding journal file '/var/lib/maxscale/Sharding-Monitor_journal.json'. Servers described in the journal are different from the ones configured on the current monitor. notice : Destroyed server 'shard1' at 10.139.158.1:3363

Important: I was informed that with destroy server --force the unlink service and unlink monitor commands are automatically executed by MaxScale.
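A hedged one-liner for this variant (not verified in this setup):

shell> MAXCTRL_WARNINGS=0 maxctrl destroy server shard1 --force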


Customising the configuration files

During the shard operations described above we received some warnings:

Warning: Object 'shard1' is defined in a static configuration file and cannot be permanently deleted. If MaxScale is restarted, the object will appear again.

and

Warning: Saving runtime modifications to 'Sharding-Monitor' in '/var/lib/maxscale/maxscale.cnf.d/Sharding-Monitor.cnf'. The modified values will override the values found in the static configuration files.

The corresponding configuration files are automatically created by MaxScale when dynamic system changes are made:

shell> ll /var/lib/maxscale/maxscale.cnf.d/ /etc/maxscale.cnf -rw-r--r-- 1 root root 612 Feb 13 14:23 /etc/maxscale.cnf /var/lib/maxscale/maxscale.cnf.d/: total 12 -rw------- 1 maxscale maxscale 187 Feb 13 16:08 Sharding-Monitor.cnf -rw------- 1 maxscale maxscale 150 Feb 13 16:07 Sharded-Service.cnf -rw------- 1 maxscale maxscale 52 Feb 13 15:46 shard4.cnf cat /var/lib/maxscale/maxscale.cnf.d/* [Sharded-Service] debug=true refresh_interval=10000ms auth_all_servers=true log_debug=true password=secret router=schemarouter type=service user=maxscale_admin targets=shard2,shard3,shard4 [sharding monitor] module=galeramon monitor_interval=1000ms password=secret servers=shard2,shard3,shard4 type=monitor user=maxscale_monitor [shard4] address=10.139.158.1 port=3366 type=server

The configuration files still need to be cleaned up accordingly. In a highly dynamic system, you should generally consider whether it would not be better to configure everything dynamically via commands instead of via static configuration files...
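If you prefer static configuration, a minimal sketch of the server entry that would have to be transferred to /etc/maxscale.cnf for the new shard (values taken from the runtime file above, plus the proxy protocol setting; the obsolete shard1 section would be removed at the same time):

[shard4]
type=server
address=10.139.158.1
port=3366
proxy_protocol=true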

Maintenance work on the shard

If a shard is to be taken offline for maintenance work, here in the example shard2, this can be done as follows:

shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬──────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 1 │ Running │ 0-3364-69817 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 1 │ Running │ 0-3365-63166 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼──────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 1 │ Running │ 0-3366-6902 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴──────────────┴──────────────────┘ shell> maxctrl set server shard2 drain OK shell> maxctrl set server shard2 maintenance OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬──────────────────────┬───────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼──────────────────────┼───────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 0 │ Maintenance, Running │ 0-3364-240612 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼──────────────────────┼───────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 0 │ Running │ 0-3365-289873 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼──────────────────────┼───────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 0 │ Running │ 0-3366-119848 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴──────────────────────┴───────────────┴──────────────────┘

At this point, maintenance work can be carried out on the machine or the database...

Afterwards, BOTH statuses must be cleared again if both have been set (MXS-5028):

shell> maxctrl clear server shard2 maintenance OK shell> maxctrl clear server shard2 drain OK shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬───────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 0 │ Running │ 0-3364-240612 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 0 │ Running │ 0-3365-289873 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 0 │ Running │ 0-3366-119848 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴───────────────┴──────────────────┘

The difference between drain and maintenance is that drain no longer allows new connections to the shard but waits until the existing connections are closed, whereas maintenance immediately terminates the connections by force.

Observation of a MariaDB MaxScale sharding system

The MaxScale CLI client maxctrl can be used to query the status of the MariaDB MaxScale load balancer. There are numerous commands for this, mainly list and show:

shell> maxctrl show module schemarouter | head -n 12 ┌─────────────┬────────────────────────────────────────────────┐ │ Module │ schemarouter │ ├─────────────┼────────────────────────────────────────────────┤ │ Type │ Router │ ├─────────────┼────────────────────────────────────────────────┤ │ Version │ V1.0.0 │ ├─────────────┼────────────────────────────────────────────────┤ │ Maturity │ Beta │ ├─────────────┼────────────────────────────────────────────────┤ │ Description │ A database sharding router for simple sharding │ ├─────────────┼────────────────────────────────────────────────┤ │ Parameters │ ... │ shell> maxctrl list servers ┌────────┬──────────────┬──────┬─────────────┬─────────┬───────────────┬──────────────────┐ │ Server │ Address │ Port │ Connections │ State │ GTID │ Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard2 │ 10.139.158.1 │ 3364 │ 4 │ Running │ 0-3364-290859 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard3 │ 10.139.158.1 │ 3365 │ 4 │ Running │ 0-3365-322671 │ Sharding-Monitor │ ├────────┼──────────────┼──────┼─────────────┼─────────┼───────────────┼──────────────────┤ │ shard4 │ 10.139.158.1 │ 3366 │ 4 │ Running │ 0-3366-140018 │ Sharding-Monitor │ └────────┴──────────────┴──────┴─────────────┴─────────┴───────────────┴──────────────────┘

The information in the Connections column is confusing, because in this sharding system we actually only have 1, 1 and 2 application connections open against the respective shards.

However, if you look at the situation on the respective shard with SHOW PROCESSLIST, you can see that MaxScale also establishes a connection on EACH shard for each incoming connection. So the display above is actually technically correct, just not what you would expect:

SQL> SHOW PROCESSLIST; +--------+------------------+----------------------+---------------+---------+------+----------+-----------------------------------------------------------------+----------+ | Id | User | Host | db | Command | Time | State | Info | Progress | +--------+------------------+----------------------+---------------+---------+------+----------+-----------------------------------------------------------------+----------+ | 123 | root | localhost | customer_0021 | Query | 0 | starting | show processlist | 0.000 | | 68107 | maxscale_monitor | 10.139.158.211:35418 | NULL | Sleep | 0 | | NULL | 0.000 | | 113372 | app | 10.139.158.1:47548 | NULL | Sleep | 47 | | NULL | 0.000 | | 113538 | app | 10.139.158.1:49058 | NULL | Sleep | 41 | | NULL | 0.000 | | 113662 | app | 10.139.158.1:47072 | NULL | Sleep | 37 | | NULL | 0.000 | | 114789 | app | 10.139.158.1:39574 | customer_0022 | Query | 0 | Updating | UPDATE sales SET product = 'Prepare to delete' WHERE id = 15622 | 0.000 | +--------+------------------+----------------------+---------------+---------+------+----------+-----------------------------------------------------------------+----------+

This does not scale for large systems with hundreds or thousands of clients! Perhaps the MariaDB thread pool feature could help in this case.
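If the MariaDB thread pool were to be tried out to mitigate the many backend connections, a minimal sketch of the server configuration on the shards (an assumption, not tested in this setup):

# my.cnf on each shard -- sketch only, thread_pool_size should match the hardware
[mariadb]
thread_handling = pool-of-threads
thread_pool_size = 8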

According to the MaxScale developer, this is expected behaviour... (MXS-4977)

shell> maxctrl list services ┌─────────────────┬──────────────┬─────────────┬───────────────────┬────────────────────────┐ │ Service │ Router │ Connections │ Total Connections │ Targets │ ├─────────────────┼──────────────┼─────────────┼───────────────────┼────────────────────────┤ │ Sharded-Service │ schemarouter │ 4 │ 82776 │ shard2, shard3, shard4 │ └─────────────────┴──────────────┴─────────────┴───────────────────┴────────────────────────┘ shell> maxctrl list listeners ┌──────────────────────────┬──────┬──────┬─────────┬─────────────────┐ │ Name │ Port │ Host │ State │ Service │ ├──────────────────────────┼──────┼──────┼─────────┼─────────────────┤ │ Sharded-Service-Listener │ 3306 │ :: │ Running │ Sharded-Service │ └──────────────────────────┴──────┴──────┴─────────┴─────────────────┘ shell> maxctrl list monitors ┌──────────────────┬─────────┬────────────────────────┐ │ Monitor │ State │ Servers │ ├──────────────────┼─────────┼────────────────────────┤ │ Sharding-Monitor │ Running │ shard2, shard3, shard4 │ └──────────────────┴─────────┴────────────────────────┘ shell> maxctrl show server shard2 | head -n 20 ┌─────────────────────┬──────────────────────────────────────────────┐ │ Server │ shard2 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Source │ /etc/maxscale.cnf │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Address │ 10.139.158.1 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Port │ 3364 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ State │ Running │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Version │ 10.11.7-MariaDB-log │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Uptime │ 178960 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Last Event │ server_down │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Triggered At │ Sun, 04 Feb 2024 07:37:17 GMT │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Services │ Sharded-Service │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Monitors │ Sharding-Monitor │ ├─────────────────────┼──────────────────────────────────────────────┤ ... 
├─────────────────────┼──────────────────────────────────────────────┤ │ Current Connections │ 5 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Total Connections │ 27 │ ├─────────────────────┼──────────────────────────────────────────────┤ │ Max Connections │ 5 │ shell> maxctrl show service Sharded-Service ┌─────────────────────┬──────────────────────────────────────────────────────┐ │ Service │ Sharded-Service │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Source │ /var/lib/maxscale/maxscale.cnf.d/Sharded-Service.cnf │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Router │ schemarouter │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ State │ Started │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Started At │ 3/18/2024, 1:52:30 PM │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Users Loaded At │ 3/18/2024, 1:52:30 PM │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Current Connections │ 4 │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Total Connections │ 84590 │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Max Connections │ 5 │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Cluster │ │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Servers │ shard2 │ │ │ shard3 │ │ │ shard4 │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Services │ │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Filters │ │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Parameters │ { │ │ │ "auth_all_servers": true, │ │ │ "connection_keepalive": "300000ms", │ │ │ "debug": true, │ │ │ "disable_sescmd_history": false, │ │ │ "enable_root_user": false, │ │ │ "force_connection_keepalive": false, │ │ │ "idle_session_pool_time": "-1ms", │ │ │ "ignore_tables": [], │ │ │ "ignore_tables_regex": null, │ │ │ "localhost_match_wildcard_host": true, │ │ │ "log_auth_warnings": true, │ │ │ "log_debug": true, │ │ │ "log_info": false, │ │ │ "log_notice": false, │ │ │ "log_warning": false, │ │ │ "max_connections": 0, │ │ │ "max_sescmd_history": 50, │ │ │ "max_staleness": "150000ms", │ │ │ "multiplex_timeout": "60000ms", │ │ │ "net_write_timeout": "0ms", │ │ │ "password": "*****", │ │ │ "prune_sescmd_history": true, │ │ │ "rank": "primary", │ │ │ "refresh_databases": false, │ │ │ "refresh_interval": "10000ms", │ │ │ "retain_last_statements": -1, │ │ │ "router": "schemarouter", │ │ │ "session_trace": false, │ │ │ "strip_db_esc": true, │ │ │ "type": "service", │ │ │ "user": "maxscale_admin", │ │ │ "user_accounts_file": null, │ │ │ "user_accounts_file_usage": "add_when_load_ok", │ │ │ "version_string": null, │ │ │ "wait_timeout": "0ms" │ │ │ } │ ├─────────────────────┼──────────────────────────────────────────────────────┤ │ Router Diagnostics │ { │ │ │ "average_session": 0.028822357131634554, │ │ │ "longest_sescmd_chain": 4, │ │ │ "longest_session": 50, │ │ │ "queries": 761134, │ │ │ "sescmd_percentage": 44.44342257736483, │ │ │ "shard_map_hits": 84356, │ │ │ "shard_map_misses": 5, │ │ │ "shard_map_stale": 229, │ │ │ "shard_map_updates": 216, │ │ │ "shortest_session": 0, │ │ │ "times_sescmd_limit_exceeded": 0 │ │ │ } │ 
└─────────────────────┴──────────────────────────────────────────────────────┘

See also MaxScale SchemaRouter Router diagnostics.

shell> maxctrl show monitor Sharding-Monitor ┌─────────────────────┬──────────────────────────────────────────────────────────┐ │ Monitor │ Sharding-Monitor │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Source │ /etc/maxscale.cnf │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Module │ galeramon │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ State │ Running │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Servers │ shard1 │ │ │ shard2 │ │ │ shard3 │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Parameters │ { │ │ │ "available_when_donor": false, │ │ │ "backend_connect_attempts": 1, │ │ │ "backend_connect_timeout": "3000ms", │ │ │ "backend_read_timeout": "3000ms", │ │ │ "backend_write_timeout": "3000ms", │ │ │ "disable_master_failback": false, │ │ │ "disable_master_role_setting": false, │ │ │ "disk_space_check_interval": "0ms", │ │ │ "disk_space_threshold": null, │ │ │ "events": "all,master_down,master_up,...,new_donor", │ │ │ "journal_max_age": "28800000ms", │ │ │ "module": "galeramon", │ │ │ "monitor_interval": "1000ms", │ │ │ "password": "*****", │ │ │ "root_node_as_master": false, │ │ │ "script": null, │ │ │ "script_timeout": "90000ms", │ │ │ "set_donor_nodes": false, │ │ │ "type": "monitor", │ │ │ "use_priority": false, │ │ │ "user": "maxscale_monitor" │ │ │ } │ ├─────────────────────┼──────────────────────────────────────────────────────────┤ │ Monitor Diagnostics │ { │ │ │ "disable_master_failback": false, │ │ │ "disable_master_role_setting": false, │ │ │ "root_node_as_master": false, │ │ │ "server_info": [ │ │ │ { │ │ │ "gtid_binlog_pos": "0-3363-26014", │ │ │ "gtid_current_pos": "0-3363-26014", │ │ │ "master_id": 0, │ │ │ "name": "shard1", │ │ │ "read_only": false, │ │ │ "server_id": 3363 │ │ │ }, │ │ │ { │ │ │ "gtid_binlog_pos": "0-3364-240612", │ │ │ "gtid_current_pos": "0-3364-240612", │ │ │ "master_id": 0, │ │ │ "name": "shard2", │ │ │ "read_only": false, │ │ │ "server_id": 3364 │ │ │ }, │ │ │ { │ │ │ "gtid_binlog_pos": "0-3365-289873", │ │ │ "gtid_current_pos": "0-3365-289873", │ │ │ "master_id": 0, │ │ │ "name": "shard3", │ │ │ "read_only": false, │ │ │ "server_id": 3365 │ │ │ } │ │ │ ], │ │ │ "set_donor_nodes": false, │ │ │ "use_priority": false │ │ │ } │ └─────────────────────┴──────────────────────────────────────────────────────────┘ shell> maxctrl list sessions; ┌───────┬──────┬──────────────┬───────────────────────┬───────┬─────────────────┬────────┬──────────────┐ │ Id │ User │ Host │ Connected │ Idle │ Service │ Memory │ I/O-Activity │ ├───────┼──────┼──────────────┼───────────────────────┼───────┼─────────────────┼────────┼──────────────┤ │ 87240 │ app │ 10.139.158.1 │ 3/18/2024, 2:33:54 PM │ 0 │ Sharded-Service │ 68644 │ 33 │ ├───────┼──────┼──────────────┼───────────────────────┼───────┼─────────────────┼────────┼──────────────┤ │ 72654 │ app │ 10.139.158.1 │ 3/18/2024, 2:25:27 PM │ 506.3 │ Sharded-Service │ 199328 │ 0 │ ├───────┼──────┼──────────────┼───────────────────────┼───────┼─────────────────┼────────┼──────────────┤ │ 72364 │ app │ 10.139.158.1 │ 3/18/2024, 2:25:18 PM │ 516 │ Sharded-Service │ 199328 │ 0 │ ├───────┼──────┼──────────────┼───────────────────────┼───────┼─────────────────┼────────┼──────────────┤ │ 72530 │ app │ 10.139.158.1 │ 3/18/2024, 2:25:23 PM │ 510.5 │ Sharded-Service │ 199328 │ 0 │ 
└───────┴──────┴──────────────┴───────────────────────┴───────┴─────────────────┴────────┴──────────────┘ shell> maxctrl show session 26 ┌───────────────────────┬───────────────────────────────────────┐ │ Id │ 26 │ ├───────────────────────┼───────────────────────────────────────┤ │ Service │ Sharded-Service │ ├───────────────────────┼───────────────────────────────────────┤ │ State │ Session started │ ├───────────────────────┼───────────────────────────────────────┤ │ User │ app │ ├───────────────────────┼───────────────────────────────────────┤ │ Host │ 10.139.158.1 │ ├───────────────────────┼───────────────────────────────────────┤ │ Port │ 42854 │ ├───────────────────────┼───────────────────────────────────────┤ │ Database │ │ ├───────────────────────┼───────────────────────────────────────┤ │ Connected │ 2/4/2024, 9:31:12 AM │ ├───────────────────────┼───────────────────────────────────────┤ │ Idle │ 610.4 │ ├───────────────────────┼───────────────────────────────────────┤ │ Parameters │ { │ │ │ "log_error": false, │ │ │ "log_info": false, │ │ │ "log_notice": false, │ │ │ "log_warning": false │ │ │ } │ ├───────────────────────┼───────────────────────────────────────┤ │ Client TLS Cipher │ │ ├───────────────────────┼───────────────────────────────────────┤ │ Connection attributes │ { │ │ │ "_client_name": "libmariadb", │ │ │ "_client_version": "3.3.8", │ │ │ "_os": "Linux", │ │ │ "_pid": "251037", │ │ │ "_platform": "x86_64", │ │ │ "_server_host": "10.139.158.211", │ │ │ "program_name": "mysql" │ │ │ } │ ├───────────────────────┼───────────────────────────────────────┤ │ Connections │ shard1 │ │ │ shard2 │ │ │ shard3 │ ├───────────────────────┼───────────────────────────────────────┤ │ Connection IDs │ 666 │ │ │ 139 │ │ │ 138 │ ├───────────────────────┼───────────────────────────────────────┤ │ Queries │ │ ├───────────────────────┼───────────────────────────────────────┤ │ Log │ │ ├───────────────────────┼───────────────────────────────────────┤ │ Memory │ { │ │ │ "connection_buffers": { │ │ │ "backends": { │ │ │ "shard1": { │ │ │ "misc": 678, │ │ │ "readq": 65536, │ │ │ "total": 66214, │ │ │ "writeq": 0 │ │ │ }, │ │ │ "shard2": { │ │ │ "misc": 662, │ │ │ "readq": 0, │ │ │ "total": 662, │ │ │ "writeq": 0 │ │ │ }, │ │ │ "shard3": { │ │ │ "misc": 678, │ │ │ "readq": 65536, │ │ │ "total": 66214, │ │ │ "writeq": 0 │ │ │ } │ │ │ }, │ │ │ "client": { │ │ │ "misc": 654, │ │ │ "readq": 65536, │ │ │ "total": 66190, │ │ │ "writeq": 0 │ │ │ }, │ │ │ "total": 199280 │ │ │ }, │ │ │ "exec_metadata": 0, │ │ │ "last_queries": 0, │ │ │ "sescmd_history": 48, │ │ │ "total": 199328, │ │ │ "variables": 0 │ │ │ } │ ├───────────────────────┼───────────────────────────────────────┤ │ I/O Activity │ 0 │ └───────────────────────┴───────────────────────────────────────┘ shell> maxctrl show listener Sharded-Service-Listener ┌────────────┬───────────────────────────────────────────┐ │ Name │ Sharded-Service-Listener │ ├────────────┼───────────────────────────────────────────┤ │ Source │ /etc/maxscale.cnf │ ├────────────┼───────────────────────────────────────────┤ │ Service │ Sharded-Service │ ├────────────┼───────────────────────────────────────────┤ │ Parameters │ { │ │ │ "MariaDBProtocol": { │ │ │ "allow_replication": true │ │ │ }, │ │ │ "address": "::", │ │ │ "authenticator": null, │ │ │ "authenticator_options": null, │ │ │ "connection_init_sql_file": null, │ │ │ "connection_metadata": [ │ │ │ "character_set_client=auto", │ │ │ "character_set_connection=auto", │ │ │ "character_set_results=auto", │ │ │ 
"max_allowed_packet=auto", │ │ │ "system_time_zone=auto", │ │ │ "time_zone=auto", │ │ │ "tx_isolation=auto" │ │ │ ], │ │ │ "port": 3306, │ │ │ "protocol": "MariaDBProtocol", │ │ │ "proxy_protocol_networks": null, │ │ │ "service": "Sharded-Service", │ │ │ "socket": null, │ │ │ "sql_mode": "default", │ │ │ "ssl": false, │ │ │ "ssl_ca": null, │ │ │ "ssl_cert": null, │ │ │ "ssl_cert_verify_depth": 9, │ │ │ "ssl_cipher": null, │ │ │ "ssl_crl": null, │ │ │ "ssl_key": null, │ │ │ "ssl_verify_peer_certificate": false, │ │ │ "ssl_verify_peer_host": false, │ │ │ "ssl_version": "MAX", │ │ │ "type": "listener", │ │ │ "user_mapping_file": null │ │ │ } │ └────────────┴───────────────────────────────────────────┘ shell> maxctrl show module schemarouter ┌─────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │ Module │ schemarouter │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Type │ Router │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Version │ V1.0.0 │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Maturity │ Beta │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Description │ A database sharding router for simple sharding │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Parameters │ [ │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Enable debug mode", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "debug", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": [], │ │ │ "description": "List of tables to ignore when checking for duplicates", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "ignore_tables", │ │ │ "type": "stringlist" │ │ │ }, │ │ │ { │ │ │ "description": "Regex of tables to ignore when checking for duplicates", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "ignore_tables_regex", │ │ │ "type": "regex" │ │ │ }, │ │ │ { │ │ │ "default_value": "150000ms", │ │ │ "description": "Maximum allowed staleness of a database map entry before clients block and wait for an update", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "max_staleness", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Refresh database mapping information", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "refresh_databases", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": "300000ms", │ │ │ "description": "How often to refresh the database mapping information", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "refresh_interval", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Retrieve users from all backend servers instead of only one", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "auth_all_servers", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": "300000ms", │ │ │ "description": "How ofted idle connections are pinged", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": 
"connection_keepalive", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "deprecated": true, │ │ │ "description": "Alias for 'wait_timeout'", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "connection_timeout", │ │ │ "type": "duration" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Disable session command history", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "disable_sescmd_history", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Allow the root user to connect to this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "enable_root_user", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Ping connections unconditionally", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "force_connection_keepalive", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": "-1ms", │ │ │ "description": "Put connections into pool after session has been idle for this long", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "idle_session_pool_time", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "default_value": true, │ │ │ "description": "Match localhost to wildcard host", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "localhost_match_wildcard_host", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": true, │ │ │ "description": "Log a warning when client authentication fails", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_auth_warnings", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Log debug messages for this service (debug builds only)", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_debug", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Log info messages for this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_info", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Log notice messages for this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_notice", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Log warning messages for this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "log_warning", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": 0, │ │ │ "description": "Maximum number of connections", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "max_connections", │ │ │ "type": "count" │ │ │ }, │ │ │ { │ │ │ "default_value": 50, │ │ │ "description": "Session command history size", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "max_sescmd_history", │ │ │ "type": "count" │ │ │ }, │ │ │ { │ │ │ "default_value": "60000ms", │ │ │ "description": "How long a session can wait for a connection to become available", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "multiplex_timeout", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "default_value": "0ms", │ │ │ "description": "Network write timeout", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "net_write_timeout", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ }, │ │ │ { │ │ │ "description": "Password for the user used to retrieve database users", │ │ │ "mandatory": true, │ │ │ 
"modifiable": true, │ │ │ "name": "password", │ │ │ "type": "password" │ │ │ }, │ │ │ { │ │ │ "default_value": true, │ │ │ "description": "Prune old session command history if the limit is exceeded", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "prune_sescmd_history", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": "primary", │ │ │ "description": "Service rank", │ │ │ "enum_values": [ │ │ │ "primary", │ │ │ "secondary" │ │ │ ], │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "rank", │ │ │ "type": "enum" │ │ │ }, │ │ │ { │ │ │ "default_value": -1, │ │ │ "description": "Number of statements kept in memory", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "retain_last_statements", │ │ │ "type": "int" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "description": "Enable session tracing for this service", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "session_trace", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": false, │ │ │ "deprecated": true, │ │ │ "description": "Track session state using server responses", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "session_track_trx_state", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "default_value": true, │ │ │ "deprecated": true, │ │ │ "description": "Strip escape characters from database names", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "strip_db_esc", │ │ │ "type": "bool" │ │ │ }, │ │ │ { │ │ │ "description": "Username used to retrieve database users", │ │ │ "mandatory": true, │ │ │ "modifiable": true, │ │ │ "name": "user", │ │ │ "type": "string" │ │ │ }, │ │ │ { │ │ │ "description": "Load additional users from a file", │ │ │ "mandatory": false, │ │ │ "modifiable": false, │ │ │ "name": "user_accounts_file", │ │ │ "type": "path" │ │ │ }, │ │ │ { │ │ │ "default_value": "add_when_load_ok", │ │ │ "description": "When and how the user accounts file is used", │ │ │ "enum_values": [ │ │ │ "add_when_load_ok", │ │ │ "file_only_always" │ │ │ ], │ │ │ "mandatory": false, │ │ │ "modifiable": false, │ │ │ "name": "user_accounts_file_usage", │ │ │ "type": "enum" │ │ │ }, │ │ │ { │ │ │ "description": "Custom version string to use", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "version_string", │ │ │ "type": "string" │ │ │ }, │ │ │ { │ │ │ "default_value": "0ms", │ │ │ "description": "Connection idle timeout", │ │ │ "mandatory": false, │ │ │ "modifiable": true, │ │ │ "name": "wait_timeout", │ │ │ "type": "duration", │ │ │ "unit": "ms" │ │ │ } │ │ │ ] │ ├─────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ Commands │ [ │ │ │ { │ │ │ "attributes": { │ │ │ "arg_max": 1, │ │ │ "arg_min": 1, │ │ │ "description": "Clear schemarouter shard map cache", │ │ │ "method": "POST", │ │ │ "parameters": [ │ │ │ { │ │ │ "description": "The schemarouter service", │ │ │ "required": true, │ │ │ "type": "SERVICE" │ │ │ } │ │ │ ] │ │ │ }, │ │ │ "id": "clear", │ │ │ "links": { │ │ │ "self": "http://127.0.0.1:8989/v1/modules/schemarouter/clear/" │ │ │ }, │ │ │ "type": "module_command" │ │ │ }, │ │ │ { │ │ │ "attributes": { │ │ │ "arg_max": 1, │ │ │ "arg_min": 1, │ │ │ "description": "Invalidate schemarouter shard map cache", │ │ │ "method": "POST", │ │ │ "parameters": [ │ │ │ { │ │ │ "description": "The schemarouter service", │ │ │ "required": true, │ │ │ "type": "SERVICE" │ │ │ } │ │ │ ] │ │ │ }, │ │ │ "id": "invalidate", │ │ │ 
"links": { │ │ │ "self": "http://127.0.0.1:8989/v1/modules/schemarouter/invalidate/" │ │ │ }, │ │ │ "type": "module_command" │ │ │ } │ │ │ ] │ └─────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ shell> maxctrl show commands schemarouter ┌────────────┬────────────┬──────────────────────────┐ │ Command │ Parameters │ Descriptions │ ├────────────┼────────────┼──────────────────────────┤ │ clear │ SERVICE │ The schemarouter service │ ├────────────┼────────────┼──────────────────────────┤ │ invalidate │ SERVICE │ The schemarouter service │ └────────────┴────────────┴──────────────────────────┘ shell> maxctrl show dbusers Sharded-Service ┌───────────────────────┬────────────────┬───────────────────────┬───────┬───────┬────────┬───────┬──────┐ │ User │ Host │ Plugin │ TLS │ Super │ Global │ Proxy │ Role │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ PUBLIC │ │ │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ app │ 10.139.158.% │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ app_role │ │ │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ mariadb.sys │ localhost │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_admin │ 10.139.158.210 │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_admin │ 10.139.158.211 │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_admin_role │ │ │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_monitor │ 10.139.158.210 │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_monitor │ 10.139.158.211 │ mysql_native_password │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ maxscale_monitor_role │ │ │ false │ false │ false │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ mysql │ localhost │ mysql_native_password │ false │ true │ true │ false │ │ ├───────────────────────┼────────────────┼───────────────────────┼───────┼───────┼────────┼───────┼──────┤ │ root │ localhost │ mysql_native_password │ false │ true │ true │ false │ │ └───────────────────────┴────────────────┴───────────────────────┴───────┴───────┴────────┴───────┴──────┘ shell> maxctrl show commands mariadbmon ┌───────────────────────────┬─────────────────────────────┬───────────────────────────────────────────────────────────────────────────────┐ │ Command │ Parameters │ Descriptions │ 
├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ switchover │ MONITOR, [SERVER], [SERVER] │ Monitor name, New primary (optional), Current primary (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ switchover-force │ MONITOR, [SERVER], [SERVER] │ Monitor name, New primary (optional), Current primary (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-switchover │ MONITOR, [SERVER], [SERVER] │ Monitor name, New primary (optional), Current primary (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ failover │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-failover │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ rejoin │ MONITOR, SERVER │ Monitor name, Joining server │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-rejoin │ MONITOR, SERVER │ Monitor name, Joining server │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ reset-replication │ MONITOR, [SERVER] │ Monitor name, Primary server (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-reset-replication │ MONITOR, [SERVER] │ Monitor name, Primary server (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ release-locks │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-release-locks │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ fetch-cmd-result │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ cancel-cmd │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-add-node │ MONITOR, STRING, STRING │ Monitor name, Hostname/IP of node to add to ColumnStore cluster, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-remove-node │ MONITOR, STRING, STRING │ Monitor name, Hostname/IP of node to remove from ColumnStore cluster, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ cs-get-status │ MONITOR │ Monitor name │ 
├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-get-status │ MONITOR │ Monitor name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-start-cluster │ MONITOR, STRING │ Monitor name, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-stop-cluster │ MONITOR, STRING │ Monitor name, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-set-readonly │ MONITOR, STRING │ Monitor name, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-cs-set-readwrite │ MONITOR, STRING │ Monitor name, Timeout │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-rebuild-server │ MONITOR, SERVER, [SERVER] │ Monitor name, Target server, Source server (optional) │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-create-backup │ MONITOR, SERVER, STRING │ Monitor name, Source server, Backup name │ ├───────────────────────────┼─────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤ │ async-restore-from-backup │ MONITOR, SERVER, STRING │ Monitor name, Target server, Backup name │ └───────────────────────────┴─────────────────────────────┴───────────────────────────────────────────────────────────────────────────────┘

dbstat for MariaDB (and MySQL)

Shinguz - Thu, 2024-03-14 15:36

An idea that I have been thinking about for a long time and have now, thanks to a customer, finally tackled is dbstat for MariaDB/MySQL. The idea is based on sar/sysstat by Sebastien Godard:

sar - Collect, report, or save system activity information.

and Oracle Statspack:

Statspack is a performance tuning tool ... to quickly gather detailed analysis of the performance of that database instance.

Functionality of dbstat

Although we have had the performance schema for some time, it does not cover some points that we see as a problem in practice and that are requested by customers:

  • The table_size module collects data on the growth of tables. This allows statements to be made about the growth of individual tables, databases, future MariaDB Catalogs or the entire instance. This is interesting for users who are using multi-tenant systems or are otherwise struggling with uncontrolled growth.
  • The processlist module takes a snapshot of the process list at regular intervals and saves it. This information is useful for post-mortem analyses if the user was too slow to save his process list or to understand how a problem has built up.
  • The problem is often caused by long-running transactions, row locks or metadata locks. These are recorded and saved by the trx_and_lck and metadata_lock modules. This means that we can see problems that we did not even notice before or we can see what led to the problem after the accident (analogous to a tachograph in a vehicle).
  • Another question that we sometimes encounter in practice is: When was which database variable changed and what did it look like before? This is covered by the global_variables module. Unfortunately, it is not possible to find out who changed the variable or why. Operational processes are required for this.
  • The last module, global_status, actually covers what sar/sysstat does. It collects the values from SHOW GLOBAL STATUS; and saves them for later analysis purposes or to simply create graphs.

How does dbstat work

dbstat uses the database Event Scheduler as a scheduler. For MariaDB, it must first be switched on (event_scheduler = ON); with MySQL it is already switched on by default. The Event Scheduler has the advantage that the jobs can be scheduled at a finer granularity, for example every 10 s, which would not be possible with crontab.
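Switching the Event Scheduler on for MariaDB looks like this, at runtime and persistently in the configuration file:

SQL> SET GLOBAL event_scheduler = ON;

# /etc/my.cnf (or an included file)
[mariadb]
event_scheduler = ON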

The Event Scheduler then executes SQL/PSM code to collect the data on the one hand and to delete the data on the other, so that the dbstat database does not grow immeasurably.
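To illustrate the principle only, here is a sketch of such a pair of events. This is NOT the actual dbstat code, and the column names of the global_status table are an assumption:

SQL> CREATE EVENT IF NOT EXISTS dbstat.collect_global_status
       ON SCHEDULE EVERY 1 MINUTE
       DO INSERT INTO dbstat.global_status (ts, variable_name, variable_value)
          SELECT NOW(), variable_name, variable_value
            FROM information_schema.global_status;

SQL> CREATE EVENT IF NOT EXISTS dbstat.purge_global_status
       ON SCHEDULE EVERY 1 MINUTE
       DO DELETE FROM dbstat.global_status
          WHERE ts < NOW() - INTERVAL 30 DAY
          LIMIT 1000;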

The following jobs are currently planned:

Module           | Collect      | Delete                   | Quantity structure                   | Remarks
table_size       | 1/d at 02:04 | 12/h, 1000 rows, > 31 d  | 1000 tab × 31 d = 31k rows           | Should work up to 288k tables.
processlist      | 1/min        | 1/min, 1000 rows, > 7 d  | 1000 con × 1440 min × 7 d = 10M rows | Should work up to 1000 concurrent connections.
trx_and_lck      | 1/min        | 1/min, 1000 rows, > 7 d  | 100 lck × 1440 min × 7 d = 1M rows   | Depends very much on the application.
metadata_lock    | 1/min        | 12/h, 1000 rows, > 30 d  | 100 mdl × 1440 × 30 d = 4M rows      | Depends very much on the application.
global_variables | 1/min        | never                    | 1000 rows                            | Normally this table should not grow.
global_status    | 1/min        | 1/min, 1000 rows, > 30 d | 1000 rows × 1440 × 30 d = 40M rows   | Can become large?
How to install dbstat

dbstat can be downloaded from Github and is licensed under GPLv2.

The installation is simple: First execute the SQL file create_user_and_db.sql. Then execute the corresponding create_*.sql files for the respective modules in the dbstat database. There are currently no direct dependencies between the modules. If you want to use a different user or a different database than dbstat, you have to take care of this yourself.
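A hedged sketch of the installation; the exact file names of the module scripts are assumptions and may differ:

shell> mariadb --user=root --password < create_user_and_db.sql
shell> mariadb --user=root --password dbstat < create_table_size.sql
shell> mariadb --user=root --password dbstat < create_processlist.sql
shell> mariadb --user=root --password dbstat < create_global_status.sql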

Query dbstat

Some possible queries on the data have already been prepared. They can be found in the query_*.sql files. Here are a few examples:

table_size SELECT `table_schema`, `table_name`, `ts`, `table_rows`, `data_length`, `index_length` FROM `table_size` WHERE `table_catalog` = 'def' AND `table_schema` = 'dbstat' AND `table_name` = 'table_size' ORDER BY `ts` ASC ; +--------------+------------+---------------------+------------+-------------+--------------+ | table_schema | table_name | ts | table_rows | data_length | index_length | +--------------+------------+---------------------+------------+-------------+--------------+ | dbstat | table_size | 2024-03-09 20:01:00 | 0 | 16384 | 16384 | | dbstat | table_size | 2024-03-10 17:26:33 | 310 | 65536 | 16384 | | dbstat | table_size | 2024-03-11 08:28:12 | 622 | 114688 | 49152 | | dbstat | table_size | 2024-03-12 08:02:38 | 934 | 114688 | 49152 | | dbstat | table_size | 2024-03-13 08:08:55 | 1247 | 278528 | 81920 | +--------------+------------+---------------------+------------+-------------+--------------+
processlist SELECT connection_id, ts, time, state, SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64) AS query FROM processlist WHERE command != 'Sleep' AND connection_id = @connection_id ORDER BY ts ASC LIMIT 5 ; +---------------+---------------------+---------+---------------------------------+---------------------------------------------+ | connection_id | ts | time | state | query | +---------------+---------------------+---------+---------------------------------+---------------------------------------------+ | 14956 | 2024-03-09 20:21:12 | 13.042 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 | | 14956 | 2024-03-09 20:22:12 | 73.045 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 | | 14956 | 2024-03-09 20:23:12 | 133.044 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 | | 14956 | 2024-03-09 20:24:12 | 193.044 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 | | 14956 | 2024-03-09 20:25:12 | 253.041 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 | +---------------+---------------------+---------+---------------------------------+---------------------------------------------+
trx_and_lck SELECT * FROM trx_and_lck\G *************************** 1. row *************************** machine_name: connection_id: 14815 trx_id: 269766 ts: 2024-03-09 20:05:57 user: root host: localhost db: test command: Query time: 41.000 running_since: 2024-03-09 20:05:16 state: Statistics info: select * from test where id = 6 for update trx_state: LOCK WAIT trx_started: 2024-03-09 20:05:15 trx_requested_lock_id: 269766:821:5:7 trx_tables_in_use: 1 trx_tables_locked: 1 trx_lock_structs: 2 trx_rows_locked: 1 trx_rows_modified: 0 lock_mode: X lock_type: RECORD lock_table_schema: test lock_table_name: test lock_index: PRIMARY lock_space: 821 lock_page: 5 lock_rec: 7 lock_data: 6 *************************** 2. row *************************** machine_name: connection_id: 14817 trx_id: 269760 ts: 2024-03-09 20:05:57 user: root host: localhost db: test command: Sleep time: 60.000 running_since: 2024-03-09 20:04:57 state: info: trx_state: RUNNING trx_started: 2024-03-09 20:04:56 trx_requested_lock_id: NULL trx_tables_in_use: 0 trx_tables_locked: 1 trx_lock_structs: 2 trx_rows_locked: 1 trx_rows_modified: 1 lock_mode: X lock_type: RECORD lock_table_schema: test lock_table_name: test lock_index: PRIMARY lock_space: 821 lock_page: 5 lock_rec: 7 lock_data: 6

dbstat for MariaDB (and MySQL)

Shinguz - Thu, 2024-03-14 15:36

An idea that I had been thinking about for a long time, and have now finally tackled thanks to a customer, is dbstat for MariaDB/MySQL. The idea is based on sar/sysstat by Sebastien Godard:

sar - Collect, report, or save system activity information.

and Oracle Statspack:

Statspack is a performance tuning tool ... to quickly gather detailed analysis of the performance of that database instance.

Functionality of dbstat

Although we have had the Performance Schema for quite some time now, it does not cover some points that we see as problems in practice and that customers ask for:

  • The table_size module collects data on the growth of tables. This makes it possible to track the growth of individual tables, databases, future MariaDB Catalogs or the entire instance. This is interesting for users who run multi-tenant systems or otherwise struggle with uncontrolled growth.
  • The processlist module takes a snapshot of the process list at regular intervals and saves it. This information is useful for post-mortem analyses when the user was too slow to capture the process list themselves, or to understand how a problem built up over time.
  • Problems are often caused by long-running transactions, row locks or metadata locks. These are recorded and saved by the trx_and_lck and metadata_lock modules. This lets us see problems that previously went unnoticed, or reconstruct what led up to a problem after the fact (analogous to a tachograph in a vehicle).
  • Another question we sometimes encounter in practice is: when was which database variable changed, and what was its previous value? This is covered by the global_variables module. Unfortunately, it is not possible to find out who changed the variable or why; operational processes are needed for that.
  • The last module, global_status, covers what sar/sysstat actually does: it collects the values of SHOW GLOBAL STATUS and saves them for later analysis or simply for creating graphs.

How does dbstat work

dbstat uses the database Event Scheduler as its scheduler. For MariaDB it must first be switched on (event_scheduler = ON); in MySQL it is already switched on by default. The Event Scheduler has the advantage that the jobs can be scheduled at a finer granularity, for example every 10 s, which would not be possible with the crontab.
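The Event Scheduler can be switched on at runtime; to keep the setting across a restart it should also be put into the configuration file (my.cnf):

SET GLOBAL event_scheduler = ON;
SHOW GLOBAL VARIABLES LIKE 'event_scheduler';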

The Event Scheduler then executes SQL/PSM code that collects the data on the one hand and deletes old data on the other, so that the dbstat database does not grow indefinitely.
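Purely as an illustration (this is not the actual dbstat code, and the event name and column list are assumptions), such a collecting event could look roughly like this:

-- Sketch only: event name and target column list are assumed, not taken from dbstat
CREATE EVENT IF NOT EXISTS `dbstat_processlist_collect`
ON SCHEDULE EVERY 1 MINUTE
DO
  INSERT INTO `processlist`
        (`machine_name`, `connection_id`, `ts`, `user`, `host`, `db`, `command`, `time`, `state`, `query`)
  SELECT @@hostname, `ID`, NOW(), `USER`, `HOST`, `DB`, `COMMAND`, `TIME`, `STATE`, `INFO`
    FROM `information_schema`.`PROCESSLIST`;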

The following jobs are currently planned:

Module           | Collect      | Delete                   | Quantity structure                     | Remarks
table_size       | 1/d at 02:04 | 12/h, 1000 rows, > 31 d  | 1000 tab × 31 d = 31k rows             | Should work up to 288k tables.
processlist      | 1/min        | 1/min, 1000 rows, > 7 d  | 1000 con × 1440 min × 7 d = 10M rows   | Should work up to 1000 concurrent connections.
trx_and_lck      | 1/min        | 1/min, 1000 rows, > 7 d  | 100 lck × 1440 min × 7 d = 1M rows     | Depends very much on the application.
metadata_lock    | 1/min        | 12/h, 1000 rows, > 30 d  | 100 mdl × 1440 min × 30 d = 4M rows    | Depends very much on the application.
global_variables | 1/min        | never                    | 1000 rows                              | Normally this table should not grow.
global_status    | 1/min        | 1/min, 1000 rows, > 30 d | 1000 rows × 1440 min × 30 d = 40M rows | Can become large?
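To give an idea of the Delete side, a purge job matching the pattern "1/min, 1000 rows, > 7 d" for the processlist module could be sketched as follows (again, the event name is illustrative, not the actual dbstat code):

-- Sketch only: deletes at most 1000 rows older than 7 days, once per minute
CREATE EVENT IF NOT EXISTS `dbstat_processlist_purge`
ON SCHEDULE EVERY 1 MINUTE
DO
  DELETE FROM `processlist`
   WHERE `ts` < NOW() - INTERVAL 7 DAY
   LIMIT 1000;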
How to install dbstat

dbstat can be downloaded from Github and is licensed under GPLv2.

The installation is simple: First execute the SQL file create_user_and_db.sql. Then execute the corresponding create_*.sql files for the respective modules in the dbstat database. There are currently no direct dependencies between the modules. If you want to use a different user or a different database than dbstat, you have to take care of this yourself.
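A minimal installation session in the mariadb client could look like this; apart from create_user_and_db.sql, the module file names below are placeholders for the respective create_*.sql files from the repository:

-- run as a user with sufficient privileges; module file names are placeholders
SOURCE create_user_and_db.sql;
USE dbstat;
SOURCE create_table_size.sql;
SOURCE create_processlist.sql;
SOURCE create_global_status.sql;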

Query dbstat

Some possible queries on the data have already been prepared. They can be found in the query_*.sql files. Here are a few examples:

table_size

SELECT `table_schema`, `table_name`, `ts`, `table_rows`, `data_length`, `index_length`
  FROM `table_size`
 WHERE `table_catalog` = 'def'
   AND `table_schema` = 'dbstat'
   AND `table_name` = 'table_size'
 ORDER BY `ts` ASC
;
+--------------+------------+---------------------+------------+-------------+--------------+
| table_schema | table_name | ts                  | table_rows | data_length | index_length |
+--------------+------------+---------------------+------------+-------------+--------------+
| dbstat       | table_size | 2024-03-09 20:01:00 |          0 |       16384 |        16384 |
| dbstat       | table_size | 2024-03-10 17:26:33 |        310 |       65536 |        16384 |
| dbstat       | table_size | 2024-03-11 08:28:12 |        622 |      114688 |        49152 |
| dbstat       | table_size | 2024-03-12 08:02:38 |        934 |      114688 |        49152 |
| dbstat       | table_size | 2024-03-13 08:08:55 |       1247 |      278528 |        81920 |
+--------------+------------+---------------------+------------+-------------+--------------+
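A possible extension of this query (not one of the prepared query_*.sql files, just a sketch) is to let the database calculate the growth between two snapshots, for example with a window function (MariaDB 10.2 or newer):

SELECT `ts`
     , `data_length` + `index_length` AS total_bytes
     , `data_length` + `index_length`
       - LAG(`data_length` + `index_length`) OVER (ORDER BY `ts`) AS growth_bytes
  FROM `table_size`
 WHERE `table_schema` = 'dbstat'
   AND `table_name` = 'table_size'
 ORDER BY `ts` ASC
;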
processlist

SELECT connection_id, ts, time, state
     , SUBSTR(REGEXP_REPLACE(REPLACE(query, "\n", ' '), '\ +', ' '), 1, 64) AS query
  FROM processlist
 WHERE command != 'Sleep'
   AND connection_id = @connection_id
 ORDER BY ts ASC
 LIMIT 5
;
+---------------+---------------------+---------+---------------------------------+---------------------------------------------+
| connection_id | ts                  | time    | state                           | query                                        |
+---------------+---------------------+---------+---------------------------------+---------------------------------------------+
|         14956 | 2024-03-09 20:21:12 |  13.042 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 |
|         14956 | 2024-03-09 20:22:12 |  73.045 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 |
|         14956 | 2024-03-09 20:23:12 | 133.044 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 |
|         14956 | 2024-03-09 20:24:12 | 193.044 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 |
|         14956 | 2024-03-09 20:25:12 | 253.041 | Waiting for table metadata lock | update test set data = 'bla' where id = 100 |
+---------------+---------------------+---------+---------------------------------+---------------------------------------------+
trx_and_lck

SELECT * FROM trx_and_lck\G
*************************** 1. row ***************************
         machine_name: 
        connection_id: 14815
               trx_id: 269766
                   ts: 2024-03-09 20:05:57
                 user: root
                 host: localhost
                   db: test
              command: Query
                 time: 41.000
        running_since: 2024-03-09 20:05:16
                state: Statistics
                 info: select * from test where id = 6 for update
            trx_state: LOCK WAIT
          trx_started: 2024-03-09 20:05:15
trx_requested_lock_id: 269766:821:5:7
    trx_tables_in_use: 1
    trx_tables_locked: 1
     trx_lock_structs: 2
      trx_rows_locked: 1
    trx_rows_modified: 0
            lock_mode: X
            lock_type: RECORD
    lock_table_schema: test
      lock_table_name: test
           lock_index: PRIMARY
           lock_space: 821
            lock_page: 5
             lock_rec: 7
            lock_data: 6
*************************** 2. row ***************************
         machine_name: 
        connection_id: 14817
               trx_id: 269760
                   ts: 2024-03-09 20:05:57
                 user: root
                 host: localhost
                   db: test
              command: Sleep
                 time: 60.000
        running_since: 2024-03-09 20:04:57
                state: 
                 info: 
            trx_state: RUNNING
          trx_started: 2024-03-09 20:04:56
trx_requested_lock_id: NULL
    trx_tables_in_use: 0
    trx_tables_locked: 1
     trx_lock_structs: 2
      trx_rows_locked: 1
    trx_rows_modified: 1
            lock_mode: X
            lock_type: RECORD
    lock_table_schema: test
      lock_table_name: test
           lock_index: PRIMARY
           lock_space: 821
            lock_page: 5
             lock_rec: 7
            lock_data: 6
metadata_lock

SELECT lock_mode, ts, user, host, lock_type, table_schema, table_name, time, started, state, query
  FROM metadata_lock
 WHERE connection_id = 14347
 ORDER BY started DESC
 LIMIT 5
;
+-------------------------+---------------------+------+-----------+----------------------+--------------+------------+-------+---------------------+----------------+------------------------------------------------------+
| lock_mode               | ts                  | user | host      | lock_type            | table_schema | table_name | time  | started             | state          | query                                                |
+-------------------------+---------------------+------+-----------+----------------------+--------------+------------+-------+---------------------+----------------+------------------------------------------------------+
| MDL_SHARED_WRITE        | 2024-03-13 10:27:33 | root | localhost | Table metadata lock  | test         | test       | 1.000 | 2024-03-13 10:27:32 | Updating       | UPDATE test set data3 = MD5(id)                      |
| MDL_BACKUP_TRANS_DML    | 2024-03-13 10:27:33 | root | localhost | Backup lock          |              |            | 1.000 | 2024-03-13 10:27:32 | Updating       | UPDATE test set data3 = MD5(id)                      |
| MDL_BACKUP_ALTER_COPY   | 2024-03-13 10:22:33 | root | localhost | Backup lock          |              |            | 0.000 | 2024-03-13 10:22:33 | altering table | ALTER TABLE test DROP INDEX ts, ADD INDEX (ts, data) |
| MDL_SHARED_UPGRADABLE   | 2024-03-13 10:22:33 | root | localhost | Table metadata lock  | test         | test       | 0.000 | 2024-03-13 10:22:33 | altering table | ALTER TABLE test DROP INDEX ts, ADD INDEX (ts, data) |
| MDL_INTENTION_EXCLUSIVE | 2024-03-13 10:22:33 | root | localhost | Schema metadata lock | test         |            | 0.000 | 2024-03-13 10:22:33 | altering table | ALTER TABLE test DROP INDEX ts, ADD INDEX (ts, data) |
+-------------------------+---------------------+------+-----------+----------------------+--------------+------------+-------+---------------------+----------------+------------------------------------------------------+
global_variables

SELECT variable_name, COUNT(*) AS cnt
  FROM global_variables
 GROUP BY variable_name
HAVING COUNT(*) > 1
;
+-------------------------+-----+
| variable_name           | cnt |
+-------------------------+-----+
| innodb_buffer_pool_size |   7 |
+-------------------------+-----+

SELECT variable_name, ts, variable_value
  FROM global_variables
 WHERE variable_name = 'innodb_buffer_pool_size'
;
+-------------------------+---------------------+----------------+
| variable_name           | ts                  | variable_value |
+-------------------------+---------------------+----------------+
| innodb_buffer_pool_size | 2024-03-09 21:36:28 | 134217728      |
| innodb_buffer_pool_size | 2024-03-09 21:40:25 | 268435456      |
| innodb_buffer_pool_size | 2024-03-09 21:48:14 | 134217728      |
+-------------------------+---------------------+----------------+
global_status

SELECT s1.ts
     , s1.variable_value AS 'table_open_cache_misses'
     , s2.variable_value AS 'table_open_cache_hits'
  FROM global_status AS s1
  JOIN global_status AS s2 ON s1.ts = s2.ts
 WHERE s1.variable_name = 'table_open_cache_misses'
   AND s2.variable_name = 'table_open_cache_hits'
   AND s1.ts BETWEEN '2024-03-13 11:55:00' AND '2024-03-13 12:05:00'
 ORDER BY ts ASC
;
+---------------------+-------------------------+-----------------------+
| ts                  | table_open_cache_misses | table_open_cache_hits |
+---------------------+-------------------------+-----------------------+
| 2024-03-13 11:55:47 | 1001                    | 60711                 |
| 2024-03-13 11:56:47 | 1008                    | 61418                 |
| 2024-03-13 11:57:47 | 1015                    | 62125                 |
| 2024-03-13 11:58:47 | 1022                    | 62829                 |
| 2024-03-13 11:59:47 | 1029                    | 63533                 |
| 2024-03-13 12:00:47 | 1036                    | 64237                 |
| 2024-03-13 12:01:47 | 1043                    | 64944                 |
| 2024-03-13 12:02:47 | 1050                    | 65651                 |
| 2024-03-13 12:03:47 | 1057                    | 66355                 |
| 2024-03-13 12:04:47 | 1064                    | 67059                 |
+---------------------+-------------------------+-----------------------+
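Building on this last example, one could also let the database compute a derived metric directly, e.g. the (cumulative) table_open_cache hit ratio per sample. This is not one of the prepared queries, just a sketch:

SELECT s1.ts
     , ROUND(s2.variable_value / (s1.variable_value + s2.variable_value) * 100, 2) AS table_open_cache_hit_ratio_pct
  FROM global_status AS s1
  JOIN global_status AS s2 ON s1.ts = s2.ts
 WHERE s1.variable_name = 'table_open_cache_misses'
   AND s2.variable_name = 'table_open_cache_hits'
 ORDER BY s1.ts ASC
;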
Testing

We are currently rolling out dbstat on our test and production systems to see whether our assumptions regarding stability and the quantity-structure calculations are correct. Besides, using it ourselves is the best way to find out whether something is missing or the handling is impractical ("eat your own dog food").
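One simple way to check the quantity-structure assumptions on a running installation is to look at the actual size of the dbstat tables, for example:

SELECT `table_name`, `table_rows`, `data_length` + `index_length` AS total_bytes
  FROM `information_schema`.`TABLES`
 WHERE `table_schema` = 'dbstat'
 ORDER BY total_bytes DESC
;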

Sources
Taxonomy upgrade extras: performance, monitoring, performance monitoring, metadata lock, lock, locking, performance_schema
