FromDual TechFeed (en)

MariaDB/MySQL Environment MyEnv 2.1.0 has been released

Shinguz - Wed, 2024-02-28 17:22

FromDual is pleased to announce the release of version 2.1.0 of its popular MariaDB, Galera Cluster and MySQL multi-instance environment MyEnv.

The new MyEnv can be downloaded here. How to install MyEnv is described in the MyEnv Installation Guide.

In the unlikely case that you find a bug in MyEnv, please report it to the FromDual bug tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.1.x to 2.0

Please look at the MyEnv 2.0.0 Release Notes.

Upgrade from 2.0.x to 2.1.0

shell> cd ${HOME}/product
shell> tar xf /download/myenv-2.1.0.tar.gz
shell> rm -f myenv
shell> ln -s myenv-2.1.0 myenv
Plug-ins

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

shell> cd ${HOME}/product/myenv
shell> ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Upgrade of the instance directory structure

From MyEnv 1.0 to 2.0 the directory structure of instances has changed fundamentally. Nevertheless, MyEnv 2.0 works fine with MyEnv 1.0 directory structures.

Changes in MyEnv 2.1.0

MyEnv
  • Removed hard coded parts for running MyEnv under O/S user mariadb.
  • Function substitute_path was refactored.
  • Branch guessing improved.
  • Warnings and errors are in color now.
  • MyEnv log file is now touched to avoid problems with O/S user root.
  • O/S user mysql removed in start/stop script.
  • Checks for DB start improved.
  • /var/run replaced by the more modern location /run.
  • Should now be completely MariaDB compatible (mariadbd vs. mysqld).
  • Wrapper mysqld_safe was extended to mariadbd-safe.
  • Replaced getVersionFromMysqld by getVersionAndBranchFromDaemon and extended functionality of this function.
  • LD_LIBRARY_PATH was set to the wrong directory.
  • Reverted commit fcc93c5 from v2.0.3 related to CDPATH, which broke commands like cd log or cd etc.
  • Database mysql_innodb_cluster_metadata is hidden now.
  • Database #innodb_redo is suppressed now as well for MySQL 8.0, and hideschema is not added to every new instance any more to not overwrite the default.
  • Bug while stopping instance with missing my.cnf fixed.
  • Function getDistribution cleaned-up.
  • MySQL should now also be detected correctly from Ubuntu repository.
  • Function my_exec rewritten.
  • Debian GNU/Linux tag added for distros.
  • Function extractBranch improved to work on Debian and Ubuntu with distribution packages.
  • Oracle Linux is considered as well now.
  • Made scripts ready for new MariaDB behaviour.
  • my.cnf template adapted to newest knowledge.
  • Directory changed from /tmp to /var/tmp, code cleaned up and renewed, PID file code and message improved in stopInstance.
  • Distributions cleaned up and CloudLinux, Rocky Linux and AlmaLinux added as CentOS-compatible distros.

MyEnv Installer
  • Debian 10 and 11 do not support PHP 8.0 yet, fixed.
  • Unit file is copied now correctly.
  • MyEnv instance installation can now be automated.
  • Instance creation automation added.
  • my.cnf template together with installMyenv should now work without errors or warnings for MariaDB 10.5 - 11.2 and MySQL 8.0 - 8.3.
  • Command yum replaced by dnf.
  • Command apt-get comments replaced by apt.

MyEnv Utilities
  • Client utility adapted in *monitor scripts.
  • InnoDB cluster monitor added.
  • wsrep_last_committed was added in galera_monitor.sh.
  • AWR added, sharding stuff added, lock and trx analysis scripts added.
  • Memory analysis added, NUMA maps output made ready for new variables.
  • connect_maxout utility added.

For subscriptions of commercial use of MyEnv please get in contact with us.

Taxonomy upgrade extras: MyEnv, multi-instance, virtualization, consolidation, SaaS, Operations, release, mysqld_multi

We build a data warehouse from the General Query Log

Shinguz - Wed, 2024-01-31 16:41

The design of a data warehouse differs from relational design. Data warehouses are often designed according to the concept of the star schema.

When building a data warehouse, you usually work backwards from the questions to the data:

  • What questions should my data warehouse be able to answer?
  • How do I have to design my model so that my questions can be answered easily?
  • Where do I get the data to populate the model?
  • How do I fill my model with the data?

For training purposes, we have investigated an issue that arises from time to time with our support team: the system suddenly and unexpectedly starts to behave unusually, nobody has done anything and nobody knows why. An example from a customer last week: the system starts to become unstable at 3 pm, is then restarted hard and then stabilises again from 4 pm...

The easiest thing to do in such a case would be to quickly look at the database with the SHOW PROCESSLIST command and then it often becomes immediately clear where the problem lies. But customers often forget this or they are not fast enough. The General Query Log was already switched on for this customer, so this would be a great case for our General Query Log Data Warehouse!
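For reference, this is how the current activity can be inspected and the General Query Log switched on at runtime (a minimal sketch; the log file path is only an example):

SQL> SHOW FULL PROCESSLIST;

SQL> SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SQL> SET GLOBAL general_log = ON;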

What questions should my data warehouse be able to answer?

The generic question for this problem should be something like: "Who or what caused my system to behave abnormally?"

In technical terms, the question would be something like:

  • Who: Which user or account was on the database with how many connections at the time in question? What was unusual about it?
  • What: Which queries were running in which schema on the system at the time in question? Which of these queries were unusual?

What should my model look like?

We can already derive some facts and dimensions from the question:

  • User or account (user + host)
  • Time
  • Connections
  • Schema
  • Queries

And from this we derive the dimensions and the fact table of our star schema:
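As a rough textual sketch of the model, here is a minimal DDL version of the star schema; the table and column names are taken from the queries used later in this article, while the data types and the exact column sets are assumptions:

-- Minimal sketch of the star schema (types and exact column sets are assumptions)
CREATE TABLE date_dim       (date_id BIGINT UNSIGNED PRIMARY KEY, date DATE);
CREATE TABLE time_dim       (time_id BIGINT UNSIGNED PRIMARY KEY, time TIME);
CREATE TABLE connection_dim (connection_id BIGINT UNSIGNED PRIMARY KEY,
                             connection_number BIGINT UNSIGNED, user VARCHAR(128), hostname VARCHAR(128));
CREATE TABLE schema_dim     (schema_id BIGINT UNSIGNED PRIMARY KEY, schema_name VARCHAR(128));
CREATE TABLE statement_dim  (statement_id BIGINT UNSIGNED PRIMARY KEY,
                             command VARCHAR(32), statement_text TEXT);

CREATE TABLE query_fact (
  date_id       BIGINT UNSIGNED,  -- references date_dim
  time_id       BIGINT UNSIGNED,  -- references time_dim
  connection_id BIGINT UNSIGNED,  -- references connection_dim
  schema_id     BIGINT UNSIGNED,  -- references schema_dim
  statement_id  BIGINT UNSIGNED   -- references statement_dim
);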

Data source

Where the data comes from is relatively easy to answer in this case: the customer provides his General Query Logs, or the General Query Logs of our own systems can be used for testing purposes.

How is the model populated?

Technically, this is known as an ETL process (Extract-Transform-Load). In our case, we have built a General Query Log parser that reads the General Query Log, prepares the data accordingly and saves it in the model.
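To illustrate the load step, here is a hedged sketch of how a single parsed General Query Log line could end up in the model. The article does not show the parser itself, so the surrogate key values and the insert-or-ignore pattern below are pure assumptions:

-- one parsed log line: connection 211, user brman@localhost,
-- schema brman_catalog, command Query, statement 'SELECT ...'
INSERT IGNORE INTO date_dim (date_id, date) VALUES (20190802, '2019-08-02');
INSERT IGNORE INTO time_dim (time_id, time) VALUES (1759, '17:59:00');
INSERT IGNORE INTO connection_dim (connection_id, connection_number, user, hostname)
VALUES (1, 211, 'brman', 'localhost');
INSERT IGNORE INTO schema_dim (schema_id, schema_name) VALUES (1, 'brman_catalog');
INSERT IGNORE INTO statement_dim (statement_id, command, statement_text)
VALUES (1, 'Query', 'SELECT ...');

INSERT INTO query_fact (date_id, time_id, connection_id, schema_id, statement_id)
VALUES (20190802, 1759, 1, 1, 1);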

Checking the model

And then we come to checking the model. We used test data from one of our systems for this:

  • Which user was on the system at the time in question?
  • Which user had how many connections open at the time in question?

SELECT td.time, cd.user, COUNT(*) AS count
  FROM connection_dim cd
  JOIN query_fact AS qf ON qf.connection_id = cd.connection_id
  JOIN time_dim AS td ON td.time_id = qf.time_id
  JOIN date_dim AS dd ON dd.date_id = qf.date_id
 WHERE td.time BETWEEN '17:00' AND '18:30'
   AND dd.date = '2019-08-02'
 GROUP BY td.time, cd.user
 ORDER BY td.time ASC, cd.user
;
+----------+---------------+-------+
| time     | user          | count |
+----------+---------------+-------+
| 17:58:00 | UNKNOWN USER  |     1 |
| 17:59:00 | brman         |    58 |
| 17:59:00 | brman_catalog |    18 |
| 17:59:00 | root          |     5 |
| 18:00:00 | brman         |   296 |
| 18:00:00 | brman_catalog |     7 |
| 18:00:00 | root          |     3 |
| 18:01:00 | brman_catalog |    18 |
| 18:01:00 | root          |     3 |
| 18:06:00 | brman         |   266 |
| 18:06:00 | brman_catalog |     6 |
| 18:07:00 | brman         |    88 |
| 18:07:00 | brman_catalog |     7 |
| 18:10:00 | brman         |   211 |
| 18:10:00 | brman_catalog |    18 |
| 18:10:00 | root          |     4 |
| 18:11:00 | brman         |   141 |
| 18:11:00 | root          |     3 |
| 18:13:00 | brman         |     4 |
| 18:14:00 | brman         |   348 |
| 18:17:00 | brman         |   354 |
| 18:17:00 | brman_catalog |    12 |
| 18:17:00 | root          |     1 |
+----------+---------------+-------+
  • Which account was on the system at the time in question?
  • Which account had how many connections open at the time in question?

SELECT td.time, cd.user, cd.hostname, COUNT(*) AS count
  FROM connection_dim cd
  JOIN query_fact AS qf ON qf.connection_id = cd.connection_id
  JOIN time_dim AS td ON td.time_id = qf.time_id
  JOIN date_dim AS dd ON dd.date_id = qf.date_id
 WHERE td.time BETWEEN '17:00' AND '18:30'
   AND dd.date = '2019-08-02'
 GROUP BY td.time, cd.user, cd.hostname
 ORDER BY td.time ASC, cd.user
;
+----------+---------------+--------------+-------+
| time     | user          | hostname     | count |
+----------+---------------+--------------+-------+
| 17:58:00 | UNKNOWN USER  | UNKNOWN HOST |     1 |
| 17:59:00 | brman         | localhost    |    58 |
| 17:59:00 | brman_catalog | localhost    |    18 |
| 17:59:00 | root          | localhost    |     5 |
| 18:00:00 | brman         | localhost    |   296 |
| 18:00:00 | brman_catalog | localhost    |     7 |
| 18:00:00 | root          | localhost    |     3 |
| 18:01:00 | brman_catalog | localhost    |    18 |
| 18:01:00 | root          | localhost    |     3 |
| 18:06:00 | brman         | localhost    |   266 |
| 18:06:00 | brman_catalog | localhost    |     6 |
| 18:07:00 | brman         | localhost    |    88 |
| 18:07:00 | brman_catalog | localhost    |     7 |
| 18:10:00 | brman         | localhost    |   211 |
| 18:10:00 | brman_catalog | localhost    |    18 |
| 18:10:00 | root          | localhost    |     4 |
| 18:11:00 | brman         | localhost    |   141 |
| 18:11:00 | root          | localhost    |     3 |
| 18:13:00 | brman         | localhost    |     4 |
| 18:14:00 | brman         | localhost    |   348 |
| 18:17:00 | brman         | localhost    |   354 |
| 18:17:00 | brman_catalog | localhost    |    12 |
| 18:17:00 | root          | localhost    |     1 |
+----------+---------------+--------------+-------+
  • What was unusual about it?

SELECT cd.user, td.time, COUNT(*) AS count
  FROM connection_dim cd
  JOIN query_fact AS qf ON qf.connection_id = cd.connection_id
  JOIN time_dim AS td ON td.time_id = qf.time_id
  JOIN date_dim AS dd ON dd.date_id = qf.date_id
 WHERE td.time BETWEEN '17:00' AND '18:30'
   AND dd.date = '2019-08-02'
 GROUP BY td.time, cd.user
 ORDER BY cd.user ASC, td.time ASC
;
+---------------+----------+-------+
| user          | time     | count |
+---------------+----------+-------+
| brman         | 17:59:00 |    58 |
| brman         | 18:00:00 |   296 |
| brman         | 18:06:00 |   266 |
| brman         | 18:07:00 |    88 |
| brman         | 18:10:00 |   211 |
| brman         | 18:11:00 |   141 |
| brman         | 18:13:00 |     4 |
| brman         | 18:14:00 |   348 |
| brman         | 18:17:00 |   354 |
| brman_catalog | 17:59:00 |    18 |
| brman_catalog | 18:00:00 |     7 |
| brman_catalog | 18:01:00 |    18 |
| brman_catalog | 18:06:00 |     6 |
| brman_catalog | 18:07:00 |     7 |
| brman_catalog | 18:10:00 |    18 |
| brman_catalog | 18:17:00 |    12 |
| root          | 17:59:00 |     5 |
| root          | 18:00:00 |     3 |
| root          | 18:01:00 |     3 |
| root          | 18:10:00 |     4 |
| root          | 18:11:00 |     3 |
| root          | 18:17:00 |     1 |
| UNKNOWN USER  | 17:58:00 |     1 |
+---------------+----------+-------+

One could deduce here, for example, that the user brman had a relatively large number of open connections during the period in question. To judge whether this is unusual, we either have too little data or the time period considered is too short.
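One hedged way to make that judgement, assuming several more days of log data were loaded into the model, would be to compare the suspicious hour against a per-user baseline; a sketch on top of the schema above:

-- average queries per active minute and user over all loaded days
SELECT cd.user,
       COUNT(*) / COUNT(DISTINCT dd.date, td.time) AS avg_queries_per_minute
  FROM connection_dim AS cd
  JOIN query_fact AS qf ON qf.connection_id = cd.connection_id
  JOIN time_dim   AS td ON td.time_id = qf.time_id
  JOIN date_dim   AS dd ON dd.date_id = qf.date_id
 GROUP BY cd.user
;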

  • Which queries were running on the system at the time in question and in which schema?
  • Which of these queries were unusual?

SELECT sd.schema_name, td.time, SUBSTR(std.statement_text, 1, 128) AS query
  FROM query_fact AS qf
  JOIN time_dim AS td ON td.time_id = qf.time_id
  JOIN schema_dim AS sd ON sd.schema_id = qf.schema_id
  JOIN statement_dim AS std ON std.statement_id = qf.statement_id
 WHERE td.time BETWEEN '17:00' AND '18:30'
   AND sd.schema_name = 'brman_catalog'
   AND std.command = 'Query'
 ORDER BY td.time, qf.statement_id
 LIMIT 10
;
+---------------+----------+----------------------------------------------------------------------------------------------------------------------------------+
| schema_name   | time     | query                                                                                                                            |
+---------------+----------+----------------------------------------------------------------------------------------------------------------------------------+
| brman_catalog | 17:59:00 | SET NAMES `utf8`                                                                                                                 |
| brman_catalog | 17:59:00 | SELECT COUNT ( * ) AS `cnt` FROM `information_schema` . `tables` WHERE `table_schema` = ? AND TABLE_NAME = ?                     |
| brman_catalog | 17:59:00 | SELECT COUNT ( * ) AS `cnt` FROM `information_schema` . `tables` WHERE `table_schema` = ? AND TABLE_NAME = ?                     |
| brman_catalog | 17:59:00 | CREATE TABLE `metadata` ( `id` TINYINT UNSIGNED NOT NULL AUTO_INCREMENT , `key` VARCHARACTER (?) NOT NULL , `value` VARCHARACTER |
| brman_catalog | 17:59:00 | INSERT INTO `metadata` ( `key` , `value` ) VALUES (...)                                                                          |
| brman_catalog | 17:59:00 | INSERT INTO `metadata` ( `key` , `value` ) VALUES (...)                                                                          |
| brman_catalog | 17:59:00 | CREATE TABLE `backups` ( `id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT , `instance_name` VARCHARACTER (?) NOT NULL , `start_ts`  |
| brman_catalog | 17:59:00 | CREATE TABLE `backup_details` ( `backup_id` INTEGER UNSIGNED NOT NULL , `hostname` VARCHARACTER (?) NULL , `binlog_file` VARCHAR |
| brman_catalog | 17:59:00 | CREATE TABLE `files` ( `id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT , `schema_name` VARCHARACTER (?) NULL , `original_name` VAR |
| brman_catalog | 17:59:00 | CREATE TABLE `binary_logs` ( `id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT , `filename` VARCHARACTER (?) NOT NULL , `begin_ts` I |
+---------------+----------+----------------------------------------------------------------------------------------------------------------------------------+
Suggestions for improvement

Based on this first iteration of the model, you can already see which questions the model cannot yet answer or where the model is too imprecise. This can then be improved in a second round....

Examples of this are:

  • The granularity of the time dimension may be too coarse with an accuracy of minutes. Would it make more sense to use seconds?
  • The question of how long a connection was open is not so easy to answer. Perhaps a further fact table would be appropriate here?

    SELECT cd.connection_number, cd.user, cd.hostname, tdf.time AS time_from, tdt.time AS time_to,
           (UNIX_TIMESTAMP(tdt.time) - UNIX_TIMESTAMP(tdf.time)) AS duration
      FROM connection_dim AS cd
      JOIN query_fact AS qf1 ON cd.connection_id = qf1.connection_id
      JOIN time_dim AS tdf ON tdf.time_id = qf1.time_id
      JOIN statement_dim AS sdf ON sdf.statement_id = qf1.statement_id
      JOIN query_fact AS qf2 ON cd.connection_id = qf2.connection_id
      JOIN time_dim AS tdt ON tdt.time_id = qf2.time_id
      JOIN statement_dim AS sdt ON sdt.statement_id = qf2.statement_id
     WHERE tdf.time BETWEEN '17:00' AND '18:30'
       AND sdf.command = 'Connect'
       AND sdt.command = 'Quit'
       AND (UNIX_TIMESTAMP(tdt.time) - UNIX_TIMESTAMP(tdf.time)) > 0
     ORDER BY tdf.time
    ;
    +-------------------+-------+-----------+-----------+----------+----------+
    | connection_number | user  | hostname  | time_from | time_to  | duration |
    +-------------------+-------+-----------+-----------+----------+----------+
    |               211 | brman | localhost | 17:59:00  | 18:00:00 |       60 |
    |               215 | root  | localhost | 18:00:00  | 18:17:00 |     1020 |
    |               219 | brman | localhost | 18:06:00  | 18:07:00 |       60 |
    |               225 | brman | localhost | 18:10:00  | 18:11:00 |       60 |
    |               226 | brman | localhost | 18:13:00  | 18:14:00 |       60 |
    +-------------------+-------+-----------+-----------+----------+----------+
  • Of course, it would be exciting if an AI were used to solve the problem. How do you train it correctly and does it find the problem once it has been trained?

So much for the little gimmick of building a data warehouse...

Taxonomy upgrade extras: data warehouse, general query log

FromDual Performance Monitor for MariaDB 2.1.0 has been released

Shinguz - Sat, 2023-12-09 18:27

FromDual is pleased to announce the release of version 2.1.0 of its popular Database Performance Monitor for MariaDB and Galera Cluster, fpmmm.

The FromDual Performance Monitor for MariaDB (fpmmm) enables DBAs and System Administrators to monitor and understand what is going on inside their MariaDB database instances and on the machines where the databases reside.

More detailed information can be found in the fpmmm Installation Guide.

Download

The new FromDual Performance Monitor for MariaDB (fpmmm) can be downloaded from here or you can use our FromDual repositories. How to install and use fpmmm is documented in the fpmmm Installation Guide.

In the unlikely case that you find a bug in the FromDual Performance Monitor for MariaDB, please report it to the FromDual Bug-tracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to us.

Monitoring as a Service (MaaS)

You do not want to set up your database monitoring yourself? No problem: choose our MariaDB Monitoring as a Service (MaaS) program to save time and costs!

Installation of Performance Monitor 2.1.0

A complete guide on how to install the FromDual Performance Monitor can be found in the fpmmm Installation Guide.

Upgrade of fpmmm tarball from 1.x to 2.1.0

There are some changes in the configuration file (fpmmm.conf):

  • The access rights should be changed as follows: chmod 600 /etc/fpmmm.conf
  • The key Methode was spelled incorrectly in the configuration file. It was renamed to Method.
  • The key PidFile is ambiguous, which could lead to problems and bugs. Thus it was changed to MyPidFile for fpmmm and DbPidFile for the database.

Upgrades with DEB/RPM packages should happen automatically. For tarballs, follow these steps:

shell> cd /opt
shell> tar xf /download/fpmmm-2.1.0.tar.gz
shell> rm -f fpmmm
shell> ln -s fpmmm-2.1.0 fpmmm
Changes in FromDual Performance Monitor for MariaDB 2.1.0

This release contains new features and various bug fixes.

You can verify your current FromDual Performance Monitor for MariaDB version with the following command:

shell> /opt/fpmmm/bin/fpmmm --version
General
  • Comments and some more debugging information added.
  • All needed logging added for tracking down the problem of bad stat output.
  • New variables fixed.
  • Error messages improved.
  • Fix error output.
  • Version 10.11.1 was not split up correctly, fixed.
  • myEnv updated to fix bad MySQL detection on Ubuntu from repository.
  • Old database version error fixed if connection to API does not work.
  • Disable smart module by default to avoid error messages.
  • Removed /etc/fpmmm folder everywhere.
  • rc made unique, tests fixed.
  • Default path locations brought up to date.
  • DbPidFile location is now under /run.
  • All tests for MariaDB 10.11 passed.
  • CacheFileBase bug fixed.
  • FreeDSx/SNMP upgraded from 0.4 to 0.5.
  • DB connection handling improved and made more OO style.
  • Configuration file parser made more stable for syntax errors.
  • Error log logged to systemd message improved.
  • Option --version is now processed before the option checks.
  • PHP requirement version specified.

Templates
  • Templates improved.
  • Link for triggers fixed.
  • Working period added to all graphs.
  • Zabbix 6 templates added to Makefile.
  • Zabbix 6.0 templates added (6.0.21) and renamed.

Agent
  • All variables from templates removed, test added for fpmmm MRRELEASE.
  • MRRLEASE tag in fpmmm template fixed again.
  • Function writeDataToCache improved for tracking customer problems.
  • CacheFile is now protected with flock, this should resolve issues with lost brman items.
  • fpmmm version problem error message improved.
  • Disabled = True was not recognized correctly and no error was thrown. This is fixed now.
  • Messages were not handled correctly with SNMP output. This is fixed now.
  • mdstat message removed from error log.
  • Make error messages around sending data nicer.
  • apt-get/yum messages replaced by dnf/apt.
  • Error message made more clear if php-cli package is missing.

Server
  • Trigger "too many filesystem locks" set from MULTIPLE to SINGLE, threshold increased from 10000 to 16384 (mariadb MaxNOFiles) and message improved.
  • Working time added to server graphs.
  • iostat items removed from server template.
  • Available disks are now reported with space in between.
  • Disk sda5 removed from template, network interfaces enp4s0f1 and bond0 added to template, disks vdb1 and md1 added to template.
  • CPU usage details removed, guest_nice item added.
  • Swapping items added.
  • Typo in host screen fixed.

Galera
  • Galera group replication latency added.
  • In addition to old wsrep_causal_reads the new wsrep_sync_wait variable was added including the trigger.
  • A non-Galera node was not detected as such and gave an ugly error message; fixed.

InnoDB
  • Item innodb log write requests and innodb log writes fixed in innodb template.
  • Item InnoDB Trx Log bytes written renamed to InnoDB Log bytes written, graph InnoDB Log Activity removed because it is redundant now.
  • Item innodb_redo_log_capacity added for MySQL 8.0.
  • InnoDB buffer pool wait free trigger had a wrong filter for the item; fixed.
  • InnoDB Buffer Pool wait for free pages trigger: link to item value added.
  • innodb_file_format also removed from template.
  • InnoDB buffer pool wait free trigger added.
  • FromDual.MySQL.innodb.Innodb_data_pending_fsyncs changed from absolute values to change_per_second to make graph useful.
  • Links fixed in triggers for innodb module.

MySQL
  • DB is soon out of support message downgraded from average to warning.
  • Item name fixed.
  • Link for table open cache trigger adjusted.
  • Trigger for mysql/mariadb support ends was changed from multiple to single to reduce noise.
  • com_call_procedure status counter fixed in module and template.
  • storage_engine item removed from template, processlist item waiting for table level lock fixed.
  • TOC was updated in template and improved and cleaned-up.
  • Unlock table item is not collected any more and its trigger was removed because it caused useless alerts.
  • Modern TOC handling implemented.

Process
  • Process module refactored, more logging and tests added.
  • Bug in process module fixed: /proc/PID/stat was not parsed correctly.

Security
  • Links fixed in triggers for security module.

Master
  • Binlog event count and binlog avg event size removed from master template because we cannot calculate those values.

Backup
  • Backup template duration URL fixed.

Packaging
  • Packages added for Debian 10, Debian 11, Debian 12, Ubuntu 20.04, Ubuntu 22.04, Redhat 8 and Redhat 9.
  • Package bug with fpmmm.ini fixed.
  • Debian build version increased and package build config error fixed again.
  • Debian package revision introduced.
  • Bug in config copy during postinst fixed.
  • Package installation error overwriting fpmmm.conf fixed.

For subscriptions of commercial use of fpmmm please get in contact with us.

Taxonomy upgrade extras: performance, monitor, monitoring, fpmmm, maas, release, observation

InnoDB Deadlock on SELECT? Not possible! Or Is It?

Shinguz - Sat, 2023-11-25 16:58
Introduction

Two points in advance:

  1. A deadlock is a state in which two different transactions are no longer able to continue working because each transaction holds a lock that the other transaction would need. Because both transactions are now waiting for the other transaction to release its locks, neither transaction will release its own locks. And that would last forever. To avoid this, the MariaDB instance intervenes and kills the transaction that has done less work (a minimal two-session sketch follows after this list). The application then receives a deadlock error message of this type:
    ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
  2. A general mantra in the MariaDB ecosystem is that a SELECT does not cause locks (exception: FOR UPDATE or LOCK IN SHARE MODE) and therefore cannot be part of a deadlock.
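
To make point 1 concrete, here is a minimal, hedged sketch of a classic InnoDB deadlock; the table and the values are made up for illustration:

-- setup (illustrative)
CREATE TABLE t (id INT PRIMARY KEY, v INT) ENGINE=InnoDB;
INSERT INTO t VALUES (1, 0), (2, 0);

-- Session 1:
START TRANSACTION;
UPDATE t SET v = 1 WHERE id = 1;

-- Session 2:
START TRANSACTION;
UPDATE t SET v = 1 WHERE id = 2;

-- Session 1 (blocks, waiting for the row lock held by session 2):
UPDATE t SET v = 1 WHERE id = 2;

-- Session 2 (requests the row lock held by session 1 -> deadlock detected):
UPDATE t SET v = 1 WHERE id = 1;
-- ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction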

The problem

A long-standing customer comes to the FromDual remote DBA team with a request to explain a deadlock situation:

Hello FromDual Team,
I need your expertise on the subject of deadlocks.
When would it suit you?


The situation is as follows: Transaction 1 consists of a simple INSERT. Transaction 2 consists of a SELECT. This should NOT actually cause a deadlock!

We first check the following points

  • Are all tables affected by these queries properly indexed? Yes, they are. The queries are all running perfectly!
  • Is the SELECT query possibly part of a larger transaction (NOT an auto-commit transaction) and therefore not the actual cause of the deadlock? No, it is not. They are auto-commit transactions.

What now? For clarification, it should also be said that the SELECT is sent at a very high cadence, i.e. approx. every 5 ms!

It is clear that the INSERT generates locks; they are also displayed. But why does the SELECT generate locks? These are also displayed!

So we try to break the problem down into individual steps.

The approach

The query looks like this:

SQL> SET @id = (SELECT id FROM test WHERE id = 3);

If we pack this query into an explicit transaction, we can even see the locks:

SQL> START TRANSACTION;
SQL> SET @id = (SELECT id FROM test WHERE id = 3);

and in a second session:

SQL> SELECT * FROM information_schema.INNODB_TRX\G
*************************** 1. row ***************************
                     trx_id: 0
                  trx_state: RUNNING
                trx_started: 2023-11-19 15:27:09
      trx_requested_lock_id: NULL
           trx_wait_started: NULL
                 trx_weight: 2
        trx_mysql_thread_id: 3765
                  trx_query: NULL
        trx_operation_state:
          trx_tables_in_use: 0
          trx_tables_locked: 1
           trx_lock_structs: 2
      trx_lock_memory_bytes: 1128
            trx_rows_locked: 1
          trx_rows_modified: 0
    trx_concurrency_tickets: 0
        trx_isolation_level: REPEATABLE READ
          trx_unique_checks: 1
     trx_foreign_key_checks: 1
 trx_last_foreign_key_error: NULL
           trx_is_read_only: 0
 trx_autocommit_non_locking: 0

Unfortunately, we cannot see what kind of lock (IS) it is, because the view INNODB_LOCKS is empty.
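
On MySQL 8.0, where information_schema.INNODB_LOCKS has been removed, the held locks can be inspected via performance_schema.data_locks, which also shows locks that are merely held and not only those being waited for. A minimal sketch for MySQL 8.0:

SELECT engine_transaction_id, object_schema, object_name,
       lock_type, lock_mode, lock_status
  FROM performance_schema.data_locks;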

The solution

If we do the same experiment with "normal" SELECTs:

SQL> START TRANSACTION; SELECT id FROM test WHERE id = 3;

or

SQL> START TRANSACTION; SELECT id INTO @id FROM test WHERE id = 3;

we do NOT see any locks:

SQL> SELECT * FROM information_schema.INNODB_TRX\G
*************************** 1. row ***************************
                     trx_id: 0
                  trx_state: RUNNING
                trx_started: 2023-11-19 15:31:35
      trx_requested_lock_id: NULL
           trx_wait_started: NULL
                 trx_weight: 0
        trx_mysql_thread_id: 3765
                  trx_query: NULL
        trx_operation_state:
          trx_tables_in_use: 0
          trx_tables_locked: 0
           trx_lock_structs: 0
      trx_lock_memory_bytes: 1128
            trx_rows_locked: 0
          trx_rows_modified: 0
    trx_concurrency_tickets: 0
        trx_isolation_level: REPEATABLE READ
          trx_unique_checks: 1
     trx_foreign_key_checks: 1
 trx_last_foreign_key_error: NULL
           trx_is_read_only: 0
 trx_autocommit_non_locking: 0
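
Comparing the two experiments already hints at the fix; a sketch of the rewrite, using the statements shown above:

-- takes an IS lock on test and can therefore end up in a deadlock:
SQL> SET @id = (SELECT id FROM test WHERE id = 3);

-- lock-free alternative observed above:
SQL> SELECT id INTO @id FROM test WHERE id = 3;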

So it seems that the construct SET @id = (...) causes this IS lock. The customer rewrites his application and shortly afterwards we receive the following message:

Hello FromDual team,
Your tip was spot on.
No more deadlocks since Friday lunchtime.
Thank you and have a nice weekend.


Further clarified questions

Does MySQL 8.0 behave the same? Yes, exactly the same.

Addendum

My dear colleague Matthias gave me a follow-up idea: What about MariaDB Stored Procedures and Stored Functions?

The two tests here:

DELIMITER //

CREATE OR REPLACE PROCEDURE locktestsp (INOUT id INT)
BEGIN
  SELECT id INTO id FROM test WHERE id = id LIMIT 1;
END;
//

DELIMITER ;

SET @id = 3;

START TRANSACTION;
CALL locktestsp(@id);
SELECT @id;

SELECT trx_tables_locked, trx_lock_structs, trx_rows_locked
  FROM information_schema.INNODB_TRX;
+-------------------+------------------+-----------------+
| trx_tables_locked | trx_lock_structs | trx_rows_locked |
+-------------------+------------------+-----------------+
|                 0 |                0 |               0 |
+-------------------+------------------+-----------------+

and here:

DELIMITER //

CREATE OR REPLACE FUNCTION locktestsf (IN id INT) RETURNS CHAR(50) DETERMINISTIC
BEGIN
  SELECT id INTO id FROM test WHERE id = id LIMIT 1;
  RETURN id;
END;
//

DELIMITER ;

START TRANSACTION;
SELECT locktestsf(3);

SELECT trx_tables_locked, trx_lock_structs, trx_rows_locked
  FROM information_schema.INNODB_TRX;
+-------------------+------------------+-----------------+
| trx_tables_locked | trx_lock_structs | trx_rows_locked |
+-------------------+------------------+-----------------+
|                 0 |                0 |               0 |
+-------------------+------------------+-----------------+
Translated by deepl.com

Taxonomy upgrade extras: select, deadlock

Upgrading MariaDB to the pre-latest minor release

Shinguz - Wed, 2023-02-08 13:45

Scenario: MariaDB Community Server is released every 3 months. The exact date is not known, but it is typically about 2 to 3 weeks after the MySQL/Oracle CPU.

We regularly upgrade customer systems, including their MariaDB Galera Cluster systems. Because we had some very bad experiences in the past upgrading production MariaDB Galera Clusters shortly after the release date, we have become a bit more careful. Our new policy is: we wait a few weeks before installing a new release.

This week we had the situation that a new MariaDB release came out on Monday and we had to do the cluster upgrade on Wednesday. So only 2 days in between. Too short for my taste. But we did not want to skip the upgrade because it had already been more than 6 months since the last one...

So we decided to install the pre-latest MariaDB release. Technically this means: the customer was on 10.6.9, 10.6.12 is out (and already available in the repositories), but we want to upgrade to 10.6.11. Additionally, we also wanted to upgrade the O/S (Debian 11). So how do we approach this quite complex task? In short:

  • Set all MariaDB related packages to hold.
  • Upgrade all other packages including a reboot of the machine.
  • Unhold all MariaDB related packages.
  • Install the specific MariaDB packages which are not the newest ones.

Set MariaDB related packages to hold

dpkg -l | grep mariadb
ii  libdbd-mariadb-perl       1.21-3                amd64  Perl5 database interface to the MariaDB/MySQL databases
ii  libmariadb3:amd64         1:10.6.9+maria~deb11  amd64  MariaDB database client library
ii  libmariadb3-compat        1:10.6.9+maria~deb11  amd64  MariaDB database client library MySQL compat package
ii  libmariadbclient18        1:10.6.9+maria~deb11  amd64  Virtual package to satisfy external libmariadbclient18 depends
ii  mariadb-client-10.6       1:10.6.9+maria~deb11  amd64  MariaDB database client binaries
ii  mariadb-client-core-10.6  1:10.6.9+maria~deb11  amd64  MariaDB database core client binaries
ii  mariadb-common            1:10.6.9+maria~deb11  all    MariaDB common configuration files
ii  mariadb-server-10.6       1:10.6.9+maria~deb11  amd64  MariaDB database server binaries
ii  mariadb-server-core-10.6  1:10.6.9+maria~deb11  amd64  MariaDB database core server files

apt-mark hold galera-4 libmariadb3-compat libmariadb3 libmariadbclient18 mariadb-client-10.6 mariadb-client-core-10.6 mariadb-common mariadb-server-10.6 mariadb-server-core-10.6 mysql-common
apt-mark showhold
Upgrade all other O/S dependent packages

apt update

apt list --upgradable
Listing... Done
galera-4/unknown 26.4.14-deb11 amd64 [upgradable from: 26.4.11-0+deb11u1]
libmariadb3-compat/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.9+maria~deb11]
libmariadb3/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.9+maria~deb11]
libmariadbclient18/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.9+maria~deb11]
libssl1.1/stable-security 1.1.1n-0+deb11u4 amd64 [upgradable from: 1.1.1n-0+deb11u3]
mariadb-client-10.6/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.9+maria~deb11]
mariadb-client-core-10.6/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.9+maria~deb11]
mariadb-common/unknown 1:10.6.12+maria~deb11 all [upgradable from: 1:10.6.9+maria~deb11]
mariadb-server-10.6/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.9+maria~deb11]
mariadb-server-core-10.6/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.9+maria~deb11]
mysql-common/unknown 1:10.6.12+maria~deb11 all [upgradable from: 5.8+1.0.7]
openssl/stable-security 1.1.1n-0+deb11u4 amd64 [upgradable from: 1.1.1n-0+deb11u3]

apt upgrade
The following packages have been kept back:
  galera-4 libmariadb3 libmariadb3-compat libmariadbclient18 mariadb-client-10.6 mariadb-client-core-10.6 mariadb-common mariadb-server-10.6 mariadb-server-core-10.6 mysql-common
The following packages will be upgraded:
  libssl1.1 openssl
2 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.

cat /var/run/reboot-required

# reboot [-f]
Unhold MariaDB related packages

apt-mark unhold galera-4 libmariadb3-compat libmariadb3 libmariadbclient18 mariadb-client-10.6 mariadb-client-core-10.6 mariadb-common mariadb-server-10.6 mariadb-server-core-10.6 mysql-common
apt-mark showhold
Install specific MariaDB version packages

apt list -a galera-4

apt install galera-4=26.4.13-bullseye libmariadb3-compat=1:10.6.11+maria~deb11 libmariadb3=1:10.6.11+maria~deb11 libmariadbclient18=1:10.6.11+maria~deb11 mariadb-client-10.6=1:10.6.11+maria~deb11 mariadb-client-core-10.6=1:10.6.11+maria~deb11 mariadb-common=1:10.6.11+maria~deb11 mariadb-server-10.6=1:10.6.11+maria~deb11 mariadb-server-core-10.6=1:10.6.11+maria~deb11 mysql-common=1:10.6.11+maria~deb11

# To avoid 2 database restarts we can also do the reboot here
systemctl restart mariadb

mariadb-upgrade --user=root

apt list --upgradable
Listing... Done
galera-4/unknown 26.4.14-deb11 amd64 [upgradable from: 26.4.13-bullseye]
libmariadb3-compat/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.11+maria~deb11]
libmariadb3/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.11+maria~deb11]
libmariadbclient18/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.11+maria~deb11]
mariadb-client-10.6/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.11+maria~deb11]
mariadb-client-core-10.6/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.11+maria~deb11]
mariadb-common/unknown 1:10.6.12+maria~deb11 all [upgradable from: 1:10.6.11+maria~deb11]
mariadb-server-10.6/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.11+maria~deb11]
mariadb-server-core-10.6/unknown 1:10.6.12+maria~deb11 amd64 [upgradable from: 1:10.6.11+maria~deb11]
mysql-common/unknown 1:10.6.12+maria~deb11 all [upgradable from: 1:10.6.11+maria~deb11]
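If the next regular apt upgrade should not pull in 10.6.12 after all, the MariaDB packages can be put back on hold; a sketch following the same pattern as above:

apt-mark hold galera-4 libmariadb3-compat libmariadb3 libmariadbclient18 mariadb-client-10.6 mariadb-client-core-10.6 mariadb-common mariadb-server-10.6 mariadb-server-core-10.6 mysql-common
apt-mark showhold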
Taxonomy upgrade extras: debian, package, upgrade, lock, locking, hold, pin

FromDual Ops Center 1.2.1 for MariaDB, MySQL and compatible databases has been released

Shinguz - Tue, 2023-02-07 17:08

FromDual has the pleasure to announce the release of the new version 1.2.1 of its popular FromDual Ops Center focmm, a Graphical User Interface (GUI) for MariaDB, MySQL and compatible databases.

The FromDual Ops Center for MariaDB and MySQL (focmm) helps DBAs and System Administrators to better manage their MariaDB and MySQL databases and Galera Cluster farms. Ops Center makes DBAs' and Admins' lives easier!

The main task of Ops Center is to support you in your daily MariaDB and MySQL operation tasks. More information about FromDual Ops Center can be found under General Information.

Download

The new FromDual Ops Center for MariaDB and MySQL (focmm) can be downloaded from our download site. How to install and use focmm is documented in the Ops Center User Guide.

In the inconceivable case that you find a bug in the FromDual Ops Center for MariaDB and MySQL please report it to the FromDual bug tracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Installation of Ops Center 1.2.1

Information on how to install FromDual Ops Center can be found in the Ops Center User Guide.

Upgrade from 0.9.x to 1.2.1

Upgrade from 0.9.x to 1.2.1 should happen automatically. Please do a backup of your Ops Center database instance before you upgrade! Please also check Upgrading.

Important: In some cases the folder focmm/tmp/start_jobs.lock is missing. In this case jobs are not started. Please check the log file under focmm/log/start_jobs.log and create the folder accordingly. Furthermore, a file named pid should be located in this folder. Create this file as well if it is not there.
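A minimal sketch of this fix, run from the parent directory of the focmm installation (whether additional ownership or permissions are needed for your web server user is an assumption to verify):

shell> mkdir -p focmm/tmp/start_jobs.lock
shell> touch focmm/tmp/start_jobs.lock/pid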

Changes in Ops Center 1.2.1

Machine
  • ssh "Suspect machine" message fix advice command is now done with the right user and file.
  • Check error fixed and output made nicer.
  • Instance link added in machine overview and code clean-up.
  • Gather machine information added after add instance and before create instance to make sure O/S and distribution information is always there when instance is added or created.
  • Code clean-up in machine refresh.
  • Resource Group tag fixed, and code clean-up in machine show settings.

Instance
  • Repository connection was closed too early in starting and stopping instance. So start/stop failed. Bug fixed.
  • Case is caught properly now where node is started but galera plugin was not activated.
  • Bug in deploy configuration fixed. When " was added the configuration file was cut.
  • Bug in stopping instance fixed. Instance could not be stopped any more.

Cluster
  • Configuration wsrep_on = on is for MariaDB Galera Cluster and is now the default. It was missing in some cases.

Load Balancer
  • GLB error handling and error messages improved.
  • Load balancer naming is now enforced to be unique, and a load balancer configuration deployment message was added.

Virtual IP (VIP)/Floating IP
  • No changes.

Tools
  • Job: Variable was not initialized correctly (bin/*), fixed.

Configuration
  • No changes.

Database-as-a-Service (DBaaS)
  • More debug information added to catch error in pricing calculations.

Building and Packaging
  • Debian compat level increased from 9 to 13.

Themes / UI
  • Library jquery updated from 3.6.1 to 3.6.3.
  • Field titles made unique in Resource Group.

General
  • Some tests improved and frags made more robust.
  • Bugs in catch section fixed.
  • _SERVER[PHP_SELF] in some cases led to a wrong destination, fixed.

Repository
  • No changes.

Taxonomy upgrade extras: Operations, release, FromDual Ops Center, ops center, dbaas, focmm

FromDual Ops Center 1.2.0 for MariaDB, MySQL and compatible databases has been released

Shinguz - Thu, 2023-01-19 12:01

FromDual has the pleasure to announce the release of the new version 1.2.0 of its popular FromDual Ops Center focmm, a Graphical User Interface (GUI) for MariaDB, MySQL and compatible databases.

The FromDual Ops Center for MariaDB and MySQL (focmm) helps DBAs and System Administrators to better manage their MariaDB and MySQL databases and Galera Cluster farms. Ops Center makes DBAs' and Admins' lives easier!

The main task of Ops Center is to support you in your daily MariaDB and MySQL operation tasks. More information about FromDual Ops Center can be found under General Information.

Download

The new FromDual Ops Center for MariaDB and MySQL (focmm) can be downloaded from our download site. How to install and use focmm is documented in the Ops Center User Guide.

In the inconceivable case that you find a bug in the FromDual Ops Center for MariaDB and MySQL please report it to the FromDual bug tracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Installation of Ops Center 1.2.0

Information on how to install FromDual Ops Center can be found in the Ops Center User Guide.

Upgrade from 0.9.x to 1.2.0

Upgrade from 0.9.x to 1.2.0 should happen automatically. Please do a backup of your Ops Center database instance before you upgrade! Please also check Upgrading.

Changes in Ops Center 1.2.0

Machine
  • Delete machine error message improved.
  • Add machine made more user friendly.
  • Delete machine warning message made more verbose.
  • Error handling improved and error messages made more verbose in gathering machine information.

Instance
  • Create instance: InnoDB buffer pool size estimation fixed.
  • Remove -core- packages from installation selection.
  • Access keys added to show instance.
  • Instance and machine names are sorted in selection now.
  • Error handling improved in performance views.
  • Restart instance bug fixed.
  • Status cluster_conf_id added.
  • Galera node bootstrap is shown correctly now.
  • Create schema can do character set now.
  • Start instance refactored.
  • Create instance is also working for Debian now.
  • Restart instance implemented.
  • Repository instance cannot be stopped any more to avoid system outages.
  • Processlist rewritten to P_S.threads.
  • Some more performance views added.
  • Delete instance warning message made more verbose.
  • Create instance: Machine names are sorted now alphabetically.
  • Query Cache removed in create instance because MySQL 8.0 does not support it any more and it is off by default in MariaDB now.
  • Instance backup remembers brman path now.

Cluster
  • Galera safe_to_bootstrap is working now and can be forced.
  • Cluster weight, node weight and segment added for Galera.
  • More information in error message when saving cluster.
  • Master/Slave features are not shown any more if Galera Cluster is chosen.
  • Cluster type cannot be changed any more.
  • Galera disable buttons if instance is stopped.
  • Galera node bootstrap functionality implemented.
  • Galera Cluster configuration deployment implemented.
  • Galera Cluster does not show M/S features any more.
  • Replication operations: SQL thread state was not displayed correctly.
  • GTID related information added to replication operations.
  • Delete instance from cluster made a bit more user friendly.

Load Balancer
  • Some minor bugs in load balancer and VIP failover fixed.
  • Galera Load Balancer balancing policy fixed.
  • Page made more user friendly.
  • Galera Load Balancer restart implemented.
  • Operations for MariaDB MaxScale implemented.
  • Load Balancer socket variable normalized.
  • Delete Load Balancer implemented.
  • Galera Load Balancer configuration is backed up before saving.
  • Galera Load Balancer Load Balancer policy change implemented.
  • Load Balancer policy added.
  • Change weight of load balancer back-end added.
  • Galera Load Balancer configuration persist added.
  • Galera Load Balancer load balancer configuration file parsed and displayed under settings.
  • Galera Load Balancer drain and undrain back-end added.
  • Galera Load Balancer version added to operations.
  • Galera Load Balancer checks refresh every 10 seconds.
  • Galera Load Balancer start and stop implemented.
  • Statistics for Galera Load Balancer added.
  • Operations overview for Galera Load Balancer added.
  • Operations menu enabled.

Virtual IP (VIP)/Floating IP
  • VIP failover problem fixed.
  • Bitmask /32 is wrong, was changed to /24.
  • VIP failover made more robust.
  • Deploy standard keepalived configuration implemented.
  • keepalived version added.
  • keepalived failover works fine.
  • keepalived failover is working now.
  • vip keepalived failover started.
  • keepalived stop and start added.
  • Checks for keepalived added.
  • keepalived flag for vip added.
  • VIP can now be kept under keepalived.

Tools
  • Back button removed from crontab; it makes no sense here.
  • Read in readJobs fixed and some error messages improved.
  • Display limited to 100 jobs because of memory issues.
  • Bug with remaining my_exec.stderr.* files found and fixed. PID 0 kill was caught, mkdir locking replaced by flock locking.

Configuration
  • No changes.

Database-as-a-Service (DBaaS)
  • No changes.

Building and Packaging
  • .ssh directory is created also on RPM.
  • MariaDB added as default for Rocky8.
  • Syntax error in installation script fixed.
  • On Rocky9 httpd and mysqld are now started during package installation.
  • php-mysql replaced by php-mysqlnd for RHEL package.
  • RPM package building added.
  • Directory .ssh for repository user is created during installation.
  • Package lsb-release added to Debian.
  • Install error message made better for Debian.

Themes / UI
  • jquery updated from 3.6.0 to 3.6.1 and jquery ui themes from 1.12.1 to 1.13.2.

General
  • Changed copyright year from 2022 to 2023.
  • Some PHP 8.1 deprecations fixed.
  • Repository DB handle added to all writecheck functions.
  • PHP function exec replaced by my_exec where possible (ssh, scp).
  • whoami fixed, PATH added to my_exec because of Rocky8.
  • Function checkBinary also made working locally on Rocky8.
  • Bug in testEmail fixed.
  • Distribution Debian GNU/Linux is now supported everywhere.
  • Apache mod_rewrite enabled by default.
  • Add Apache SSL to installation.
  • Menu is now controllable via keys.
  • FromDual software versions are added.
  • Error message more verbose in case sendmail is not installed (Debian).
  • myEnv library updated including distro clean-up.
  • Better IP guessing during installation.

Repository
  • Create repository fixed for mariadb 5.5.
  • Code clean-up for focmm configuration file operations.

Taxonomy upgrade extras: Operations, release, FromDual Ops Center, ops center, dbaas, focmm

Comparing Optimizer Results

Shinguz - Fri, 2022-11-18 10:59

A while ago I read a paper about Optimizer Benchmarks: How Good Are Query Optimizers, Really? by Viktor Leis from Fakultät für Informatik of the Technische Universität München.

Because we have problems with slow customer queries from time to time, especially after upgrading to new database releases, it is interesting for me how different optimizers cope with a query. Sometimes it is not clear to me why the query is slow or how I can make it faster. So the mentioned paper inspired me to compare the same query with the same dataset among different optimizers and optimizer versions. The most recent query from our customer we tested against MariaDB 10.6, 10.9, 10.10, MySQL 8.0 and PostgreSQL 15.1.

The test data

The table we were using is our generic test table which many people already know from our MariaDB and MySQL trainings:

CREATE TABLE `test` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(128) DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
  PRIMARY KEY (`id`)
);
Loading the data from MySQL

How to dump the data from MySQL in order to load it into PostgreSQL I have already described earlier. This is how to create the table and load the data into PostgreSQL:

# sudo su - postgres
# psql

postgres=# SELECT VERSION();
postgres=# \l
postgres=# CREATE DATABASE test;
CREATE DATABASE
postgres=# \c test
postgres=# CREATE TABLE test (
             id SERIAL NOT NULL PRIMARY KEY,
             data VARCHAR(128) DEFAULT NULL,
             ts TIMESTAMP NOT NULL
           );
test=# \dt+
                               List of relations
 Schema | Name | Type  |  Owner   | Persistence | Access method |  Size   | Description
--------+------+-------+----------+-------------+---------------+---------+-------------
 public | test | table | postgres | permanent   | heap          | 0 bytes |

# psql test < /tmp/test_dump.sql
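For reference, the dump file used above can be produced on the MariaDB/MySQL side roughly like this (a sketch based on the command from the migration article further down; paths are examples):

shell> mysqldump --user=root --skip-extended-insert --skip-lock-tables --no-create-info \
         --where='id = id ' --skip-add-locks --skip-comments --skip-quote-names test test \
         | grep -v '^/\*' | grep -v ^$ > /tmp/test_dump.sql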
The Query

The query we had problems with looked something like this:

WITH t AS (
  SELECT SUM(id)+0.0 AS c, data AS pn FROM test
   WHERE ts BETWEEN '2022-11-22 00:00:00' AND '2022-11-22 23:59:59' AND data LIKE 'ab%'
   GROUP BY data ORDER BY data
  -- MariaDB 10.6: 0.66s  -- MariaDB 10.9: 0.37s  -- Competing product: 0.70s
)
SELECT 'AAA' AS label, ((SELECT c FROM t WHERE pn = 'ab06d9ca0e5eb27e5bb2e5c0f345b782')+(SELECT c FROM t WHERE pn = 'ab0f8e861348995d1cbf9ec21d4bf52d'))/((SELECT c FROM t WHERE pn = 'ab134910f6808802cf44b587fb825ab5')+(SELECT c FROM t WHERE pn = 'ab1d6682feb41a97054e8c634ecd5552'))*100 AS value
-- MariaDB 10.6: 2.04s  -- MariaDB 10.9: 0.79s  -- Competing product: 0.70s
UNION
SELECT 'BBB' AS label, (SELECT c FROM t WHERE pn = 'ab2516b576d7463f106b5671bdac6d32')/((SELECT c FROM t WHERE pn = 'ab28e05f4fa736f4738a2ff6cf494f04')+(SELECT c FROM t WHERE pn = 'ab31ab6a7d52fee49990d63dee58371b'))*100 AS value
-- MariaDB 10.6: 3.54s  -- MariaDB 10.9: 1.38s  -- Competing product: 0.70s
UNION
SELECT 'CCC' AS label, (SELECT c FROM t WHERE pn = 'ab390b211d7bb3cc19fc1f808fb1bc3a')/(SELECT c FROM t WHERE pn = 'ab45af9e4ca10d3a618d40ec3f230349')*100 AS value
-- MariaDB 10.6: 4.58s  -- MariaDB 10.9: 1.76s  -- Competing product: 0.70s
UNION
SELECT 'DDD' AS label, ((SELECT c FROM t WHERE pn = 'ab531756624d8287d38eb5fe5809bdb4')+(SELECT c FROM t WHERE pn = 'ab5aaa230cc64c15a61f908aea1c3f9c'))/((SELECT c FROM t WHERE pn = 'ab60f7712cd176b6ac26040d69be0e11')+(SELECT c FROM t WHERE pn = 'ab674e464997f4916e7ed23942c53d23'))*100 AS value
UNION
SELECT 'EEE' AS label, (SELECT c FROM t WHERE pn = 'ab6cc3781441a08d67eb8039521c63bb')/((SELECT c FROM t WHERE pn = 'ab7612bca85db4f6c4638a658fbee646')+(SELECT c FROM t WHERE pn = 'ab7811882f520c2051c6761edf4c306f'))*100 AS value
UNION
SELECT 'FFF' AS label, (SELECT c FROM t WHERE pn = 'ab7f0bd3758fbdac0ff6abb24db166ad')/(SELECT c FROM t WHERE pn = 'ab87d7d598ad47ca5db3bf9e6cbdee48')*100 AS value
-- MariaDB 10.6: 9.15s  -- MariaDB 10.9: 3.51s  -- Competing product: 0.71s
;
The results

Timing is enabled in PostgreSQL as follows:

postgres=# \timing

Then we ran the different queries against the different databases and versions:

                  Q1      Q2      Q3      Q4      Q5      Q6      Q7      Q8
MariaDB 10.6.11   0.14 s  0.32 s  0.79 s  1.39 s  1.78 s  3.56 s  3.32 s  0.46 s
MariaDB 10.9.4    0.14 s  0.33 s  0.80 s  1.40 s  1.80 s  3.60 s  3.40 s  0.46 s
MariaDB 10.10.2   0.14 s  0.33 s  0.80 s  1.39 s  1.78 s  3.55 s  3.40 s  0.45 s
MySQL 8.0.31      0.03 s  0.77 s  0.77 s  0.77 s  0.77 s  0.77 s  1.31 s  1.39 s/0.44 s
PostgreSQL 15.1   0.03 s  0.04 s  0.04 s  0.04 s  0.04 s  0.04 s  0.65 s  0.35 s
Other sources:
The Queries

Query 1:

SELECT COUNT(*) FROM test;

Query 2:

SELECT SUM(id)+0.0 AS c, data AS pn FROM test
 WHERE ts BETWEEN '2022-11-22 00:00:00' AND '2022-11-22 23:59:59' AND data LIKE 'ab%'
 GROUP BY data ORDER BY data;

Query 3:

WITH t AS (
  SELECT SUM(id)+0.0 AS c, data AS pn FROM test
   WHERE ts BETWEEN '2022-11-22 00:00:00' AND '2022-11-22 23:59:59' AND data LIKE 'ab%'
   GROUP BY data ORDER BY data
)
SELECT 'AAA' AS label, ((SELECT c FROM t WHERE pn = 'ab06d9ca0e5eb27e5bb2e5c0f345b782')+(SELECT c FROM t WHERE pn = 'ab0f8e861348995d1cbf9ec21d4bf52d'))/((SELECT c FROM t WHERE pn = 'ab134910f6808802cf44b587fb825ab5')+(SELECT c FROM t WHERE pn = 'ab1d6682feb41a97054e8c634ecd5552'))*100 AS value;

Query 4:

WITH t AS (
  SELECT SUM(id)+0.0 AS c, data AS pn FROM test
   WHERE ts BETWEEN '2022-11-22 00:00:00' AND '2022-11-22 23:59:59' AND data LIKE 'ab%'
   GROUP BY data ORDER BY data
)
SELECT 'AAA' AS label, ((SELECT c FROM t WHERE pn = 'ab06d9ca0e5eb27e5bb2e5c0f345b782')+(SELECT c FROM t WHERE pn = 'ab0f8e861348995d1cbf9ec21d4bf52d'))/((SELECT c FROM t WHERE pn = 'ab134910f6808802cf44b587fb825ab5')+(SELECT c FROM t WHERE pn = 'ab1d6682feb41a97054e8c634ecd5552'))*100 AS value
UNION
SELECT 'BBB' AS label, (SELECT c FROM t WHERE pn = 'ab2516b576d7463f106b5671bdac6d32')/((SELECT c FROM t WHERE pn = 'ab28e05f4fa736f4738a2ff6cf494f04')+(SELECT c FROM t WHERE pn = 'ab31ab6a7d52fee49990d63dee58371b'))*100 AS value;

Query 5:

WITH t AS (
  SELECT SUM(id)+0.0 AS c, data AS pn FROM test
   WHERE ts BETWEEN '2022-11-22 00:00:00' AND '2022-11-22 23:59:59' AND data LIKE 'ab%'
   GROUP BY data ORDER BY data
)
SELECT 'AAA' AS label, ((SELECT c FROM t WHERE pn = 'ab06d9ca0e5eb27e5bb2e5c0f345b782')+(SELECT c FROM t WHERE pn = 'ab0f8e861348995d1cbf9ec21d4bf52d'))/((SELECT c FROM t WHERE pn = 'ab134910f6808802cf44b587fb825ab5')+(SELECT c FROM t WHERE pn = 'ab1d6682feb41a97054e8c634ecd5552'))*100 AS value
UNION
SELECT 'BBB' AS label, (SELECT c FROM t WHERE pn = 'ab2516b576d7463f106b5671bdac6d32')/((SELECT c FROM t WHERE pn = 'ab28e05f4fa736f4738a2ff6cf494f04')+(SELECT c FROM t WHERE pn = 'ab31ab6a7d52fee49990d63dee58371b'))*100 AS value
UNION
SELECT 'CCC' AS label, (SELECT c FROM t WHERE pn = 'ab390b211d7bb3cc19fc1f808fb1bc3a')/(SELECT c FROM t WHERE pn = 'ab45af9e4ca10d3a618d40ec3f230349')*100 AS value;

Query 6:

WITH t AS (
  SELECT SUM(id)+0.0 AS c, data AS pn FROM test
   WHERE ts BETWEEN '2022-11-22 00:00:00' AND '2022-11-22 23:59:59' AND data LIKE 'ab%'
   GROUP BY data ORDER BY data
)
SELECT 'AAA' AS label, ((SELECT c FROM t WHERE pn = 'ab06d9ca0e5eb27e5bb2e5c0f345b782')+(SELECT c FROM t WHERE pn = 'ab0f8e861348995d1cbf9ec21d4bf52d'))/((SELECT c FROM t WHERE pn = 'ab134910f6808802cf44b587fb825ab5')+(SELECT c FROM t WHERE pn = 'ab1d6682feb41a97054e8c634ecd5552'))*100 AS value
UNION
SELECT 'BBB' AS label, (SELECT c FROM t WHERE pn = 'ab2516b576d7463f106b5671bdac6d32')/((SELECT c FROM t WHERE pn = 'ab28e05f4fa736f4738a2ff6cf494f04')+(SELECT c FROM t WHERE pn = 'ab31ab6a7d52fee49990d63dee58371b'))*100 AS value
UNION
SELECT 'CCC' AS label, (SELECT c FROM t WHERE pn = 'ab390b211d7bb3cc19fc1f808fb1bc3a')/(SELECT c FROM t WHERE pn = 'ab45af9e4ca10d3a618d40ec3f230349')*100 AS value
UNION
SELECT 'DDD' AS label, ((SELECT c FROM t WHERE pn = 'ab531756624d8287d38eb5fe5809bdb4')+(SELECT c FROM t WHERE pn = 'ab5aaa230cc64c15a61f908aea1c3f9c'))/((SELECT c FROM t WHERE pn = 'ab60f7712cd176b6ac26040d69be0e11')+(SELECT c FROM t WHERE pn = 'ab674e464997f4916e7ed23942c53d23'))*100 AS value
UNION
SELECT 'EEE' AS label, (SELECT c FROM t WHERE pn = 'ab6cc3781441a08d67eb8039521c63bb')/((SELECT c FROM t WHERE pn = 'ab7612bca85db4f6c4638a658fbee646')+(SELECT c FROM t WHERE pn = 'ab7811882f520c2051c6761edf4c306f'))*100 AS value
UNION
SELECT 'FFF' AS label, (SELECT c FROM t WHERE pn = 'ab7f0bd3758fbdac0ff6abb24db166ad')/(SELECT c FROM t WHERE pn = 'ab87d7d598ad47ca5db3bf9e6cbdee48')*100 AS value;
Query 7

The table:

MariaDB/MySQL:

CREATE TABLE `queue_destinations` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `queue` int(11) NOT NULL,
  `dtype` varchar(100) NOT NULL,
  `dnumber` varchar(255) NOT NULL,
  `available` smallint(6) NOT NULL DEFAULT 1,
  `priority` smallint(6) NOT NULL DEFAULT 1,
  `lasttime` bigint(20) NOT NULL DEFAULT 0,
  `nexttime` bigint(20) NOT NULL DEFAULT 0,
  `active_call` varchar(100) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `queue` (`queue`,`dtype`,`dnumber`),
  KEY `dtype` (`dtype`,`dnumber`),
  KEY `available` (`available`),
  KEY `priority` (`priority`),
  KEY `lasttime` (`lasttime`),
  KEY `nexttime` (`nexttime`),
  KEY `active_call` (`active_call`)
);

PostgreSQL:

CREATE TABLE queue_destinations (
  id SERIAL NOT NULL PRIMARY KEY,
  queue int NOT NULL,
  dtype varchar(100) NOT NULL,
  dnumber varchar(255) NOT NULL,
  available smallint NOT NULL DEFAULT 1,
  priority smallint NOT NULL DEFAULT 1,
  lasttime bigint NOT NULL DEFAULT 0,
  nexttime bigint NOT NULL DEFAULT 0,
  active_call varchar(100) NOT NULL,
  CONSTRAINT queue UNIQUE (queue, dtype, dnumber)
);
CREATE INDEX dtype_idx ON queue_destinations (dtype,dnumber);
CREATE INDEX available ON queue_destinations (available);
CREATE INDEX priority ON queue_destinations (priority);
CREATE INDEX lasttime ON queue_destinations (lasttime);
CREATE INDEX nexttime ON queue_destinations (nexttime);
CREATE INDEX active_call ON queue_destinations (active_call);

The Query:

SELECT *
  FROM queue_destinations a
 WHERE queue = 45393
   AND available = 1
   AND nexttime <= 1669284432
   AND active_call = ''
   AND ( SELECT COUNT(*)
           FROM queue_destinations b
          WHERE b.dnumber = a.dnumber
            AND active_call != '' ) = 0
 ORDER BY priority DESC, lasttime
   FOR UPDATE
;

The Query Execution Plan:

MariaDB (tabular EXPLAIN):

+------+--------------------+-------+------+--------------------------+-------+---------+-------+--------+-----------------------------+
| id   | select_type        | table | type | possible_keys            | key   | key_len | ref   | rows   | Extra                       |
+------+--------------------+-------+------+--------------------------+-------+---------+-------+--------+-----------------------------+
|    1 | PRIMARY            | a     | ref  | queue,available,nexttime | queue | 4       | const | 5      | Using where; Using filesort |
|    2 | DEPENDENT SUBQUERY | b     | ALL  | NULL                     | NULL  | NULL    | NULL  | 955596 | Using where                 |
+------+--------------------+-------+------+--------------------------+-------+---------+-------+--------+-----------------------------+

MySQL 8.0 (tree EXPLAIN):

-> Sort: a.priority DESC, a.lasttime  (cost=1.26 rows=5)
    -> Filter: ((a.active_call = '') and (a.available = 1) and (a.nexttime <= 1669284432) and ((select #2) = 0))
        -> Index lookup on a using queue (queue=45393)
-> Select #2 (subquery in condition; dependent)
    -> Aggregate: count(0)  (cost=30637.14 rows=1)
        -> Filter: ((b.dnumber = a.dnumber) and (b.active_call <> ''))  (cost=21409.85 rows=92273)
            -> Table scan on b  (cost=21409.85 rows=1025255)

PostgreSQL 15:

LockRows  (cost=395076.59..395076.61 rows=1 width=101)
  ->  Sort  (cost=395076.59..395076.60 rows=1 width=101)
        Sort Key: a.priority DESC, a.lasttime
        ->  Index Scan using queue on queue_destinations a  (cost=0.42..395076.58 rows=1 width=101)
              Index Cond: (queue = 45393)
              Filter: ((nexttime <= 1669284432) AND (available = 1) AND ((active_call)::text = ''::text) AND ((SubPlan 1) = 0))
              SubPlan 1
                ->  Aggregate  (cost=32918.64..32918.65 rows=1 width=8)
                      ->  Seq Scan on queue_destinations b  (cost=0.00..32918.64 rows=1 width=0)
                            Filter: (((active_call)::text <> ''::text) AND ((dnumber)::text = (a.dnumber)::text))
JIT:
  Functions: 12
  Options: Inlining false, Optimization false, Expressions true, Deforming true
Query 8:

SELECT * FROM test WHERE data IS NULL;

MariaDB 10.x:

+------+-------------+-------+------+---------------+------+---------+------+---------+-------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+------+-------------+-------+------+---------------+------+---------+------+---------+-------------+
|    1 | SIMPLE      | test  | ALL  | data          | NULL | NULL    | NULL | 1047013 | Using where |
+------+-------------+-------+------+---------------+------+---------+------+---------+-------------+

MySQL 8 (QEP is wrong!):

+----+-------------+-------+------------+------+---------------+------+---------+-------+--------+----------+-----------------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref   | rows   | filtered | Extra                 |
+----+-------------+-------+------------+------+---------------+------+---------+-------+--------+----------+-----------------------+
|  1 | SIMPLE      | test  | NULL       | ref  | data          | data | 515     | const | 523506 |   100.00 | Using index condition |
+----+-------------+-------+------------+------+---------------+------+---------+-------+--------+----------+-----------------------+

PostgreSQL 15:

Seq Scan on test  (cost=0.00..25952.76 rows=1047632 width=44)
  Filter: (data IS NULL)

99.9% of the rows are NULL in column data. This test comes from: MariaDB indexing of NULL values
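A quick sanity check of that NULL ratio, as a minimal sketch in MariaDB/MySQL syntax:

SQL> SELECT COUNT(*) AS total_rows, SUM(data IS NULL) AS null_rows, ROUND(SUM(data IS NULL)/COUNT(*)*100, 1) AS pct_null FROM test;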

Conclusion

These little tests showed some results I did not expect:

  • MariaDB Optimizer does some bad things on this query. OK, this was expected. Bug is reported: MDEV-30017.
  • The originally very bad execution times from MariaDB 10.6 (see the timing notes in the first query) were not reproducible any more the next day. I should consider running ANALYZE TABLE more often (see the sketch after this list). I should have known this because it is not the first time I have experienced this.
  • We see a lot of variation between different measurements. This I should investigate a bit more later...
  • At the MariaDB Server Fest 2022 yesterday I heard that with the MariaDB 11 optimizer everything will become much better... Wait! There is something more I do not understand and did not expect:
  • PostgreSQL response time is consistently and dramatically better for these queries. I checked the documentation but did not find anything besides their normal cache that would explain this (something similar to the MariaDB Query Cache, for example). If somebody has a clue why they manage to respond so fast, or why we respond so slowly, I would be happy for a hint...
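A minimal sketch of the statistics refresh mentioned in the list above (standard syntax; whether it stabilises the timings here is exactly what should be re-measured):

-- MariaDB/MySQL
SQL> ANALYZE TABLE test;

-- PostgreSQL
test=# ANALYZE test;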

Taxonomy upgrade extras: postgresql, Optimizer, performance, response time, latency

Migration of your data from one database to another

Shinguz - Thu, 2022-11-17 17:01

Before you consider migrating your data from MySQL to another database, you have to know which objects have to be migrated.

With this query you will find the objects to consider:

SELECT TABLE_SCHEMA AS `SCHEMA`, IF(TABLE_TYPE = 'BASE TABLE', 'TABLE', TABLE_TYPE) AS OBJECT_TYPE, TABLE_NAME AS `OBJECT_NAME`
     , IFNULL(ENGINE, '') AS ENGINE, IFNULL(TABLE_ROWS, '') AS `ROWS`
     , IFNULL(DATA_LENGTH, '') AS DATA_SIZE, IFNULL(INDEX_LENGTH, '') AS INDEX_SIZE
  FROM information_schema.tables
 WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
 UNION
SELECT '', IF(is_role = 'Y', 'ROLE', 'USER'), CONCAT("'", user, "'", '@', "'", host, "'") AS OBJECT_TYPE, '', '', '', ''
  FROM mysql.user
 UNION
SELECT db, type, name, '', '', '', ''
  FROM mysql.proc
 WHERE db NOT IN ('sys', 'information_schema', 'performance_schema', 'mysql')
 UNION
SELECT db, 'EVENT', name, '', '', '', ''
  FROM mysql.event
 UNION
SELECT trigger_schema, 'TRIGGER', trigger_name, '', '', '', ''
  FROM information_schema.triggers
 UNION
SELECT db, 'EVENT', name, '', '', '', ''
  FROM mysql.event
 ORDER BY `SCHEMA` ASC, OBJECT_TYPE ASC, OBJECT_NAME ASC
;
+--------+-------------+---------------------+--------+---------+-----------+------------+
| SCHEMA | OBJECT_TYPE | OBJECT_NAME         | ENGINE | ROWS    | DATA_SIZE | INDEX_SIZE |
+--------+-------------+---------------------+--------+---------+-----------+------------+
|        | ROLE        | 'test_r'@''         |        |         |           |            |
|        | USER        | 'app'@'%'           |        |         |           |            |
|        | USER        | 'app'@'127.0.0.1'   |        |         |           |            |
|        | USER        | 'focmm'@'127.0.0.1' |        |         |           |            |
|        | USER        | 'test'@'localhost'  |        |         |           |            |
| sbtest | TABLE       | sbtest1             | InnoDB | 9680    | 2637824   | 163840     |
| test   | EVENT       | myevent             |        |         |           |            |
| test   | FUNCTION    | format_time         |        |         |           |            |
| test   | PROCEDURE   | diagnostics         |        |         |           |            |
| test   | TABLE       | dt                  | InnoDB | 6       | 16384     | 0          |
| test   | TABLE       | test                | InnoDB | 1045044 | 63520768  | 0          |
| test   | TRIGGER     | test_trigger        |        |         |           |            |
| test   | VIEW        | test_v              |        |         |           |            |
+--------+-------------+---------------------+--------+---------+-----------+------------+

An easy way to dump all the object definitions (except users and roles) is the following command:

mysqldump --user=root --no-data --triggers --routines --events test > /tmp/test_structure_dump.sql
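Users and roles are not covered by the structure dump above. One hedged way to extract them, sketched here as an assumption (the output file name and the filter on anonymous accounts are illustrative, not from the original post), is to generate SHOW GRANTS statements from mysql.user and capture their output:

shell> mysql --user=root --skip-column-names \
         -e "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user WHERE user <> ''" \
       | mysql --user=root --skip-column-names \
       > /tmp/test_grants_dump.sql

Note that the emitted GRANT statements come without trailing semicolons, so some post-processing may be needed before replaying them on the target system.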

If you want to dump your data for importing it into another SQL database, this command can help:

mysqldump --user=root --skip-extended-insert --skip-lock-tables --no-create-info \
  --where='id = id ' --skip-add-locks --skip-comments --skip-quote-names test test \
  | grep -v '^/\*' | grep -v ^$ > /tmp/test_dump.sql
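As an illustration only (user, database name and the assumption that the target tables already exist are mine, not from the original post), such a cleaned dump could then be replayed against PostgreSQL with psql:

shell> psql --username=test --dbname=test --file=/tmp/test_dump.sql   # replays the plain INSERT statements produced above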
Taxonomy upgrade extras: migration, database


Linux Container with LXD for focmm unit testing

Shinguz - Thu, 2022-10-13 19:11

Currently we are in the testing phase of the next release of our Ops Center (focmm). This testing is quite complex because it includes testing the interaction of various components like a MariaDB Galera Cluster, a Galera Load Balancer, a virtual IP, etc.

Recently I was at the Linuxhotel for a Galera Cluster training, and one of the other trainers there was raving about Linux Containers and LXD. So I had a short look at whether LXD could be useful to simplify our unit testing. Some old knowledge from a previous Docker PoC was quite helpful to start with...

Prepare an LXC container for the Galera Load Balancer

This did NOT work as expected with the Debian image, because it was lacking an IPv4 address and I did not quickly find out how to change that. So I used the Ubuntu 22.04 image instead.

shell> lxc remote list
shell> lxc image list images: ubuntu/22.04 amd64
shell> INSTANCE='qa-glb'
shell> lxc launch images:ubuntu/jammy ${INSTANCE}
shell> lxc list *glb*
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |         IPV4          |                     IPV6                     |    TYPE    | SNAPSHOTS |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
| qa-glb | RUNNING | 10.139.158.183 (eth0) | fd42:1730:178f:78c:216:3eff:fe3f:2948 (eth0) | PERSISTENT | 0         |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+

Now we should remember the IPv4 address for later use. I am sure this can be done more elegantly, but for now this is fine...
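If you prefer not to copy the address by hand, a small sketch could look like this (the variable name is mine, and it assumes the csv output format and column selection of the lxc client):

shell> GLB_IPV4=$(lxc list ${INSTANCE} --format csv -c 4 | cut -d' ' -f1)   # e.g. 10.139.158.183
shell> echo ${GLB_IPV4}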

Install Galera Load Balancer in the LXC container

shell> lxc exec ${INSTANCE} -- /bin/bash
container> apt-get update
container> apt-get install wget
container> wget https://support.fromdual.com/admin/download/glb_1.0.1-Ubuntu12.04-x86_64.deb
container> apt-get install ./glb_1.0.1-Ubuntu12.04-x86_64.deb
container> rm -f glb_1.0.1-Ubuntu12.04-x86_64.deb

Then we have to add the unit file:

#
# /etc/systemd/system/glb.service
#

[Unit]
Description=Galera Load Balancer Service
After=network.target

[Service]
EnvironmentFile=/etc/default/glbd
Type=simple
ExecStart=/usr/local/sbin/glbd --daemon --threads $THREADS --max_conn $MAX_CONN $OTHER_OPTIONS --control $CONTROL_ADDR $LISTEN_ADDR $DEFAULT_TARGETS

[Install]
WantedBy=multi-user.target
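After placing a new unit file, systemd usually has to re-read its configuration; this step is not in the original listing and is added here as an assumption:

container> systemctl daemon-reload   # let systemd pick up the new glb.service unit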

Enable the unit file, configure the Galera Load Balancer and start it:

container> systemctl enable glb

# Galera Load Balancer Configuration
# Redhat: /etc/sysconfig/glbd
# Debian: /etc/default/glbd
LISTEN_ADDR="3306"
CONTROL_ADDR="10.139.158.183:8011"
CONTROL_FIFO="/var/run/glbd.fifo"
THREADS="2"
MAX_CONN=151
DEFAULT_TARGETS="10.139.158.1:3330:1 10.139.158.1:3331:1 10.139.158.1:3332:1"
OTHER_OPTIONS="--round"

container> systemctl start glb
container> apt-get remove wget
Container testing

To be sure everything works fine we should do some basic tests:

shell> lxc stop ${INSTANCE}
shell> lxc start ${INSTANCE}
shell> echo getinfo | nc -q 1 10.139.158.183 8011
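A further check, added here as a suggestion and not part of the original test plan, is to verify that the glb service came back up automatically after the container restart:

shell> lxc exec ${INSTANCE} -- systemctl is-active glb   # should print "active" if the unit was enabled correctly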
Unit testing focmm against Galera Load Balancer in the LXC container

And finally we did the unit testing of focmm against the Galera Load Balancer in the LXC container:

shell> ./tst/run_all_tests.php --instance=qamariadb106 --module=LoadBalancer 2>/dev/null
Environment is: qamariadb106
Logfile is: /tmp/focmm_tst_qamariadb106.log
Tests are: module=LoadBalancer and function=all
OK stopLoadBalancerRemote
OK startLoadBalancerRemote
OK restartLoadBalancerRemote
OK readLoadBalancerTypes
OK createLoadBalancer
OK updateLoadBalancer
OK readLoadBalancers
OK parseGlbConfiguration
OK openSocket
OK writeReadSocket
OK parseGlbGetInfo
OK parseGlbGetStats
OK changeLoadBalancerBackendWeightRemote
OK deleteLoadBalancer
Taxonomy upgrade extras: container, lxc, lxd, testing, unit testing, galera, load balancer

