
Cédric Bruderer

FromDual RSS feed about MySQL, Galera Cluster, MariaDB and Percona Server

Reset MySQL 5.7 password on macOS over the command line

Mon, 2017-01-09 13:17

This one is for all MySQL DBAs who are working on macOS. Since the Apple OS has a rather peculiar way of starting and stopping MySQL compared to Linux, you can run into some issues. These problems occur especially if you have no access to the GUI.

Preparation

Put skip-grant-tables into the [mysqld] section of the my.cnf. A my.cnf template can be found in /usr/local/mysql/support-files. You MUST work as root for all the following steps.

shell> sudo -s
shell> vi /usr/local/mysql/support-files/my-default.cnf
...
[mysqld]
skip-grant-tables
skip-networking
...

Save the configuration file! (In vi this is "[ESC] + :x")

Continue with stopping MySQL:

launchctl unload /Library/LaunchDaemons/com.oracle.oss.mysql.mysqld.plist

Restart MySQL, so skip-grant-tables becomes active:

launchctl load /Library/LaunchDaemons/com.oracle.oss.mysql.mysqld.plist

Reset the password

After MySQL is started again, you can log into the CLI and reset the password:

shell> mysql -u root
mysql> FLUSH PRIVILEGES;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'super-secret-password';
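
Once the new password works, do not forget to undo the preparation. A minimal cleanup sketch, assuming the same configuration file and launchd plist as above:

shell> vi /usr/local/mysql/support-files/my-default.cnf    # remove skip-grant-tables and skip-networking again
shell> launchctl unload /Library/LaunchDaemons/com.oracle.oss.mysql.mysqld.plist
shell> launchctl load /Library/LaunchDaemons/com.oracle.oss.mysql.mysqld.plist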

Plan B

If you cannot stop MySQL in a civilised manner, there is a rougher way: send a SIGTERM to the MySQL server.

shell> ps -aef | grep mysql | grep -v grep
   74 28017     1   0 Fri10AM ??         5:59.50 /usr/local/mysql/bin/mysqld --user=_mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --plugin-dir=/usr/local/mysql/lib/plugin --log-error=/usr/local/mysql/data/mysqld.local.err --pid-file=/usr/local/mysql/data/mysqld.local.pid

You should receive one line. The second column from the left is the process ID. Use this process ID to stop the MySQL server.

shell> kill -15 [process id]

In this example, the command would look like this:

shell> kill -15 28017
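
Alternatively, since the pid-file is shown in the ps output above, you could read the process ID from there directly. A sketch, assuming the default data directory from this example:

shell> kill -15 $(cat /usr/local/mysql/data/mysqld.local.pid)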

macOS will restart MySQL, because the process did not stop cleanly. The configuration will be read and the changed parameters become effective. Continue with logging in to the CLI.

Conclusion

No matter how secure your MySQL password is, it is a lot more important to secure access to the server itself. If your server is not protected by something that prevents access from the internet, it will only take a few minutes for someone with bad intentions to take over your database or, worse, the entire server.

Tags: mysql, server


Non-standard database set up with SELinux

Tue, 2016-12-13 15:26
What is SELinux?

Security-Enhanced Linux (SELinux) is an extension to the Linux kernel, developed by the NSA (National Security Agency). It implements Mandatory Access Control (MAC), which allows an administrator to define how applications and users can access resources on a system.

There is more detail in the SELinux wiki: https://selinuxproject.org/page/FAQ
... and the CentOS documentation: https://wiki.centos.org/HowTos/SELinux

Some distributions have it installed by default but not active, some have it installed and active, and some don't have it installed at all.

How do I know if SELinux is active?

SELinux comes with some new commands. To see the current status of SELinux, use "getenforce" or "sestatus":

[root@localhost ~]# getenforce
Enforcing

- OR -

[root@localhost ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

There are three modes available:

  • Enforcing: SELinux is active and enforcing restrictions.
  • Permissive: Restrictions are not enforced, but policy violations are reported.
  • Disabled: SELinux is turned off.

Changing modes

If you want to change the mode of SELinux, use "setenforce":

setenforce [ Enforcing | Permissive | 1 | 0 ]

Or edit the configuration file under "/etc/selinux/config".
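
For reference, the persistent setting in /etc/selinux/config is a single line; a minimal sketch of the relevant entries (the real file contains more comments):

# /etc/selinux/config
SELINUX=enforcing        # enforcing | permissive | disabled
SELINUXTYPE=targeted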

Install semanage

If you want to change SELinux policies in an easy way, you will need the tool "semanage". It can be installed with the following command:

yum install policycoreutils-python

Create a directory MySQL/MariaDB can access

NOTE: I am going to work with MariaDB for this blog, as it can be installed from the repositories in CentOS.

The easy way to create a new policy which allows MySQL or MariaDB to use a directory is to add a file-context rule with "semanage", which was installed above.

Then proceed to create the new directory where MySQL/MariaDB can store the binary logs, if they should not be in the datadir.

mkdir /var/lib/mysql_binlog/
chown -R mysql:mysql mysql*
semanage fcontext -a -t mysqld_db_t "/var/lib/mysql_binlog(/.*)?"
restorecon -Rv /var/lib/mysql_binlog

NOTE: You have to give the absolute path to the file or the directory!

If you want to use MySQL/MariaDB on a non-standard port, you also have to allow usage of that port:

semanage port -a -t mysqld_port_t -p tcp 3307
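
To double-check that the port was added, you can list the rules for the mysqld_port_t type (a sketch; the output format varies between versions):

semanage port -l | grep mysqld_port_t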

Once you have created the new directory for the binary logs and made sure it is owned by mysql, you need to change the type of the directory to the one that allows MySQL/MariaDB to use it. If you do not do this, you will get a "Permission denied (13)" error.

"semanage" is used to make this change persistent, even when the entire file system relabelled.

I was, however, unable to change the socket location. I am still unsure what the problem was, as MariaDB neither started nor returned any error.

Enable MySQL to write to this directory

vi /etc/my.cnf
...
[mysqld]
log-bin=/var/lib/mysql_binlog/binlog
...

systemctl restart mariadb

Tags: mysql, mariadb, centos, security, selinux


How to move InnoDB-Logfiles on a Galera Cluster

Wed, 2016-10-05 10:35

Somebody recently asked what they had to do if they wanted to move their InnoDB log files back to the datadir. As an extra challenge, the servers were part of a Galera Cluster.


My first thought was:

The problem is not the Galera Cluster itself; it is the rsync SST (wsrep_sst_method = rsync) that could cause trouble and destroy your InnoDB log files by simply overwriting or deleting them.


So I tried to confirm my thought and realised I was wrong. This works anyway, because the node just takes the dataset from the other node. (The backup plan was ready now.)


Preferably, the cluster does an IST, where it only fetches the missing write sets. This way you are not in danger of losing the InnoDB log files.


I will explain the way I would recommend:

First edit the my.cnf. The variable you have to change is innodb_log_group_home_dir. It contains the location of the InnoDB log files. Set it to the new location of the log files.
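
In the my.cnf this is a single line in the [mysqld] section (a sketch; /new/location/ stands for the directory you are moving the log files to):

[mysqld]
innodb_log_group_home_dir = /new/location/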


After this is done, stop the MySQL server:

shell> service mysql stop
- OR, for those who have systemd -
shell> systemctl stop mysql

When the server is stopped, move the log files from their old location to the new one:

shell> mv /path/to/old/location/ib_logfile* /new/location/

After you have made sure they are in the right place, you can start the MySQL server again.

shell> service mysql start
- OR -
shell> systemctl start mysql


If this goes wrong, you can force an SST by removing grastate.dat in the datadir. This will cause the node to fetch the dataset from another node and return to work.

Tags: Galera Cluster, innodb


Why is varchar(255) not varchar(255)?

Fri, 2016-06-24 16:18

Recently I was working on a client's question and stumbled over an issue with replication and mixed character sets. The client asked whether it is possible to replicate data to a table on a MySQL slave where one column has a different character set than the same column on the master.

 

I set up two servers with identical table definitions and changed the character set on one column on the slave from latin1 to utf8.

 

Master:

CREATE TABLE `test` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(255) DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 

Slave:

CREATE TABLE `test` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 

So far no problem: I was able to start the replication and run some INSERT statements with special characters (like ä, ö, ü, ...). But when I went to look for them in the slave's table, I could not find them.

 

"SHOW SLAVE STATUS", showed me this error:

Column 1 of table 'test.test' cannot be converted from type 'varchar(255)' to type 'varchar(255)'

 

You might ask yourself now: but the columns have the same type, so what is the problem? What is not shown in the error is the fact that there are two different character sets.

 

The log file is of no help either. It only shows the same error and tells you to fix it.

2016-05-26 15:51:06 9269 [ERROR] Slave SQL: Column 1 of table 'test.test' cannot be converted from type 'varchar(255)' to type 'varchar(255)', Error_code: 1677
2016-05-26 15:51:06 9269 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'valkyrie_mysqld35701_binlog.000050' position 120
2016-05-26 15:53:39 9269 [ERROR] Error reading packet from server: Lost connection to MySQL server during query ( server_errno=2013)

 

Skipping the statement will not work, as the server will just fail again when the next statement shows up.

 

For all those who are now running to change the character set: STOP!

Changing the character set of columns or tables containing data can be fatal when done incorrectly. MySQL offers a statement to convert tables and columns to the character set you wish to have.

 

To convert the entire table, you can write:

ALTER TABLE tbl_name CONVERT TO CHARACTER SET charset_name;
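
Applied to the example table above, converting the slave's table as a whole would look like this (a sketch; test it on a copy first, since CONVERT TO rewrites the stored data):

ALTER TABLE test CONVERT TO CHARACTER SET utf8;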

 

To convert a single column, you can write:

ALTER TABLE tbl_name MODIFY latin1_column TEXT CHARACTER SET utf8;
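
To see which character set a column actually uses, you can query the information schema (a sketch using the example table in the schema 'test'):

mysql> SELECT COLUMN_NAME, CHARACTER_SET_NAME, COLUMN_TYPE
       FROM information_schema.COLUMNS
       WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'test';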

 

More details can be found in the ALTER TABLE documentation of MySQL. (Converting character sets is at the end of the article.)

 

Just to be clear, this is no bug! MySQL replication was never intended to work with mixed character sets, and it makes a lot of sense that the replication is halted when differences are discovered. This test was only an experiment.

Tags: replication, character set, utf8, utf8mb4


How to become a certified DBA

Tue, 2016-05-10 10:16

I recently managed to get my certification as a MySQL 5.6 DBA and was asked to write a blog about it, because I had trouble getting the information I needed.

You may have figured out too that Oracle does not really supply you with information about the certification. At least there is the MySQL documentation; it contains all the information you need.

Further, I recommend using a virtual Linux machine in combination with our tool MyEnv. This way you can simulate multiple scenarios, including replication set-ups, and if one or two servers die during your exercises, nobody gets mad at you.

When learning, make sure to have a look at the following topics:

  • Query tuning
  • Parameter tuning
  • MySQL client tools (mysqldump, mysqladmin, ...)
  • MySQL Audit Plugin
  • How to secure MySQL (especially the correct assignment of privileges)
  • How to use the Performance and Information Schema
  • Partitions
  • Replication
  • Backup and Recovery (both the physical and the logical variant)


The certification takes 150 minutes and contains 100 questions. 60% of your answers have to be correct in order to pass. If you keep a pace of one answer per minute, you will also have enough time to go over those answers you were not entirely sure about the first time.


Define preferred SST donor for Galera Cluster

Fri, 2016-04-15 18:00

One of our customers recently ran into a problem where they wanted a preferred donor for SST whenever a node came up. The problem was that the node did not come up when the preferred donor was not running.

In the documentation you can find the parameter wsrep_sst_donor, which prefers the specified node as SST donor. This is great, as long as the donor is actually running.

The problem can be fixed by adding a comma to the end of the value of wsrep_sst_donor, which would look like this:

wsrep_sst_donor="galera2,"

Note the comma at the end of the value. This trailing comma basically tells the node that galera2 is the preferred donor, and that if galera2 is not available, any other available node will be used as donor.
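
In the configuration file this is just one line among the other wsrep settings of the joining node (a minimal sketch; galera2 is the example node name from above):

[mysqld]
wsrep_sst_donor = "galera2,"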

You could also specify a secondary donor, one of which then has to be available for the node to come up:

wsrep_sst_donor="galera2,galera1"

In this case, galera1 will be used as donor if galera2 is not available. If both are not available, the node will refuse to come up.

Tags: Galera Cluster


Replication in a star

Thu, 2016-01-21 21:24

Most of you know that it is possible to synchronize MySQL and MariaDB servers using replication. But with the latest releases, it is also possible to use more than just two servers in a multi-master setup.

Most of you know that both MySQL and MariaDB support replication in a hierarchical master-slave setup to propagate changes across all connected servers.

But with the latest releases, a slave can have more than one master.

The keyword: Multi-Source replication

It is supported from MySQL 5.7 and MariaDB 10.0 on, and this article describes how to set it up.

What does Multi-Source mean?

Multi-Source means that you can take two or more masters and replicate them to one slave, where their changes will be merged. This works just like regular MySQL/MariaDB replication.

Well, we are going to exploit that a little. It is still possible to configure a master-master set up, which basically allows the following configuration.

Multiple servers are conjoined in a cluster, where every server is a master, and the replication happens over one central node.

Why should we use Multi-Source-Replication, if it's just Master-Master?

With a master-master setup, you are limited to two servers or a ring topology. With Multi-Source, you can now use more than two servers without having to use the ring topology, which might break and cause the replication to halt.

Further, it is possible to back everything up in one place, without the risk of interrupting access to the databases.

Layout

The logical topology is a star. The following image shows a possible set up, with servers located in different countries. Thanks to the asynchronous replication, which does not require a broadband connection to work properly, this is possible without a problem.

What has to be considered?
  • The problems for any other Master-Master-Setup apply here as well.
  • If the replication in the cluster is stalled, the problem is usually on more than one server, maybe even the entire cluster.
  • Although the synchronization is asynchronous and does not cause a lot of network traffic itself, the replication of large or heavily accessed databases will cause some traffic at the central node.

How do I set it up?

The way you set it up is like any other master-master replication, except that you will have more masters in the cluster.

1) Set up a standard installation of MySQL 5.7 or MariaDB 10.0 or above.

 

2) Prepare the configuration on all servers:

- On all the outer nodes (In Layout: All except server 1)

log_slave_updates = 0

- On the central node (In Layout: server 1)

log_slave_updates = 1

 

NOTE:

  • The central server must forward everything it receives, so that the changes starting on some outer node will also reach all the other outer nodes.
  • The outer nodes must not forward such changes, because they would loop through the cluster forever.

 

- Give each server a unique ID!

 

- Create the user which will be used for the replication:

CREATE USER 'replicator'@'localhost' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'' IDENTIFIED BY 'password';

- On MySQL 5.7 you have to enable the use of GTIDs (see step 3).
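
Putting the preparation together, the static part of an outer node's my.cnf could look roughly like this (a sketch; the server ID is just an example value and must be unique on every server):

[mysqld]
server_id         = 2   # unique on every server
log_slave_updates = 0   # set to 1 only on the central node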

 

3) Set up the replication with all the outer nodes:

I will start with MySQL 5.7:

Source:

  • MySQL Multi-Source-Replication
  • Online enable GTIDs

Although there is a paragraph which states that it is possible with file and position, I experienced something different, which forced me to enable GTIDs.

You have to change the repositories for "master info" and "relay log" to the format "TABLE":

mysql> SET GLOBAL master_info_repository = 'TABLE';
mysql> SET GLOBAL relay_log_info_repository = 'TABLE';

Further, it is necessary to enable GTID. From MySQL 5.7.6 and higher, it is possible to do this without restarting the server.

mysql> SET GLOBAL ENFORCE_GTID_CONSISTENCY = ON;
mysql> SET GLOBAL gtid_mode = OFF_PERMISSIVE;
mysql> SET GLOBAL gtid_mode = ON_PERMISSIVE;
mysql> SET GLOBAL gtid_mode = ON;

The value of "ENFORCE_GTID_CONSISTENCY" has to be "ON", otherwise the slave will return an error and refuse to work. And don't forget to make those changes in your "my.cnf" persistent.

Create a new link to a master:

mysql> CHANGE MASTER TO MASTER_HOST='', MASTER_PORT=, MASTER_USER='', MASTER_PASSWORD='', MASTER_AUTO_POSITION=1 FOR CHANNEL 'mysql-master';

Start slave:

mysql> START SLAVE [thread_type] [FOR CHANNEL 'channel_name'];

Stop slave:

mysql> STOP SLAVE [thread_type] [FOR CHANNEL 'channel_name'];

How to modify a link:

mysql> STOP SLAVE FOR CHANNEL 'mysql-master';
mysql> CHANGE MASTER TO MASTER_HOST='', MASTER_PORT=, MASTER_USER='', MASTER_PASSWORD='', MASTER_AUTO_POSITION=1 FOR CHANNEL 'mysql-master';
mysql> START SLAVE FOR CHANNEL 'mysql-master';
mysql> SHOW SLAVE STATUS FOR CHANNEL 'mysql-master'\G

 

Now MariaDB 10.0:

Source: MariaDB Multi-Source-Replication

Create a new link to a master (ATTENTION: MariaDB has a different syntax, compared to MySQL):

mariadb> CHANGE MASTER 'maria-master' TO MASTER_HOST='', MASTER_PORT=, MASTER_USER='', MASTER_PASSWORD='';

Start all slaves on this server at the same time:

mariadb> START ALL SLAVES;

Check the status of all slaves on this server:

mariadb> SHOW ALL SLAVES STATUS\G

If you want to manage a slave on its own, you can use the regular commands. But you have to make the connection the default one.

Here is an example of what you have to do if you want to modify the connection 'maria-master':

mariadb> STOP SLAVE 'maria-master';
mariadb> CHANGE MASTER 'maria-master' TO MASTER_HOST='', MASTER_PORT=, MASTER_USER='', MASTER_PASSWORD='';
mariadb> START SLAVE 'maria-master';
mariadb> SHOW SLAVE 'maria-master' STATUS\G

After you have done this, you have a normal master-slave replication between two servers (as shown in the following picture). To complete the star, repeat those steps on each server you want to connect.

Is it possible to expand an existing set up?

It is no problem to expand an existing master-master or master-slave set up. However, I would recommend creating new connections. This way the replication configuration is consistent with the other connections.

How do I break it?

Just like any other master-master setup. So be careful with writing on more than one server. In general, if a bad command is committed, it will be replicated to the other nodes. This will cause the cluster to stall.

Another problem is auto_increment. Duplicate IDs will cause a "Duplicate Key Error" and stall each server they happen on. This can be prevented by setting the values of "auto_increment_increment" and "auto_increment_offset". In this scenario, the value of "auto_increment_increment" should be 7 (the number of servers in the cluster) and the value of "auto_increment_offset" a different value from 1 to 7 on each server.
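
For one of the seven servers this could look as follows in the my.cnf (a sketch; the offset shown is just an example and must be unique per node):

[mysqld]
auto_increment_increment = 7   # number of servers in the star
auto_increment_offset    = 3   # between 1 and 7, unique per server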

How do I fix it?

Sadly, that is not so easy. In addition, the statements are still replicated over the cluster, which usually causes more than just one server to stall; most of the time it is the entire cluster.

You have to execute every action on each connection. This gets tedious if you have a large number of nodes.

Let's assume you have a duplicate key error, because two inserts happened at the same time.

 

MySQL 5.7:

If you experience a stalled replication on MySQL, you have to skip the GTID of the transaction which caused the stall.

First, stop the slave:

mysql> STOP SLAVE FOR CHANNEL 'failed-transactions-channel-name';

Retrieve the next GTID:

mysql> SHOW SLAVE STATUS\G

The line "Retrieved_Gtid_Set" contains the next GTID which would be executed. Copy the value and tell the server to execute that GTID:

mysql> SET GTID_NEXT="dd57a411-b477-11e5-b518-005056244454";

Execute a blank transaction:

mysql> BEGIN; COMMIT;

Tell the server to take control of the GTIDs:

mysql> SET GTID_NEXT="AUTOMATIC";

And restart the slave:

mysql> START SLAVE FOR CHANNEL 'failed-transactions-channel-name';

 

MariaDB 10.0:

Stop the slave for the connection:

mariadb> STOP SLAVE 'maria-master';

Define which connection, you would like to edit:

mariadb> SET @@default_master_connection='maria-master'; # No, it's not a joke.

Skip the failed statement:

mariadb> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;

Restart the slave:

mariadb> START SLAVE 'maria-master';

Check if the slave is running:

mariadb> SHOW SLAVE 'maria-master' STATUS\G

Please note the "@@default_master_connection". It is really necessary to set this variable, or you will not be able to change the counter you want. I'm not sure if I should be surprised or shocked that this is the recommended solution by MariaDB.

Hot expand

What is needed to add a new node? Is it required to stop the entire cluster? Or can I just add a new server?

If you want to add a new node to the cluster, it would be best to take a dump from another node, including the master data. You then import that dump into the new node, make sure the link to the master is configured correctly, and start it. If everything is set up the right way, you should have no problems at all.

It is not necessary to stop the entire cluster, since you can add the links between the nodes while the servers are up.

Backup and Restore

Method 1

The thought behind Multi-Source replication was to make the administrator's life easier when it comes to backups. Instead of backing up all data at their respective locations, you can gather it on one server and do the backup on this machine, without interrupting the work of the "productive" servers.

To obtain a backup of all the replicated databases from the cluster, it is best to do it on the central server, because everything has to pass through it. This method is suitable to protect yourself from losing all data on the cluster. The downside is that when one node dies, you have to obtain the backup from that location and transfer it to the failed server.

Method 2

If you would like to keep your data safe at the location of the server, you can set up a slave at each location and replicate the local master to it. For a restore, you could use the existing slave as the new node of the star while the old server is rebuilding. This set up keeps the downtime of the service as short as possible. Further, you are not required to transfer the backup from one location to another, since it is already stored close to the failed server.

 

 

Versions used:

MariaDB: 10.1.10

MySQL: 5.7.10

Tags: GTID, replication, multi-source replication
