MySQL Tech-Feed (en)

How to Install MySQL Enterprise Server 5.7 on Red Hat Enterprise Linux 7

Shinguz - Mon, 2016-10-24 22:03

If you already have Red Hat Enterprise Linux 7 (RHEL 7) or CentOS 7 in use, you have probably found out that it is not that easy to install MySQL Enterprise Server, because the MariaDB 5.5 libraries are already pre-installed:

shell> yum localinstall mysql-commercial-client-5.7.16-1.1.el7.x86_64.rpm \
         mysql-commercial-libs-5.7.16-1.1.el7.x86_64.rpm \
         mysql-commercial-server-5.7.16-1.1.el7.x86_64.rpm \
         mysql-commercial-common-5.7.16-1.1.el7.x86_64.rpm
Loaded plugins: fastestmirror
Examining mysql-commercial-client-5.7.16-1.1.el7.x86_64.rpm: mysql-commercial-client-5.7.16-1.1.el7.x86_64
Marking mysql-commercial-client-5.7.16-1.1.el7.x86_64.rpm to be installed
Examining mysql-commercial-libs-5.7.16-1.1.el7.x86_64.rpm: mysql-commercial-libs-5.7.16-1.1.el7.x86_64
Marking mysql-commercial-libs-5.7.16-1.1.el7.x86_64.rpm to be installed
Examining mysql-commercial-server-5.7.16-1.1.el7.x86_64.rpm: mysql-commercial-server-5.7.16-1.1.el7.x86_64
Marking mysql-commercial-server-5.7.16-1.1.el7.x86_64.rpm to be installed
Examining mysql-commercial-common-5.7.16-1.1.el7.x86_64.rpm: mysql-commercial-common-5.7.16-1.1.el7.x86_64
Marking mysql-commercial-common-5.7.16-1.1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.50-1.el7_2 will be obsoleted
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
Loading mirror speeds from cached hostfile
 * base: wftp.tu-chemnitz.de
 * extras: wftp.tu-chemnitz.de
 * updates: mirror.rackspeed.de
--> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
---> Package mysql-commercial-client.x86_64 0:5.7.16-1.1.el7 will be installed
---> Package mysql-commercial-common.x86_64 0:5.7.16-1.1.el7 will be installed
---> Package mysql-commercial-libs.x86_64 0:5.7.16-1.1.el7 will be obsoleting
---> Package mysql-commercial-server.x86_64 0:5.7.16-1.1.el7 will be installed
--> Processing Dependency: /usr/bin/perl for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: net-tools for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: perl(Getopt::Long) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: perl(strict) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: libaio.so.1()(64bit) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Running transaction check
---> Package libaio.x86_64 0:0.3.109-13.el7 will be installed
---> Package mariadb-libs.x86_64 1:5.5.50-1.el7_2 will be obsoleted
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
--> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
---> Package perl.x86_64 4:5.16.3-286.el7 will be installed
--> Processing Dependency: perl-libs = 4:5.16.3-286.el7 for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Socket) >= 1.3 for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Scalar::Util) >= 1.10 for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl-macros for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl-libs for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(threads::shared) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(threads) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(constant) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Time::Local) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Time::HiRes) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Storable) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Socket) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Scalar::Util) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Pod::Simple::XHTML) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Pod::Simple::Search) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Filter::Util::Call) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Temp) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Spec::Unix) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Spec::Functions) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Spec) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Path) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Exporter) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Cwd) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Carp) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: libperl.so()(64bit) for package: 4:perl-5.16.3-286.el7.x86_64
---> Package perl-Getopt-Long.noarch 0:2.40-2.el7 will be installed
--> Processing Dependency: perl(Pod::Usage) >= 1.14 for package: perl-Getopt-Long-2.40-2.el7.noarch
--> Processing Dependency: perl(Text::ParseWords) for package: perl-Getopt-Long-2.40-2.el7.noarch
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.50-1.el7_2 will be obsoleted
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
--> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
---> Package perl-Carp.noarch 0:1.26-244.el7 will be installed
---> Package perl-Exporter.noarch 0:5.68-3.el7 will be installed
---> Package perl-File-Path.noarch 0:2.09-2.el7 will be installed
---> Package perl-File-Temp.noarch 0:0.23.01-3.el7 will be installed
---> Package perl-Filter.x86_64 0:1.49-3.el7 will be installed
---> Package perl-PathTools.x86_64 0:3.40-5.el7 will be installed
---> Package perl-Pod-Simple.noarch 1:3.28-4.el7 will be installed
--> Processing Dependency: perl(Pod::Escapes) >= 1.04 for package: 1:perl-Pod-Simple-3.28-4.el7.noarch
--> Processing Dependency: perl(Encode) for package: 1:perl-Pod-Simple-3.28-4.el7.noarch
---> Package perl-Pod-Usage.noarch 0:1.63-3.el7 will be installed
--> Processing Dependency: perl(Pod::Text) >= 3.15 for package: perl-Pod-Usage-1.63-3.el7.noarch
--> Processing Dependency: perl-Pod-Perldoc for package: perl-Pod-Usage-1.63-3.el7.noarch
---> Package perl-Scalar-List-Utils.x86_64 0:1.27-248.el7 will be installed
---> Package perl-Socket.x86_64 0:2.010-3.el7 will be installed
---> Package perl-Storable.x86_64 0:2.45-3.el7 will be installed
---> Package perl-Text-ParseWords.noarch 0:3.29-4.el7 will be installed
---> Package perl-Time-HiRes.x86_64 4:1.9725-3.el7 will be installed
---> Package perl-Time-Local.noarch 0:1.2300-2.el7 will be installed
---> Package perl-constant.noarch 0:1.27-2.el7 will be installed
---> Package perl-libs.x86_64 4:5.16.3-286.el7 will be installed
---> Package perl-macros.x86_64 4:5.16.3-286.el7 will be installed
---> Package perl-threads.x86_64 0:1.87-4.el7 will be installed
---> Package perl-threads-shared.x86_64 0:1.43-6.el7 will be installed
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.50-1.el7_2 will be obsoleted
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
--> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
---> Package perl-Encode.x86_64 0:2.51-7.el7 will be installed
---> Package perl-Pod-Escapes.noarch 1:1.04-286.el7 will be installed
---> Package perl-Pod-Perldoc.noarch 0:3.20-4.el7 will be installed
--> Processing Dependency: perl(parent) for package: perl-Pod-Perldoc-3.20-4.el7.noarch
--> Processing Dependency: perl(HTTP::Tiny) for package: perl-Pod-Perldoc-3.20-4.el7.noarch
---> Package perl-podlators.noarch 0:2.5.1-3.el7 will be installed
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.50-1.el7_2 will be obsoleted
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
--> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
---> Package perl-HTTP-Tiny.noarch 0:0.033-3.el7 will be installed
---> Package perl-parent.noarch 1:0.225-244.el7 will be installed
--> Finished Dependency Resolution
Error: Package: 2:postfix-2.10.1-6.el7.x86_64 (@anaconda)
           Requires: libmysqlclient.so.18(libmysqlclient_18)(64bit)
           Removing: 1:mariadb-libs-5.5.50-1.el7_2.x86_64 (@updates)
               libmysqlclient.so.18(libmysqlclient_18)(64bit)
           Obsoleted By: mysql-commercial-libs-5.7.16-1.1.el7.x86_64 (/mysql-commercial-libs-5.7.16-1.1.el7.x86_64)
               Not found
           Available: 1:mariadb-libs-5.5.44-2.el7.centos.x86_64 (base)
               libmysqlclient.so.18(libmysqlclient_18)(64bit)
           Available: 1:mariadb-libs-5.5.47-1.el7_2.x86_64 (updates)
               libmysqlclient.so.18(libmysqlclient_18)(64bit)
Error: Package: 2:postfix-2.10.1-6.el7.x86_64 (@anaconda)
           Requires: libmysqlclient.so.18()(64bit)
           Removing: 1:mariadb-libs-5.5.50-1.el7_2.x86_64 (@updates)
               libmysqlclient.so.18()(64bit)
           Obsoleted By: mysql-commercial-libs-5.7.16-1.1.el7.x86_64 (/mysql-commercial-libs-5.7.16-1.1.el7.x86_64)
              ~libmysqlclient.so.20()(64bit)
           Available: 1:mariadb-libs-5.5.44-2.el7.centos.x86_64 (base)
               libmysqlclient.so.18()(64bit)
           Available: 1:mariadb-libs-5.5.47-1.el7_2.x86_64 (updates)
               libmysqlclient.so.18()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

You can remove postfix and mariadb-libs, but this will most probably cause you trouble sooner or later:

shell> yum remove postfix mariadb-libs
Loaded plugins: fastestmirror
Resolving Dependencies
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.50-1.el7_2 will be erased
---> Package postfix.x86_64 2:2.10.1-6.el7 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package           Arch        Version              Repository       Size
====================================================================================================
Removing:
 mariadb-libs      x86_64      1:5.5.50-1.el7_2     @updates        4.4 M
 postfix           x86_64      2:2.10.1-6.el7       @anaconda        12 M

Transaction Summary
====================================================================================================
Remove  2 Packages

Installed size: 17 M
Is this ok [y/N]: n

If you look at the MySQL Installation Guide (Installing MySQL on Linux Using RPM Packages from Oracle), how to install MySQL Enterprise Server is unfortunately not documented very well: "using rpm -Uvh instead makes the installation process more prone to failure, due to potential dependency issues the installation process might run into".
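The underlying conflict is visible in the transcript above: postfix is linked against libmysqlclient.so.18, which is provided by the pre-installed mariadb-libs package, while mysql-commercial-libs only provides libmysqlclient.so.20. The mysql-commercial-libs-compat package is there to close exactly this gap by providing the old library version again. A minimal sketch to verify this yourself (the RPM file name is taken from the transcripts above; the second command should list the libmysqlclient.so.18 capabilities):

shell> rpm -q --whatprovides 'libmysqlclient.so.18()(64bit)'
shell> rpm -qp --provides mysql-commercial-libs-compat-5.7.16-1.1.el7.x86_64.rpm | grep libmysqlclient.so.18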

So the correct way to install MySQL Enterprise Server 5.7 on Red Hat Enterprise Linux 7 (RHEL7) or CentOS 7 is as follows:

shell> rpm --upgrade mysql-commercial-libs-compat-5.7.16-1.1.el7.x86_64.rpm \
         mysql-commercial-libs-5.7.16-1.1.el7.x86_64.rpm \
         mysql-commercial-common-5.7.16-1.1.el7.x86_64.rpm

shell> yum localinstall mysql-commercial-server-5.7.16-1.1.el7.x86_64.rpm \
         mysql-commercial-client-5.7.16-1.1.el7.x86_64.rpm
Loaded plugins: fastestmirror
Examining mysql-commercial-server-5.7.16-1.1.el7.x86_64.rpm: mysql-commercial-server-5.7.16-1.1.el7.x86_64
Marking mysql-commercial-server-5.7.16-1.1.el7.x86_64.rpm to be installed
Examining mysql-commercial-client-5.7.16-1.1.el7.x86_64.rpm: mysql-commercial-client-5.7.16-1.1.el7.x86_64
Marking mysql-commercial-client-5.7.16-1.1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package mysql-commercial-client.x86_64 0:5.7.16-1.1.el7 will be installed
---> Package mysql-commercial-server.x86_64 0:5.7.16-1.1.el7 will be installed
--> Processing Dependency: /usr/bin/perl for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
Loading mirror speeds from cached hostfile
 * base: wftp.tu-chemnitz.de
 * extras: wftp.tu-chemnitz.de
 * updates: mirror.rackspeed.de
--> Processing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: net-tools for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: perl(Getopt::Long) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: perl(strict) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Processing Dependency: libaio.so.1()(64bit) for package: mysql-commercial-server-5.7.16-1.1.el7.x86_64
--> Running transaction check
---> Package libaio.x86_64 0:0.3.109-13.el7 will be installed
---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
---> Package perl.x86_64 4:5.16.3-286.el7 will be installed
--> Processing Dependency: perl-libs = 4:5.16.3-286.el7 for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Socket) >= 1.3 for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Scalar::Util) >= 1.10 for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl-macros for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl-libs for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(threads::shared) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(threads) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(constant) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Time::Local) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Time::HiRes) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Storable) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Socket) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Scalar::Util) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Pod::Simple::XHTML) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Pod::Simple::Search) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Filter::Util::Call) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Temp) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Spec::Unix) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Spec::Functions) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Spec) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(File::Path) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Exporter) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Cwd) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: perl(Carp) for package: 4:perl-5.16.3-286.el7.x86_64
--> Processing Dependency: libperl.so()(64bit) for package: 4:perl-5.16.3-286.el7.x86_64
---> Package perl-Getopt-Long.noarch 0:2.40-2.el7 will be installed
--> Processing Dependency: perl(Pod::Usage) >= 1.14 for package: perl-Getopt-Long-2.40-2.el7.noarch
--> Processing Dependency: perl(Text::ParseWords) for package: perl-Getopt-Long-2.40-2.el7.noarch
--> Running transaction check
---> Package perl-Carp.noarch 0:1.26-244.el7 will be installed
---> Package perl-Exporter.noarch 0:5.68-3.el7 will be installed
---> Package perl-File-Path.noarch 0:2.09-2.el7 will be installed
---> Package perl-File-Temp.noarch 0:0.23.01-3.el7 will be installed
---> Package perl-Filter.x86_64 0:1.49-3.el7 will be installed
---> Package perl-PathTools.x86_64 0:3.40-5.el7 will be installed
---> Package perl-Pod-Simple.noarch 1:3.28-4.el7 will be installed
--> Processing Dependency: perl(Pod::Escapes) >= 1.04 for package: 1:perl-Pod-Simple-3.28-4.el7.noarch
--> Processing Dependency: perl(Encode) for package: 1:perl-Pod-Simple-3.28-4.el7.noarch
---> Package perl-Pod-Usage.noarch 0:1.63-3.el7 will be installed
--> Processing Dependency: perl(Pod::Text) >= 3.15 for package: perl-Pod-Usage-1.63-3.el7.noarch
--> Processing Dependency: perl-Pod-Perldoc for package: perl-Pod-Usage-1.63-3.el7.noarch
---> Package perl-Scalar-List-Utils.x86_64 0:1.27-248.el7 will be installed
---> Package perl-Socket.x86_64 0:2.010-3.el7 will be installed
---> Package perl-Storable.x86_64 0:2.45-3.el7 will be installed
---> Package perl-Text-ParseWords.noarch 0:3.29-4.el7 will be installed
---> Package perl-Time-HiRes.x86_64 4:1.9725-3.el7 will be installed
---> Package perl-Time-Local.noarch 0:1.2300-2.el7 will be installed
---> Package perl-constant.noarch 0:1.27-2.el7 will be installed
---> Package perl-libs.x86_64 4:5.16.3-286.el7 will be installed
---> Package perl-macros.x86_64 4:5.16.3-286.el7 will be installed
---> Package perl-threads.x86_64 0:1.87-4.el7 will be installed
---> Package perl-threads-shared.x86_64 0:1.43-6.el7 will be installed
--> Running transaction check
---> Package perl-Encode.x86_64 0:2.51-7.el7 will be installed
---> Package perl-Pod-Escapes.noarch 1:1.04-286.el7 will be installed
---> Package perl-Pod-Perldoc.noarch 0:3.20-4.el7 will be installed
--> Processing Dependency: perl(parent) for package: perl-Pod-Perldoc-3.20-4.el7.noarch
--> Processing Dependency: perl(HTTP::Tiny) for package: perl-Pod-Perldoc-3.20-4.el7.noarch
---> Package perl-podlators.noarch 0:2.5.1-3.el7 will be installed
--> Running transaction check
---> Package perl-HTTP-Tiny.noarch 0:0.033-3.el7 will be installed
---> Package perl-parent.noarch 1:0.225-244.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                  Arch    Version                   Repository                                       Size
====================================================================================================
Installing:
 mysql-commercial-client  x86_64  5.7.16-1.1.el7            /mysql-commercial-client-5.7.16-1.1.el7.x86_64   94 M
 mysql-commercial-server  x86_64  5.7.16-1.1.el7            /mysql-commercial-server-5.7.16-1.1.el7.x86_64  721 M
Installing for dependencies:
 libaio                   x86_64  0.3.109-13.el7            base                                             24 k
 net-tools                x86_64  2.0-0.17.20131004git.el7  base                                            304 k
 perl                     x86_64  4:5.16.3-286.el7          base                                            8.0 M
 perl-Carp                noarch  1.26-244.el7              base                                             19 k
 perl-Encode              x86_64  2.51-7.el7                base                                            1.5 M
 perl-Exporter            noarch  5.68-3.el7                base                                             28 k
 perl-File-Path           noarch  2.09-2.el7                base                                             26 k
 perl-File-Temp           noarch  0.23.01-3.el7             base                                             56 k
 perl-Filter              x86_64  1.49-3.el7                base                                             76 k
 perl-Getopt-Long         noarch  2.40-2.el7                base                                             56 k
 perl-HTTP-Tiny           noarch  0.033-3.el7               base                                             38 k
 perl-PathTools           x86_64  3.40-5.el7                base                                             82 k
 perl-Pod-Escapes         noarch  1:1.04-286.el7            base                                             50 k
 perl-Pod-Perldoc         noarch  3.20-4.el7                base                                             87 k
 perl-Pod-Simple          noarch  1:3.28-4.el7              base                                            216 k
 perl-Pod-Usage           noarch  1.63-3.el7                base                                             27 k
 perl-Scalar-List-Utils   x86_64  1.27-248.el7              base                                             36 k
 perl-Socket              x86_64  2.010-3.el7               base                                             49 k
 perl-Storable            x86_64  2.45-3.el7                base                                             77 k
 perl-Text-ParseWords     noarch  3.29-4.el7                base                                             14 k
 perl-Time-HiRes          x86_64  4:1.9725-3.el7            base                                             45 k
 perl-Time-Local          noarch  1.2300-2.el7              base                                             24 k
 perl-constant            noarch  1.27-2.el7                base                                             19 k
 perl-libs                x86_64  4:5.16.3-286.el7          base                                            687 k
 perl-macros              x86_64  4:5.16.3-286.el7          base                                             43 k
 perl-parent              noarch  1:0.225-244.el7           base                                             12 k
 perl-podlators           noarch  2.5.1-3.el7               base                                            112 k
 perl-threads             x86_64  1.87-4.el7                base                                             49 k
 perl-threads-shared      x86_64  1.43-6.el7                base                                             39 k

Transaction Summary
====================================================================================================
Install  2 Packages (+29 Dependent packages)

Total size: 826 M
Total download size: 12 M
Installed size: 852 M
Is this ok [y/d/N]: y

Further information can be found in the slides of my colleague Jörg Brühe, MySQL selection also against the distribution, which he presented at FrOSCon 2016.


Beware of large MySQL max_sort_length parameter

Shinguz - Wed, 2016-08-24 23:40

Today we had a very interesting phenomenon at a customer. He complained that MySQL kept producing errors of the following type:

[ERROR] mysqld: Sort aborted: Error writing file '/tmp/MYGbBrpA' (Errcode: 28 - No space left on device)

After a first investigation we found that df -h /tmp showed a full disk from time to time, but we could not see any file with ls -la /tmp/MY*.

After some more investigation we even found the query in the Slow Query Log that was producing the problem. It looked similar to this query:

SELECT * FROM test ORDER BY field5, field4, field3, field2, field1;

Now we were able to reproduce the problem at will with the following table:

CREATE TABLE `test` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(64) DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `field1` varchar(16) DEFAULT NULL,
  `field2` varchar(16) DEFAULT NULL,
  `field3` varchar(255) DEFAULT NULL,
  `field4` varchar(255) DEFAULT NULL,
  `field5` varchar(32) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=8912746 DEFAULT CHARSET=utf8;

And we saw the query in SHOW PROCESSLIST:

| Query | 26 | Creating sort index | select * from test order by field5, field4, field3, field2, field1 |

But we were still not able to see who, or rather how the hell, mysqld was filling our disk!

I further remembered that I had seen some strange settings in the my.cnf earlier, when we did the review of the database configuration. But I had somehow ignored them:

[mysqld]
max_sort_length  = 8M
sort_buffer_size = 20M

Now I remembered these settings again. We changed max_sort_length back to its default of 1k and suddenly our space problems disappeared!
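A minimal sketch of how this can be checked and reset at runtime (the persistent fix still has to go into the my.cnf):

mysql> SHOW GLOBAL VARIABLES LIKE 'max_sort_length';
mysql> SET GLOBAL max_sort_length = 1024;  -- back to the default of 1k

Note that SET GLOBAL only affects new connections; sessions that are already open keep their session copy of the variable.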

We played around a bit with different values of max_sort_length and got the following execution times for our query:

max_sort_length   execution time [s]   comment
 64                  8.8 s
 128                 8.2 s
 256                 9.3 s
 512                11.8 s
 1k                 14.9 s
 2k                 20.0 s
 8k                129.0 s
 8M                 75.0 s             disk full (50 G)
Conclusion

We set the value of max_sort_length back to the default. Our problems disappeared and we got working and much faster SELECT queries.

Do not needlessly change MySQL default values without verifying the impact. It can make things worse than before!

The default value of max_sort_length is a good compromise between performance and an appropriate sort length.

Addendum

What I really did not like about this solution was that I did not understand how the problem occurred. So I did some more investigation. We were discussing back and forth whether this could be because of XFS, because of sparse files, or because of some kind of memory mapped files (see also man mmap).

In the end I had the idea to look at the lsof output while my query was running:

mysql> SELECT * FROM test ORDER BY field5, field4, field3, field2, field1;
ERROR 3 (HY000): Error writing file '/tmp/MYBuWcXP' (Errcode: 28 - No space left on device)

shell> lsof -p 14733
COMMAND   PID  USER  FD   TYPE  DEVICE  SIZE/OFF    NODE      NAME
mysqld  14733 mysql  32u   REG    8,18  9705619456  30147474  /tmp/MYck8vf4 (deleted)
mysqld  14733 mysql  49u   REG    8,18   749797376  30147596  /tmp/MYBuWcXP (deleted)

So it looks like there were some deleted files which were still growing!

Further information from the IRC channel led me to the libc temporary files (see also man 3 tmpfile).

And some hints from MadMerlin|work pointed me to:

shell> ls /proc/<pid>/fd

Where you can also see those temporary files.
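Combining those hints, a minimal sketch to spot such unlinked temporary files of a running mysqld (assuming a single mysqld process on the host):

shell> ls -l /proc/$(pidof mysqld)/fd | grep '(deleted)'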

Thanks to MadMerlin|work for the hints!


FromDual Performance Monitor for MySQL and MariaDB 0.10.6 has been released

Shinguz - Wed, 2016-08-03 19:40

FromDual has the pleasure to announce the release of the new version 0.10.6 of its popular Database Performance Monitor for MySQL, MariaDB, Galera Cluster and Percona Server fpmmm.

You can download fpmmm from here.

In the inconceivable case that you find a bug in fpmmm please report it to our Bug-tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

This release contains various bug fixes and improvements. The previous release had some major bugs, so we recommend upgrading...

Changes in fpmmm v0.10.6

fpmmm agent
  • Do not connect to server bug fixed.
  • Special case when lock file was removed when it was read is fixed.
  • Added ORDER BY to all GROUP BY to be compliant for the future.
  • Zabbix 3.0 templates added.
  • MaaS: Function curl_file_create implemented for PHP < 5.5.
  • MaaS: Debug message fixed.
  • MaaS: Curl upload fixed.
  • MaaS: InnoDB: Deadlock and Foreign Key errors are only escaped with xxx when used in MaaS. Otherwise they are sent normally. Foreign Key errors with MaaS are now also escaped with xxx.
Process module
  • Wrong substitution in process vm calculation fixed.
Galera module
  • Template: Galera items changed from normal to delta.
InnoDB module
  • Template: Fixed InnoDB template to work with Zabbix v3.0.
  • Template: InnoDB locking graph improved.

For subscriptions of commercial use of fpmmm please get in contact with us.


MySQL Environment MyEnv 1.3.1 has been released

Shinguz - Wed, 2016-08-03 08:27

FromDual has the pleasure to announce the release of the new version 1.3.1 of its popular MySQL, Galera Cluster, MariaDB and Percona Server multi-instance environment MyEnv.

The new MyEnv can be downloaded here.

In the inconceivable case that you find a bug in the MyEnv please report it to our bug tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.1.x or higher to 1.3.1

# cd ${HOME}/product
# tar xf /download/myenv-1.3.1.tar.gz
# rm -f myenv
# ln -s myenv-1.3.1 myenv

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Changes in MyEnv 1.3.1

MyEnv
  • Bash function bootstrap added.
  • Galera options --bootstrap and --new-cluster and the start method bootstrap were implemented. Typo fixed.
  • New 5.7 variables, and 5.6 variables to avoid nasty warnings in the error log, were added to the my.cnf template. Furthermore the new file system structure was prepared.
  • MySQL 5.7 variables for error log behaviour added.
  • Comment for log_bin added to my.cnf template.
  • ulimit problem fixed crudely in MyEnv init script.
  • wsrep_provider for CentOS added in my.cnf template.
  • Cgroup template improved.
  • Cgroup how-to improved and configuration example added.
MyEnv Installer
  • The instance name default was added to the blacklist.
  • Typo fixed in help of installMyEnv.
MyEnv Utilities
  • Test table prepared for explicit_defaults_for_timestamp configuration.
  • insert_test.sh now has optional parameters for user, host etc.

For subscriptions of commercial use of MyEnv please get in contact with us.


Temporary tables and MySQL STATUS information

Shinguz - Fri, 2016-07-08 18:42

When analysing MySQL configuration and status information at customers it is always interesting to see how the applications behave. This can partially be seen by the output of the SHOW GLOBAL STATUS command. See also Reading MySQL fingerprints.

Today we wanted to know where the high Com_create_table and the twice as high Com_drop_table were coming from. One suspect was TEMPORARY TABLES. But are temporary tables counted in Com_create_table and Com_drop_table at all? This is what we wanted to find out today. The tested MySQL version is 5.7.11.

Caution: Different MySQL or MariaDB versions might behave differently!
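A minimal sketch of how such deltas can be gathered, assuming you sample the counters immediately before and after each statement and subtract the values:

-- in the session that executes the statement:
SHOW SESSION STATUS WHERE Variable_name IN
('Com_create_table', 'Com_drop_table', 'Open_tables', 'Opened_tables', 'Open_table_definitions', 'Opened_table_definitions');

-- in any session, for the global view:
SHOW GLOBAL STATUS WHERE Variable_name IN
('Com_create_table', 'Com_drop_table', 'Open_tables', 'Opened_tables', 'Open_table_definitions', 'Opened_table_definitions');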

The statements below were run in two sessions. After each statement we recorded the delta of the relevant status counters, once as seen by the executing session(s) and once globally:

CREATE TABLE t1 (id INT);
Query OK, 0 rows affected
    session: Com_create_table +1, Opened_table_definitions +1
    global:  Com_create_table +1, Opened_table_definitions +1

CREATE TABLE t1 (id INT);
ERROR 1050 (42S01): Table 't1' already exists
    session: Com_create_table +1, Open_table_definitions +1, Open_tables +1, Opened_table_definitions +1, Opened_tables +1
    global:  Com_create_table +1, Open_table_definitions +1, Open_tables +1, Opened_table_definitions +1, Opened_tables +1

CREATE TABLE t1 (id INT);
ERROR 1050 (42S01): Table 't1' already exists
    session: Com_create_table +1
    global:  Com_create_table +1

DROP TABLE t1;
Query OK, 0 rows affected
    session: Com_drop_table +1, Open_table_definitions -1, Open_tables -1
    global:  Com_drop_table +1, Open_table_definitions -1, Open_tables -1

DROP TABLE t1;
ERROR 1051 (42S02): Unknown table 'test.t1'
    session: Com_drop_table -1
    global:  Com_drop_table -1

CREATE TEMPORARY TABLE ttemp (id INT);
Query OK, 0 rows affected
    session: Com_create_table +1, Opened_table_definitions +2, Opened_tables +1
    global:  Com_create_table +1, Opened_table_definitions +2, Opened_tables +1

CREATE TEMPORARY TABLE ttemp (id INT);
ERROR 1050 (42S01): Table 'ttemp' already exists
    session: Com_create_table +1
    global:  Com_create_table +1

DROP TABLE ttemp;
Query OK, 0 rows affected
    session: Com_drop_table +1
    global:  Com_drop_table +1

CREATE TEMPORARY TABLE ttemp (id int);   -- executed in both sessions
Query OK, 0 rows affected
    session 1: Com_create_table +1, Opened_table_definitions +2, Opened_tables +1
    global:    Com_create_table +2, Opened_table_definitions +4, Opened_tables +2
    session 2: Com_create_table +1, Opened_table_definitions +2, Opened_tables +1

DROP TABLE ttemp;   -- executed in both sessions
Query OK, 0 rows affected
    session 1: Com_drop_table +1
    global:    Com_drop_table +2
    session 2: Com_drop_table +1
Conclusion
  • A successful CREATE TABLE command opens and closes a table definition.
  • An unsuccessful CREATE TABLE command opens the table definition and the file handle of the previous table. So a faulty application can be quite expensive.
  • A further unsuccessful CREATE TABLE command has no additional impact.
  • A DROP TABLE command closes the table definition and the file handle.
  • A CREATE TEMPORARY TABLE opens 2 table definitions and the file handle, and thus behaves differently from CREATE TABLE.
  • But a faulty CREATE TEMPORARY TABLE seems to be much less intrusive.
  • Open_table_definitions and Open_tables are always global, even in session context.

MySQL spatial functionality - points of interest around me

Shinguz - Wed, 2016-06-01 10:13

This week I was preparing the exercises for our MySQL/MariaDB for Beginners training. One of the exercises of the training is about MySQL spatial (GIS) features. I always tell customers: "With these features you can answer questions like: Give me all points of interest around me!"

Now I wanted to try out how it really works and if it is that easy at all...

To get myself an idea of what I want to do I did a little sketch first:

[Sketch: my position, surrounded by shops, restaurants and cafes]

To do this I needed a table and some data:

CREATE TABLE poi (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT
, name VARCHAR(40)
, type VARCHAR(20)
, sub_type VARCHAR(20)
, pt POINT NOT NULL
, PRIMARY KEY (id)
, SPATIAL INDEX(pt)
) ENGINE=InnoDB;

INSERT INTO poi (name, type, sub_type, pt) VALUES
  ('Shop 1', 'Shop', 'Cloth', Point(2,2))
, ('Cafe 1', 'Cafe', '', Point(11,2))
, ('Shop 2', 'Shop', 'Cloth', Point(5,4))
, ('Restaurant 1', 'Restaurant', 'Portugies', Point(8,7))
, ('Cafe 2', 'Cafe', '', Point(3,9))
, ('Shop 3', 'Shop', 'Hardware', Point(11,9))
;

This looks as follows:

SELECT id, CONCAT(ST_X(pt), '/', ST_Y(pt)) AS "X/Y", name, type, sub_type
  FROM poi;
+----+------+--------------+------------+-----------+
| id | X/Y  | name         | type       | sub_type  |
+----+------+--------------+------------+-----------+
|  1 | 2/2  | Shop 1       | Shop       | Cloth     |
|  2 | 11/2 | Cafe 1       | Cafe       |           |
|  3 | 5/4  | Shop 2       | Shop       | Cloth     |
|  4 | 8/7  | Restaurant 1 | Restaurant | Portugies |
|  5 | 3/9  | Cafe 2       | Cafe       |           |
|  6 | 11/9 | Shop 3       | Shop       | Hardware  |
+----+------+--------------+------------+-----------+

Now the question: "Give me all shops in a distance of 4.5 units around me":

SET @hereami = POINT(9,4);

SELECT id, ST_AsText(pt) AS point, name,
       ROUND(ST_Distance(@hereami, pt), 2) AS distance
  FROM poi
 WHERE ST_Distance(@hereami, pt) < 4.5
   AND type = 'Shop'
 ORDER BY distance ASC;
+----+------------+--------+----------+
| id | point      | name   | distance |
+----+------------+--------+----------+
|  3 | POINT(5 4) | Shop 2 |     4.00 |
+----+------------+--------+----------+
1 row in set (0.37 sec)

The query execution plan looks like this:

           id: 1
  select_type: SIMPLE
        table: poi
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 650361
     filtered: 10.00
        Extra: Using where; Using filesort

So no use of the spatial index yet. :-(

Reading the MySQL documentation Using Spatial Indexes provides some more information:

The optimizer investigates whether available spatial indexes can be involved in the search for queries that use a function such as MBRContains() or MBRWithin() in the WHERE clause.

So it looks like the optimizer CAN use a spatial index on function-covered fields in this specific case, but not with the function ST_Distance I had chosen.

So my WHERE clause must look like: "Give me all points within a polygon spanning 4.5 units around my position..."

I did not find any such function on short notice. So I created a hexagon, which is not too far from a circle...

With this hexagon I tried again:

SET @hereami = POINT(9,4);
SET @hexagon = 'POLYGON((9 8.5, 12.897 6.25, 12.897 1.75, 9 -0.5, 5.103 1.75, 5.103 6.25, 9 8.5))';

SELECT id, ST_AsText(pt) AS point, name,
       ROUND(ST_Distance(@hereami, pt), 2) AS distance
  FROM poi
 WHERE MBRContains(ST_GeomFromText(@hexagon), pt)
   AND ST_Distance(@hereami, pt) < 4.5
   AND type = 'Shop'
 ORDER BY distance ASC;
Empty set (0.03 sec)

And tadaaah: damned fast, but the result is not the same! :-( When you look at the graph above it is obvious why: the inradius of the hexagon is only 4.5 × cos(30°) ≈ 3.897 units, so the missing shop, at distance 4.0, is 0.103 units outside of our hexagon search range but within our circle range. So an octagon would have been the better approach...

At least the index is considered now! :-)

           id: 1
  select_type: SIMPLE
        table: poi
   partitions: NULL
         type: range
possible_keys: pt
          key: pt
      key_len: 34
          ref: NULL
         rows: 31356
     filtered: 10.00
        Extra: Using where; Using filesort

I was too lazy to specify an "outer" hexagon, so I specified a square instead:

SET @hereami = POINT(9,4);
SET @square = 'POLYGON((4.5 8.5, 13.5 8.5, 13.5 -0.5, 4.5 -0.5, 4.5 8.5))';

SELECT id, ST_AsText(pt) AS point, name,
       ROUND(ST_Distance(@hereami, pt), 2) AS distance
  FROM poi
 WHERE MBRContains(ST_GeomFromText(@square), pt)
   AND ST_Distance(@hereami, pt) < 4.5
   AND type = 'Shop'
 ORDER BY distance ASC;
+----+------------+--------+----------+
| id | point      | name   | distance |
+----+------------+--------+----------+
|  3 | POINT(5 4) | Shop 2 |     4.00 |
+----+------------+--------+----------+
1 row in set (0.02 sec)

So my shop is in the result again now. And it is even a bit faster!
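The conventional method below filters on x and y columns and uses indexes on (x), (y) and (x, y), none of which exist in the poi table shown above. A hypothetical sketch of how they might have been added (column and index names are assumptions derived from the queries that follow):

ALTER TABLE poi
  ADD COLUMN x DOUBLE
, ADD COLUMN y DOUBLE
, ADD INDEX x (x)
, ADD INDEX y (y)
, ADD INDEX xy (x, y);

UPDATE poi SET x = ST_X(pt), y = ST_Y(pt);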

Now I wanted to find out if these results are any faster than the conventional method with an index on (x) and (y) or (x, y):

SELECT id, ST_AsText(pt) AS point, name,
       ROUND(ST_Distance(@hereami, pt), 2) AS distance
  FROM poi
 WHERE x >= 4.5 AND x <= 13.5
   AND y >= -0.5 AND y <= 8.5
   AND ST_Distance(@hereami, pt) < 4.5
   AND type = 'Shop'
 ORDER BY distance ASC;
1 row in set (0.15 sec)

Here the optimizer chooses the index on x. But I think it could do better. So I forced the optimizer to use the index on (x, y):

SELECT id, ST_AsText(pt) AS point, name,
       ROUND(ST_Distance(@hereami, pt), 2) AS distance
  FROM poi FORCE INDEX (xy)
 WHERE x >= 4.5 AND x <= 13.5
   AND y >= -0.5 AND y <= 8.5
   AND ST_Distance(@hereami, pt) < 4.5
   AND type = 'Shop'
 ORDER BY distance ASC;
1 row in set (0.03 sec)

           id: 1
  select_type: SIMPLE
        table: poi
   partitions: NULL
         type: range
possible_keys: xy
          key: xy
      key_len: 10
          ref: NULL
         rows: 115592
     filtered: 1.11
        Extra: Using index condition; Using where; Using filesort

Same performance as with the spatial index. So it looks like, for this simple task and my data distribution, conventional methods work well enough.

Now I wanted to try a polygon which comes as close as possible to a circle. I solved this with a MySQL stored function which returns a polygon:

DROP FUNCTION polygon_circle;

delimiter //

CREATE FUNCTION polygon_circle(pX DOUBLE, pY DOUBLE, pDiameter DOUBLE, pPoints SMALLINT UNSIGNED)
-- RETURNS VARCHAR(4096) DETERMINISTIC
RETURNS POLYGON DETERMINISTIC
BEGIN
  DECLARE i SMALLINT UNSIGNED DEFAULT 0;
  DECLARE vSteps SMALLINT UNSIGNED;
  DECLARE vPolygon VARCHAR(4096) DEFAULT '';

  -- Input validation
  IF pPoints < 3 THEN RETURN NULL; END IF;
  IF pPoints > 360 THEN RETURN NULL; END IF;
  IF pPoints > 90 THEN RETURN NULL; END IF;
  IF (360 % pPoints) != 0 THEN RETURN NULL; END IF;

  -- Start
  SET vSteps = 360 / pPoints;
  WHILE i < 360 DO
    SET vPolygon = CONCAT(vPolygon, (pX + (SIN(i * 2 * PI() / 360) * pDiameter)), ' ',
                                    (pY + (COS(i * 2 * PI() / 360) * pDiameter)), ', ');
    SET i = i + vSteps;
  END WHILE;

  -- Add first point again
  SET vPolygon = CONCAT("POLYGON((", vPolygon,
                        (pX + (SIN(0 * 2 * PI() / 360) * pDiameter)), " ",
                        (pY + (COS(0 * 2 * PI() / 360) * pDiameter)), "))");

  -- RETURN vPolygon;
  RETURN ST_GeomFromText(vPolygon);
END;
//
delimiter ;

SELECT ST_AsText(polygon_circle(9, 4, 4.5, 6));
-- SELECT polygon_circle(9, 4, 4.5, 8);

Then calling the query in the same way:

SET @hereami = POINT(9,4);

SELECT id, ST_AsText(pt) AS point, name,
       ROUND(ST_Distance(@hereami, pt), 2) AS distance
  FROM poi
 WHERE MBRContains(polygon_circle(9, 4, 4.5, 90), pt)
   AND ST_Distance(@hereami, pt) < 4.5
   AND type = 'Shop'
 ORDER BY distance ASC;
+----+------------+--------+----------+
| id | point      | name   | distance |
+----+------------+--------+----------+
|  3 | POINT(5 4) | Shop 2 |     4.00 |
+----+------------+--------+----------+
1 row in set (0.03 sec)

This seems not to have any significant negative impact on performance.

Results

Test                                #rows    operation          latency
Total                               655360   FTS                1300 ms
Spatial exact Circle                4128     FTS                 520 ms
Spatial inner Hexagon               3916     range (pt)           20 ms
Spatial outer Square                4128     range (pt)           30 ms
Conventional outer Square on (x)    4128     range (x) or (y)    150 ms
Conventional outer Square on (xy)   4128     range (x,y)          30 ms
Spatial good Polygon                4128     range (pt)           30 ms

Why you should take care of MySQL data types

Shinguz - Wed, 2016-05-25 11:42

A customer reported last month that MySQL does a full table scan (FTS) if a query is filtered by an INT value on a VARCHAR column. First I told him that this is not true any more, because MySQL fixed this behaviour a long time ago. He showed me that I was wrong:

CREATE TABLE `test` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(64) DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `data` (`data`)
) ENGINE=InnoDB;

EXPLAIN SELECT * FROM test WHERE data = 42\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: test
   partitions: NULL
         type: ALL
possible_keys: data
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 522500
     filtered: 10.00
        Extra: Using where

EXPLAIN SELECT * FROM test WHERE data = '42'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: test
   partitions: NULL
         type: ref
possible_keys: data
          key: data
      key_len: 67
          ref: const
         rows: 1
     filtered: 100.00
        Extra: NULL

When I executed the query I got some more interesting information:

SELECT * FROM test WHERE data = '42';
Empty set (0.00 sec)

SELECT * FROM test WHERE data = 42;
+--------+----------------------------------+---------------------+
| id     | data                             | ts                  |
+--------+----------------------------------+---------------------+
|   1096 | 42a5cb4a3e76857a3efe7af44ba9f4dd | 2016-05-25 10:26:59 |
...
| 718989 | 42a1921fb2df42126d85f9586532eda4 | 2016-05-25 10:27:12 |
+--------+----------------------------------+---------------------+
767 rows in set, 65535 warnings (0.26 sec)

Looking at the warnings we also find the reason: MySQL casts the column and not the value, which is a bit odd IMHO:

show warnings;
| Warning | 1292 | Truncated incorrect DOUBLE value: '80f52706c2f9de40472ec29a7f70c992' |

A bit suspicious, I looked at the warnings of the query execution plan again:

show warnings;
+---------+------+---------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                     |
+---------+------+---------------------------------------------------------------------------------------------+
| Warning | 1739 | Cannot use ref access on index 'data' due to type or collation conversion on field 'data'   |
| Warning | 1739 | Cannot use range access on index 'data' due to type or collation conversion on field 'data' |
+---------+------+---------------------------------------------------------------------------------------------+

I thought this was fixed, but it seems it is not. The following releases behave like this: MySQL 5.0.96, 5.1.73, 5.5.38, 5.6.25, 5.7.12 and MariaDB 5.5.41, 10.0.21 and 10.1.9.
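If you cannot change the application to quote the literal, a workaround sketch is to cast the value instead of letting the server cast the column; the comparison then happens between strings and the index should become usable again:

EXPLAIN SELECT * FROM test WHERE data = CAST(42 AS CHAR)\G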

The other way around it seems to work in both cases:

SELECT * FROM test WHERE id = 42;
+----+----------------------------------+---------------------+
| id | data                             | ts                  |
+----+----------------------------------+---------------------+
| 42 | 81d74057d7be8f20563da404bb1b3ab0 | 2016-05-25 10:26:56 |
+----+----------------------------------+---------------------+

SELECT * FROM test WHERE id = '42';
+----+----------------------------------+---------------------+
| id | data                             | ts                  |
+----+----------------------------------+---------------------+
| 42 | 81d74057d7be8f20563da404bb1b3ab0 | 2016-05-25 10:26:56 |
+----+----------------------------------+---------------------+

EXPLAIN SELECT * FROM test WHERE id = 42\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: test
   partitions: NULL
         type: const
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: const
         rows: 1
     filtered: 100.00
        Extra: NULL

MariaDB 10.2 Window Function Examples

Shinguz - Mon, 2016-04-18 22:39

MariaDB 10.2 has introduced some Window Functions for analytical queries.

See also: Window Functions, Window Functions, Window function and Rows and Range, Preceding and Following

Function ROW_NUMBER()

Simulate a row number (sequence), top 3:

SELECT ROW_NUMBER() OVER (PARTITION BY NULL ORDER BY category_id) AS num
     , category.category_id
  FROM category
 LIMIT 3;

or

SELECT ROW_NUMBER() OVER (ORDER BY category_id) AS num
     , category.category_id
  FROM category
 LIMIT 3;
+-----+-------------+
| num | category_id |
+-----+-------------+
|   1 | ACTUAL      |
|   2 | ADJUSTMENT  |
|   3 | BUDGET      |
+-----+-------------+
ROW_NUMBER() per PARTITION

SELECT ROW_NUMBER() OVER (PARTITION BY store_type ORDER BY SUM(sf.store_sales) DESC) AS Nbr
     , s.store_type AS "Store Type", s.store_city AS City, SUM(sf.store_sales) AS Sales
  FROM store AS s
  JOIN sales_fact AS sf ON sf.store_id = s.store_id
 GROUP BY s.store_type, s.store_city
 ORDER BY s.store_type, Nbr;
+-----+---------------------+---------------+------------+
| Nbr | Store Type          | City          | Sales      |
+-----+---------------------+---------------+------------+
|   1 | Deluxe Supermarket  | Salem         | 1091274.68 |
|   2 | Deluxe Supermarket  | Tacoma        |  993823.44 |
|   3 | Deluxe Supermarket  | Hidalgo       |  557076.84 |
|   4 | Deluxe Supermarket  | Merida        |  548297.64 |
|   5 | Deluxe Supermarket  | Vancouver     |  534180.96 |
|   6 | Deluxe Supermarket  | San Andres    |  518044.80 |
|   1 | Gourmet Supermarket | Beverly Hills |  619013.24 |
|   2 | Gourmet Supermarket | Camacho       |  357772.88 |
|   1 | Mid-Size Grocery    | Yakima        |  304590.92 |
|   2 | Mid-Size Grocery    | Mexico City   |  166503.48 |
|   3 | Mid-Size Grocery    | Victoria      |  144827.48 |
|   4 | Mid-Size Grocery    | Hidalgo       |  144272.84 |
+-----+---------------------+---------------+------------+
Function RANK()

Ranking of top 10 salaries

SELECT full_name AS Name, salary AS Salary
     , RANK() OVER(ORDER BY salary DESC) AS Rank
  FROM employee
 ORDER BY salary DESC
 LIMIT 10;
+-----------------+----------+------+
| Name            | Salary   | Rank |
+-----------------+----------+------+
| Sheri Nowmer    | 80000.00 |    1 |
| Darren Stanz    | 50000.00 |    2 |
| Donna Arnold    | 45000.00 |    3 |
| Derrick Whelply | 40000.00 |    4 |
| Michael Spence  | 40000.00 |    4 |
| Maya Gutierrez  | 35000.00 |    6 |
| Pedro Castillo  | 35000.00 |    6 |
| Laurie Borges   | 35000.00 |    6 |
| Beverly Baker   | 30000.00 |    9 |
| Roberta Damstra | 25000.00 |   10 |
+-----------------+----------+------+
Function DENSE_RANK()

SELECT full_name AS Name, salary AS Salary
     , DENSE_RANK() OVER(ORDER BY salary DESC) AS Rank
  FROM employee
 ORDER BY salary DESC
 LIMIT 10;
+-----------------+----------+------+
| Name            | Salary   | Rank |
+-----------------+----------+------+
| Sheri Nowmer    | 80000.00 |    1 |
| Darren Stanz    | 50000.00 |    2 |
| Donna Arnold    | 45000.00 |    3 |
| Derrick Whelply | 40000.00 |    4 |
| Michael Spence  | 40000.00 |    4 |
| Maya Gutierrez  | 35000.00 |    5 |
| Pedro Castillo  | 35000.00 |    5 |
| Laurie Borges   | 35000.00 |    5 |
| Beverly Baker   | 30000.00 |    6 |
| Roberta Damstra | 25000.00 |    7 |
+-----------------+----------+------+
Aggregation Windows

SELECT full_name AS Name, salary AS Salary
     , SUM(salary) OVER(ORDER BY salary DESC) AS "Sum sal"
  FROM employee
 ORDER BY salary DESC
 LIMIT 10;
+-----------------+----------+-----------+
| Name            | Salary   | Sum sal   |
+-----------------+----------+-----------+
| Sheri Nowmer    | 80000.00 |  80000.00 |
| Darren Stanz    | 50000.00 | 130000.00 |
| Donna Arnold    | 45000.00 | 175000.00 |
| Derrick Whelply | 40000.00 | 255000.00 |
| Michael Spence  | 40000.00 | 255000.00 |
| Laurie Borges   | 35000.00 | 360000.00 |
| Maya Gutierrez  | 35000.00 | 360000.00 |
| Pedro Castillo  | 35000.00 | 360000.00 |
| Beverly Baker   | 30000.00 | 390000.00 |
| Roberta Damstra | 25000.00 | 415000.00 |
+-----------------+----------+-----------+
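Note that the two employees with salary 40000.00 get the same running total of 255000.00: with only an ORDER BY in the OVER clause, the default frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so peer rows (ties) are aggregated together. A minimal sketch, assuming you want a strict row-by-row running total instead:

SELECT full_name AS Name, salary AS Salary
     , SUM(salary) OVER (ORDER BY salary DESC
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS "Sum sal"
  FROM employee
 ORDER BY salary DESC
 LIMIT 10;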
Function CUME_DIST() and PERCENT_RANK()

SELECT s.store_state AS State, s.store_city AS City, SUM(e.salary) AS Salary
     , CUME_DIST() OVER (PARTITION BY State ORDER BY Salary) AS CumeDist
     , PERCENT_RANK() OVER (PARTITION BY State ORDER BY Salary) AS PctRank
  FROM employee AS e
  JOIN store AS s ON s.store_id = e.store_id
 WHERE s.store_country = 'USA'
 GROUP BY s.store_name
 ORDER BY s.store_state, Salary DESC;
+-------+---------------+-----------+--------------+--------------+
| State | City          | Salary    | CumeDist     | PctRank      |
+-------+---------------+-----------+--------------+--------------+
| CA    | Alameda       | 537000.00 | 1.0000000000 | 1.0000000000 |
| CA    | Los Angeles   | 221200.00 | 0.8000000000 | 0.7500000000 |
| CA    | San Diego     | 220200.00 | 0.6000000000 | 0.5000000000 |
| CA    | Beverly Hills | 191800.00 | 0.4000000000 | 0.2500000000 |
| CA    | San Francisco |  30520.00 | 0.2000000000 | 0.0000000000 |
| OR    | Salem         | 260220.00 | 1.0000000000 | 1.0000000000 |
| OR    | Portland      | 221200.00 | 0.5000000000 | 0.0000000000 |
| WA    | Tacoma        | 260220.00 | 1.0000000000 | 1.0000000000 |
| WA    | Spokane       | 223200.00 | 0.8571428571 | 0.8333333333 |
| WA    | Bremerton     | 221200.00 | 0.7142857143 | 0.6666666667 |
| WA    | Seattle       | 220200.00 | 0.5714285714 | 0.5000000000 |
| WA    | Yakima        |  74060.00 | 0.4285714286 | 0.3333333333 |
| WA    | Bellingham    |  23220.00 | 0.2857142857 | 0.1666666667 |
| WA    | Walla Walla   |  21320.00 | 0.1428571429 | 0.0000000000 |
+-------+---------------+-----------+--------------+--------------+
Function NTILE()

SELECT promotion_name, media_type
     , TO_DAYS(end_date) - TO_DAYS(start_date) AS Duration
     , NTILE(4) OVER (PARTITION BY promotion_name ORDER BY Duration) AS quartile
     , NTILE(5) OVER (PARTITION BY promotion_name ORDER BY Duration) AS quintile
     , NTILE(100) OVER (PARTITION BY promotion_name ORDER BY Duration) AS percentile
  FROM promotion
 WHERE promotion_name = 'Weekend Markdown'
 LIMIT 10;
+------------------+-------------------------+----------+----------+----------+------------+
| promotion_name   | media_type              | Duration | quartile | quintile | percentile |
+------------------+-------------------------+----------+----------+----------+------------+
| Weekend Markdown | In-Store Coupon         |        2 |        1 |        1 |          9 |
| Weekend Markdown | Daily Paper             |        3 |        3 |        4 |         29 |
| Weekend Markdown | Radio                   |        3 |        4 |        4 |         36 |
| Weekend Markdown | Daily Paper, Radio      |        2 |        2 |        2 |         13 |
| Weekend Markdown | Daily Paper, Radio, TV  |        2 |        2 |        3 |         20 |
| Weekend Markdown | TV                      |        2 |        3 |        3 |         26 |
| Weekend Markdown | Sunday Paper            |        3 |        3 |        4 |         28 |
| Weekend Markdown | Daily Paper, Radio, TV  |        3 |        3 |        4 |         34 |
| Weekend Markdown | Daily Paper             |        2 |        1 |        2 |         10 |
| Weekend Markdown | Street Handout          |        2 |        2 |        2 |         18 |
| Weekend Markdown | Bulk Mail               |        3 |        4 |        5 |         37 |
| Weekend Markdown | Cash Register Handout   |        2 |        2 |        2 |         14 |
| Weekend Markdown | Daily Paper, Radio, TV  |        3 |        3 |        4 |         31 |
| Weekend Markdown | Sunday Paper            |        2 |        3 |        3 |         27 |
| Weekend Markdown | Sunday Paper, Radio, TV |        1 |        1 |        1 |          4 |
+------------------+-------------------------+----------+----------+----------+------------+

Galera Cache sizing

Shinguz - Mon, 2016-04-04 22:03

To synchronize the data between the cluster and a new or re-entering node, Galera Cluster uses 2 different mechanisms:

  • For full synchronization of data: State Snapshot Transfer (SST).
  • For delta synchronization of data: Incremental State Transfer (IST).

The Incremental State Transfer (IST) is relevant when a node is already known to the Galera Cluster and just left the cluster a short time ago. This typically happens in a maintenance window during a rolling cluster restart.

The Galera Cache is a round-robin, file-based cache that keeps all the write-sets (= transactions plus meta data) for a certain amount of time. This time, which should be bigger than your planned maintenance window, depends on the size of the Galera Cache (default 128 Mbyte) and the traffic that happens during your maintenance window.

If your traffic is bigger than what the Galera Cache can keep, Galera Cluster will fall back from IST to SST, which is a very expensive operation for big databases.
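The Galera Cache size itself is configured through the wsrep provider options. A minimal sketch, with 2G being just an example value:

[mysqld]
wsrep_provider_options = "gcache.size=2G"

Keep in mind that all wsrep_provider_options have to be given in one single string.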

The necessary size of the Galera Cache can be calculated from the delta of the sum of the following 2 Galera status variables, taken before and after the maintenance window:

Galera Cache size = delta(wsrep_replicated_bytes + wsrep_received_bytes)

Ideally you determine these values before your change happens, in a time window where you have roughly the same traffic as during your maintenance window.

If you do not have a Galera Cluster in place yet, or if you do not have those values available, you can also use the amount of traffic written to the binary log or the amount of traffic written to the InnoDB transaction log (Innodb_os_log_written).
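A minimal sketch of how these inputs can be sampled; run it once before and once after a representative time window and compute the differences yourself:

-- Galera write-set traffic:
SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_replicated_bytes', 'wsrep_received_bytes');

-- InnoDB transaction log traffic:
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';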

As a rough estimate we have evaluated the following formulas for you:

Binary Log Traffic x 1.3 = Wsrep traffic (+/- 10%)

or

InnoDB Log File traffic x 0.6 = Wsrep traffic (+/- 10%)

Max_used_connections per user/account

Shinguz - Thu, 2015-07-30 23:34

How many connections can be opened concurrently against a MySQL or MariaDB database is configurable and can be checked with the following command:

SHOW GLOBAL VARIABLES LIKE 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 505   |
+-----------------+-------+

Whether this limit was ever reached in the past can be checked with:

SHOW GLOBAL STATUS LIKE 'max_use%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections | 23    |
+----------------------+-------+

But on MySQL instances with many different applications (= databases/schemas), and thus many different users, it is a bit more complicated to find out which of these users have connected how many times concurrently. The global limit of how many concurrent connections one single user may have open can be checked with:

SHOW GLOBAL VARIABLES LIKE 'max_user_connections';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| max_user_connections | 500   |
+----------------------+-------+

Further we can limit one specific user with:

GRANT USAGE ON *.* TO 'repl'@'%' WITH MAX_CONNECTIONS_PER_HOUR 100 MAX_USER_CONNECTIONS 10;

and check with:

SELECT User, Host, max_connections, max_user_connections
  FROM mysql.user;
+------+---------------+-----------------+----------------------+
| User | Host          | max_connections | max_user_connections |
+------+---------------+-----------------+----------------------+
| root | localhost     |               0 |                    0 |
| repl | %             |             100 |                   10 |
| repl | 192.168.1.139 |               0 |                    0 |
+------+---------------+-----------------+----------------------+

But we currently have no way to check whether this limit was reached or nearly reached in the past...

A feature request for this was opened at MySQL with bug #77888.

Solution

If you cannot wait for the implementation, here is a little workaround:

DROP TABLE IF EXISTS mysql.`max_used_connections`;

CREATE TABLE mysql.`max_used_connections` (
  `USER` char(16) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
  `HOST` char(60) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
  `MAX_USED_CONNECTIONS` bigint(20) NOT NULL,
  PRIMARY KEY (`USER`, `HOST`) USING HASH
) ENGINE=MEMORY DEFAULT CHARSET=utf8;

DROP EVENT IF EXISTS mysql.gather_max_used_connections;

-- event_scheduler = on
CREATE DEFINER=root@localhost EVENT mysql.gather_max_used_connections
ON SCHEDULE EVERY 10 SECOND
DO
  INSERT INTO mysql.max_used_connections
  SELECT user, host, current_connections
    FROM performance_schema.accounts
   WHERE user IS NOT NULL
     AND host IS NOT NULL
  ON DUPLICATE KEY UPDATE
    max_used_connections = IF(current_connections > max_used_connections, current_connections, max_used_connections)
;

SELECT * FROM mysql.max_used_connections;
+--------+-----------+----------------------+
| USER   | HOST      | MAX_USED_CONNECTIONS |
+--------+-----------+----------------------+
| root   | localhost |                    4 |
| zabbix | localhost |                   21 |
+--------+-----------+----------------------+
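Keep in mind that the event only fires if the event scheduler is running. A quick check and fix (not part of the original workaround):

SHOW GLOBAL VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = ON;   -- persist it with event_scheduler = on in my.cnf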

Caution: Because we use a MEMORY table, those values are reset at every MySQL restart (as happens with the PERFORMANCE_SCHEMA or the INFORMATION_SCHEMA).

FromDual Backup Manager for MySQL 1.2.2 has been released

Shinguz - Tue, 2015-06-23 11:33

FromDual has the pleasure to announce the release of the new version 1.2.2 of the popular Backup Manager for MySQL and MariaDB (fromdual_bman).

You can download the FromDual Backup Manager from here.

In the inconceivable case that you find a bug in the Backup Manager please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.2.x to 1.2.2

# cd ${HOME}/product
# tar xf /download/fromdual_brman-1.2.2.tar.gz
# rm -f fromdual_brman
# ln -s fromdual_brman-1.2.2 fromdual_brman
Changes in FromDual Backup Manager 1.2.2

FromDual Backup Manager

It mainly contains fixes for the brman catalog and physical backups.

You can verify your current FromDual Backup Manager version with the following command:

fromdual_bman --version
  • Archiving with physical backup bug fixed.
  • Connect replaced by OO style and error exit fixed.
  • Create catalog fixed.
  • Archivedir without archive option does not make sense.

We are looking for you: MySQL/MariaDB DBA for FromDual Support

Shinguz - Mon, 2015-06-22 13:51

Who are we?

FromDual is the leading independent consulting and services company for MySQL, Galera Cluster, MariaDB and Percona Server in Europe, headquartered in Switzerland.

Our customers come mainly from Europe and range from small start-ups to European top-500 companies. From us they receive support for database problems, hands-on help as remote DBAs, training for their DBAs and developers, and consulting on architecture and design decisions. In addition, we develop tools around MySQL, write blog articles and give talks at conferences.

Since our high-quality services attract more and more customers, we need colleagues (m/f) who want to grow, on their own and together with us.

Job description

We are looking for German-speaking staff (her or him) at junior or senior level for full-time services around MySQL (mainly support and remote-DBA work). Primarily you should make sure that the business-critical MySQL databases of our customers run like clockwork, and if they do not, get them running again quickly...


Our ideal candidate

  • has experience operating critical and highly available production databases, mainly on Linux,
  • knows replication in all its variations from daily work,
  • knows how the most widespread MySQL HA set-ups work and how to repair them efficiently when a problem occurs,
  • is rock-solid in SQL,
  • brings experience with Galera Cluster,
  • can script in Bash and write simple programs in at least one widespread programming/scripting language (PHP, Bash, ...).

We are looking for reinforcements who are on the way to this ideal, starting from solid foundations.


What we expect from you:

  • knowledge of MySQL, Percona Server or MariaDB, or the willingness to acquire it
  • knowing how to operate critical database systems
  • an understanding of what can go wrong when operating databases
  • an independent way of working (remotely), communicating via IRC, Skype, mail and telephone
  • knowledge of the Linux system

DBA or DevOps experience, for example, would be a good professional basis.


You appreciate direct contact with customers, have a good feeling for their problems, can listen and quickly find the actual problems. You are used to acting proactively before something happens and to leading the customer back onto the right path.


To get your work done, you work in a European time zone. You can arrange your working hours flexibly, according to the operational situation. We expect you to contribute your share to the on-call duty. FromDual will presumably not have office space at your place of residence. A relocation is not necessary, however: we enable you to work from home, or we support you in finding suitable work premises near you. Good written and spoken English skills are required.

What we offer you:
  • A salary appropriate to your performance.
  • The opportunity to develop into a top MySQL database specialist.
  • Independent work.
  • Taking responsibility for your own projects and customers.
  • Good camaraderie in the team, as well as a relaxed and pleasant working atmosphere.
  • Job-related training opportunities.
  • Participation in Open Source events.
  • Working from your preferred place of residence.

You should be able to work, think and act independently most of the time and to acquire new knowledge on your own (through web searches, the MySQL documentation, trying things out, etc.). Should you get stuck nevertheless, your colleagues at FromDual will be happy to help you.


If you need someone to hold your hand all the time, FromDual is not the right choice.


What happens next

If you are interested in this opportunity and think that you are the right candidate, we would be happy to hear from you. We know that nobody fits this job description 100%!


Please send your unembellished CV with your salary expectations to jobs@fromdual.com. If you would like to learn more about this position or speak with me personally, please call me at +41 79 830 09 33 (Oli Sennhauser, CTO). Applicants only, please, NO headhunters!


After you have sent us your CV, you may prove your skills in a small MySQL test. After passing the test, we will invite you to the final interviews.

Controlling worldwide manufacturing plants with MySQL

Shinguz - Thu, 2015-05-14 21:43

A MySQL customer of FromDual has different manufacturing plants spread across the globe. They are operated by local companies. FromDual's customer wants to maintain the manufacturing recipes centralized in a MySQL database in the Head Quarter in Europe. Each manufacturing plant should only see its own specific data.

Manufacturing log information should be reported back to the European Head Quarter MySQL database.

The process was designed as follows:

Preparation of Proof of Concept (PoC)

To simulate all cases we need different schemas: some that should be replicated and some that should NOT be replicated:

CREATE DATABASE finance;

CREATE TABLE finance.accounting (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(255) DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `data_rename` (`data`)
);

CREATE DATABASE crm;

CREATE TABLE crm.customer (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(255) DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `data_rename` (`data`)
);

CREATE DATABASE erp;

-- Avoid specifying a Storage Engine here!!!
CREATE TABLE erp.manufacturing_data (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT
, manufacture_plant VARCHAR(32)
, manufacture_info VARCHAR(255)
, PRIMARY KEY (id)
, KEY (manufacture_plant)
);

CREATE TABLE erp.manufacturing_log (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT
, manufacture_plant VARCHAR(32)
, log_data VARCHAR(255)
, PRIMARY KEY (id)
, KEY (manufacture_plant)
);
MySQL replication architecture

Before you start with such complicated MySQL set-ups it is recommended to make a little sketch of what you want to build.

Preparing the Production Master database (Prod M1)

To make use of all the new and cool features of MySQL we used the new GTID replication. First we set up a Master (Prod M1) and its fail-over system (Prod M2) in the customer's Head Quarter:

# /etc/my.cnf

[mysqld]
binlog_format            = row         # optional
log_bin                  = binary-log  # mandatory, also on Slave!
log_slave_updates        = on          # mandatory
gtid_mode                = on          # mandatory
enforce_gtid_consistency = on          # mandatory
server-id                = 39          # mandatory

This step requires a system restart (one minute downtime).

Preparing the Production Master standby database (Prod M2)

On Master (Prod M1):

GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.1.%' IDENTIFIED BY 'secret';

mysqldump -u root --set-gtid-purged=on --master-data=2 --all-databases --triggers --routines --events > /tmp/full_dump.sql

On Slave (Prod M2):

CHANGE MASTER TO MASTER_HOST='192.168.1.39', MASTER_PORT=3306
, MASTER_USER='replication', MASTER_PASSWORD='secret'
, MASTER_AUTO_POSITION=1;

RESET MASTER;   -- On SLAVE!

system mysql -u root < /tmp/full_dump.sql

START SLAVE;

To make it easier for a Slave to connect to its Master, we set a VIP in front of those 2 database servers (VIP Prod). This VIP should be used by all applications in the Head Quarter and also by the filter engines.

Set-up filter engines (Filter BR and Filter CN)

To make sure every manufacturing plant sees only the data it is allowed to see we need a filtering engine between the production site and the manufacturing plant (Filter BR and Filter CN).

To keep this filter engine lean we use a MySQL instance with all tables converted to the Blackhole Storage Engine:

# /etc/my.cnf

[mysqld]
binlog_format            = row         # optional
log_bin                  = binary-log  # mandatory, also on Slave!
log_slave_updates        = on          # mandatory
gtid_mode                = on          # mandatory
enforce_gtid_consistency = on          # mandatory
server-id                = 36          # mandatory
default_storage_engine   = blackhole

On the production master (Prod M1) we get the data as follows:

mysqldump -u root --set-gtid-purged=on --master-data=2 --triggers --routines --events --no-data --databases erp > /tmp/erp_dump_nd.sql

The Filter Engines (Filter BR and Filter CN) are set up as follows:

-- Here we can use the VIP!
CHANGE MASTER TO master_host='192.168.1.33', master_port=3306
, master_user='replication', master_password='secret'
, master_auto_position=1;

RESET MASTER;   -- On SLAVE!

system cat /tmp/erp_dump_nd.sql | sed 's/ ENGINE=[a-zA-Z]*/ ENGINE=blackhole/' | mysql -u root

START SLAVE;

Do not forget to also create the replication user on the filter engines:

GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.1.%' IDENTIFIED BY 'secret';
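To verify that the engine conversion above worked, a quick check on the filter engine (a sketch, not from the original set-up): both tables should report the BLACKHOLE engine:

SELECT TABLE_NAME, ENGINE
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'erp';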
Filtering out all non-ERP schemata

We only want the erp schema to be replicated to the manufacturing plants, not the crm or the finance application. We achieve this with the following options on the filter engines:

# /etc/my.cnf

[mysqld]
replicate_do_db        = erp
replicate_ignore_table = erp.manufacturing_log
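After restarting the filter engine, the filter rules can be double-checked in the slave status (a quick sanity check; the two fields shown are the relevant ones):

SHOW SLAVE STATUS\G
-- Replicate_Do_DB:        erp
-- Replicate_Ignore_Table: erp.manufacturing_log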
MySQL row filtering

To achieve row filtering we use TRIGGERS. Make sure they are not replicated further down the hierarchy:

SET SESSION SQL_LOG_BIN = 0;

use erp
DROP TRIGGER IF EXISTS filter_row;

delimiter //

CREATE TRIGGER filter_row BEFORE INSERT ON manufacturing_data
FOR EACH ROW
BEGIN
  IF ( NEW.manufacture_plant != 'China' ) THEN
    SIGNAL SQLSTATE '45000'
    SET MESSAGE_TEXT      = 'Row was filtered out.'
      , CLASS_ORIGIN      = 'FromDual filter trigger'
      , SUBCLASS_ORIGIN   = 'filter_row'
      , CONSTRAINT_SCHEMA = 'erp'
      , CONSTRAINT_NAME   = 'filter_row'
      , SCHEMA_NAME       = 'erp'
      , TABLE_NAME        = 'manufacturing_data'
      , COLUMN_NAME       = ''
      , MYSQL_ERRNO       = 1644
    ;
  END IF;
END;
//

delimiter ;

SET SESSION SQL_LOG_BIN = 1;

Up to now this would cause replication to stop for every filtered row. To avoid this we tell the filtering Slaves to skip this error number:

# /etc/my.cnf

[mysqld]
slave_skip_errors = 1644
Attaching production manufacturing Slaves (Man BR M1 and Man CN M1)

When we have finished everything on the Head Quarter site, we can start with the manufacturing sites (BR and CN):

On Master (Prod M1):

mysqldump -u root --set-gtid-purged=on --master-data=2 --triggers --routines --events --where='manufacture_plant="Brazil"' --databases erp > /tmp/erp_dump_br.sql

mysqldump -u root --set-gtid-purged=on --master-data=2 --triggers --routines --events --where='manufacture_plant="China"' --databases erp > /tmp/erp_dump_cn.sql

On the Manufacturing Masters (Man BR M1 and Man CN M1) we do NOT use a VIP, because we think a Blackhole Storage Engine master is robust enough:

CHANGE MASTER TO master_host='192.168.1.43', master_port=3306
, master_user='replication', master_password='secret'
, master_auto_position=1;

RESET MASTER;   -- On SLAVE!

system cat /tmp/erp_dump_br.sql | mysql -u root

START SLAVE;

The standby manufacturing databases (Man BR M2 and Man CN M2) are created in the same way as the production manufacturing databases on the masters.

Testing replication from HQ to manufacturing plants

First we make sure that crm and finance are not replicated out, and that replication does not stop either (on Prod M1):

INSERT INTO finance.accounting VALUES (NULL, 'test data over VIP', NULL);
INSERT INTO finance.accounting VALUES (NULL, 'test data over VIP', NULL);
INSERT INTO crm.customer VALUES (NULL, 'test data over VIP', NULL);
INSERT INTO crm.customer VALUES (NULL, 'test data over VIP', NULL);

UPDATE finance.accounting SET data = 'Changed data';
UPDATE crm.customer SET data = 'Changed data';

DELETE FROM finance.accounting WHERE id = 1;
DELETE FROM crm.customer WHERE id = 1;

SELECT * FROM finance.accounting;
SELECT * FROM crm.customer;

SHOW SLAVE STATUS\G

The schema filter seems to work correctly. Then we check whether the row filter works correctly as well. For this we have to run the queries with statement based replication (SBR)! Otherwise the trigger would not fire:

use mysql

INSERT INTO erp.manufacturing_data VALUES (NULL, 'China',  'Highly secret manufacturing info as RBR.');
INSERT INTO erp.manufacturing_data VALUES (NULL, 'Brazil', 'Highly secret manufacturing info as RBR.');

-- This needs SUPER privilege... :-(
SET SESSION binlog_format = STATEMENT;

-- Caution: those rows will NOT be replicated!!!
-- See filter rules for SBR
INSERT INTO erp.manufacturing_data VALUES (NULL, 'China',  'Highly secret manufacturing info as SBR lost.');
INSERT INTO erp.manufacturing_data VALUES (NULL, 'Brazil', 'Highly secret manufacturing info as SBR lost.');

use erp

INSERT INTO manufacturing_data VALUES (NULL, 'China',       'Highly secret manufacturing info as SBR.');
INSERT INTO manufacturing_data VALUES (NULL, 'Brazil',      'Highly secret manufacturing info as SBR.');
INSERT INTO manufacturing_data VALUES (NULL, 'Germany',     'Highly secret manufacturing info as SBR.');
INSERT INTO manufacturing_data VALUES (NULL, 'Switzerland', 'Highly secret manufacturing info as SBR.');

SET SESSION binlog_format = ROW;

SELECT * FROM erp.manufacturing_data;
Production data back to head quarter

Now we have to take care of the production data on its way back to the HQ. To achieve this we use the new MySQL 5.7 feature called multi-source replication. For multi-source replication the replication repositories must be kept in tables instead of files:

# /etc/my.cnf

[mysqld]
master_info_repository    = TABLE  # mandatory
relay_log_info_repository = TABLE  # mandatory

Then we have to configure 2 replication channels on Prod M1, one to each manufacturing master, over the VIPs (VIP BR and VIP CN):

CHANGE MASTER TO MASTER_HOST='192.168.1.98', MASTER_PORT=3306
, MASTER_USER='replication', MASTER_PASSWORD='secret'
, MASTER_AUTO_POSITION=1
FOR CHANNEL "manu_br";

CHANGE MASTER TO MASTER_HOST='192.168.1.99', MASTER_PORT=3306
, MASTER_USER='replication', MASTER_PASSWORD='secret'
, MASTER_AUTO_POSITION=1
FOR CHANNEL "manu_cn";

START SLAVE FOR CHANNEL 'manu_br';
START SLAVE FOR CHANNEL 'manu_cn';

SHOW SLAVE STATUS FOR CHANNEL 'manu_br'\G
SHOW SLAVE STATUS FOR CHANNEL 'manu_cn'\G

Avoid configuring and activating the channels on Prod M2 as well.

Testing back replication from manufacturing plants

Brazil on Man BR M1:

INSERT INTO manufacturing_log VALUES (1, 'Production data from Brazil', 'data');

China on Man CN M1:

INSERT INTO manufacturing_log VALUES (2, 'Production data from China', 'data');

For testing:

SELECT * FROM manufacturing_log;

Make sure you do not run into conflicts (Primary Key, AUTO_INCREMENT); one common countermeasure is sketched below. Make sure filtering is defined correctly!
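A minimal sketch of the usual countermeasure against AUTO_INCREMENT collisions between two writing sites, using the standard MySQL interleaving variables (the values are examples; set them persistently in my.cnf):

-- On Man BR M1:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 1;

-- On Man CN M1:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;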

To check the different channel states you can use the following command:

SHOW SLAVE STATUS\G

or

SELECT ras.channel_name, ras.service_state AS 'SQL_thread', ras.remaining_delay
     , CONCAT(user, '@', host, ':', port) AS user
     , rcs.service_state AS IO_thread
     , REPLACE(received_transaction_set, '\n', '') AS received_transaction_set
  FROM performance_schema.replication_applier_status AS ras
  JOIN performance_schema.replication_connection_configuration AS rcc ON rcc.channel_name = ras.channel_name
  JOIN performance_schema.replication_connection_status AS rcs ON ras.channel_name = rcs.channel_name
;
Troubleshooting

Inject empty transaction

If you try to skip a transaction as you did earlier (SQL_SLAVE_SKIP_COUNTER) you will face some problems:

STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
ERROR 1858 (HY000): sql_slave_skip_counter can not be set when the server is running with @@GLOBAL.GTID_MODE = ON. Instead, for each transaction that you want to skip, generate an empty transaction with the same GTID as the transaction

To skip the next transaction you have to find the ones applied so far:

SHOW SLAVE STATUS\G
...
Executed_Gtid_Set: c3611091-f80e-11e4-99bc-28d2445cb2e9:1-20

Then tell MySQL to skip this transaction by injecting a new empty transaction:

SET SESSION GTID_NEXT='c3611091-f80e-11e4-99bc-28d2445cb2e9:21';
BEGIN;
COMMIT;
SET SESSION GTID_NEXT='AUTOMATIC';

SHOW SLAVE STATUS\G
...
Executed_Gtid_Set: c3611091-f80e-11e4-99bc-28d2445cb2e9:1-21

START SLAVE;
Revert from GTID-based replication to file/position-based replication

If you want to fall back from MySQL GTID-based replication to file/position-based replication, this is quite simple:

CHANGE MASTER TO MASTER_AUTO_POSITION = 0;
MySQL Support and Engineering

If you need some help or support our MySQL support and engineering team is happy to help you.

Logging Galera Cluster conflicts

Shinguz - Sat, 2015-04-11 12:30

We typically suggest that our customers use our MySQL/Galera Cluster my.cnf configuration template to avoid MySQL configuration and performance problems.

And we are paranoid as well. Thus we enable all useful logging:

wsrep_log_conflicts = 1

But this also has a consequence: more visibility...

If you monitor your Galera Cluster carefully, for example with the FromDual Performance Monitor for MySQL and MariaDB, you will probably see some strange values increasing from time to time:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_%r_s';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| wsrep_local_cert_failures | 42    |
| wsrep_local_bf_aborts     | 13    |
+---------------------------+-------+

Those values are indicators that some transactions (Galera write sets) did not succeed and were aborted by Galera. In this case the paranoid logging helps to find out what exactly was aborted, and possibly whether this can or should be fixed:

150410 1:44:18 [Note] WSREP: cluster conflict due to certification failure for threads:
150410 1:44:18 [Note] WSREP: Victim thread:
   THD: 151856, mode: local, state: executing, conflict: cert failure, seqno: 30399304
   SQL: UPDATE login SET lTsexpire = UNIX_TIMESTAMP(NOW()) + lTimeout WHERE lSessionId = 'va3ta7besku82k56ncv3bnhlj5'
*** Priority TRANSACTION:
TRANSACTION 464359568, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
1 lock struct(s), heap size 360, 0 row lock(s)
MySQL thread id 4, OS thread handle 0x7f1c0916c700, query id 8190690 Update_rows_log_event::find_row(30399302)
*** Victim TRANSACTION:
TRANSACTION 464359562, ACTIVE 0 sec
mysql tables in use 1, locked 1
2 lock struct(s), heap size 360, 1 row lock(s), undo log entries 1
MySQL thread id 151856, OS thread handle 0x7f1c09091700, query id 8190614 172.20.100.11 sam_angiz query end
UPDATE login SET lTsexpire = UNIX_TIMESTAMP(now()) + lTimeout WHERE lSessionId = 'va3ta7besku82k56ncv3bnhlj5'
*** WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 835205 page no 3 n bits 72 index `PRIMARY` of table `fromdual`.`login`
trx table locks 1 total table locks 2 trx id 464359562
lock_mode X locks rec but not gap lock hold time 0 wait time before grant 0
150410 1:44:18 [Note] WSREP: cluster conflict due to high priority abort for threads:
150410 1:44:18 [Note] WSREP: Winning thread:
   THD: 4, mode: applier, state: executing, conflict: no conflict, seqno: 30399302
   SQL: (null)
150410 1:44:18 [Note] WSREP: Victim thread:
   THD: 151856, mode: local, state: committing, conflict: no conflict, seqno: -1
   SQL: UPDATE login SET lTsexpire = UNIX_TIMESTAMP(now()) + lTimeout WHERE lSessionId = 'va3ta7besku82k56ncv3bnhlj5'

In the above Galera conflict, 2 login transactions were running at the same time. They both come with the same Session ID and want to update the expiry timestamp. Now how to solve or fix this (a sketch of the first two checks follows the list):

  • First check if this table has a Primary Key (tables without a PK cause full table scans, which can take a long time and increase the chance of conflicts).
  • Second check if there is a (UNIQUE?) index on lSessionId. A missing index leads to full table scans, which increases the chance of conflicts.
  • Third check WHY 2 logins with the same Session ID can arrive at the same time (within 1 second) on 2 different Galera nodes (Ajax requests, etc.). Try to avoid such situations.
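A minimal sketch of the first two checks; the table and column names are taken from the conflict log above, the index name is hypothetical:

SHOW CREATE TABLE fromdual.login\G   -- is there a PRIMARY KEY?
SHOW INDEX FROM fromdual.login;      -- is lSessionId indexed?

-- If lSessionId is not indexed, an index avoids the full table scan:
ALTER TABLE fromdual.login ADD UNIQUE INDEX uk_lSessionId (lSessionId);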

Galera Cluster last inactive check and VMware snapshots

Shinguz - Sat, 2015-04-11 11:46

From time to time we see at Galera Cluster customer engagements the following, to me scary, warning in the MySQL error log:

[Warning] WSREP: last inactive check more than PT1.5S ago (PT7.06159S), skipping check

We mostly see this in VMware set-ups. Some further enquiry with the Galera developers did not yield a satisfying answer:

This can be seen on bare metal as well - with poorly configured mysqld, O/S, or simply being overloaded. All it means is that this thread could not get CPU time for 7.1 seconds. You can imagine that access to resources in virtual machines is even harder (especially I/O) than on bare metal, so you will see this in virtual machines more often.

This is not a Galera specific issue (it just reports being stuck, other mysqld threads are equally stuck) so there is no configuration options for that. You simply must make sure that your system and mysqld are properly configured, that there is enough RAM (buffer pool not over provisioned), that there is swap, that there are proper I/O drivers installed on guest and so on.

Basically, Galera runs in virtual machines as well as virtual machines approximate bare metal.

We still suspected that this is somehow VMware related. This week we had the chance to investigate... At 01:36 am node Galera2 lost connection to the Cluster and became NON-PRIMARY. This is basically a bad sign:

150401 1:36:15 [Warning] WSREP: last inactive check more than PT1.5S ago (PT5.08325S), skipping check
150401 1:36:15 [Note] WSREP: (09c6b2f2, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://192.168.42.2:4567
150401 1:36:16 [Note] WSREP: view(view_id(NON_PRIM,09c6b2f2,30) memb { 09c6b2f2,0 } joined { } left { } partitioned { ce6bf2e1,0 d1f9bee0,0 })
150401 1:36:16 [Note] WSREP: view(view_id(NON_PRIM,09c6b2f2,31) memb { 09c6b2f2,0 } joined { } left { } partitioned { ce6bf2e1,0 d1f9bee0,0 })
150401 1:36:16 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
150401 1:36:16 [Note] WSREP: Flow-control interval: [16, 16]
150401 1:36:16 [Note] WSREP: Received NON-PRIMARY.
150401 1:36:16 [Note] WSREP: Shifting SYNCED -> OPEN (TO: 26304132)
150401 1:36:16 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
150401 1:36:16 [Note] WSREP: Flow-control interval: [16, 16]
150401 1:36:16 [Note] WSREP: Received NON-PRIMARY.
150401 1:36:16 [Warning] WSREP: Send action {(nil), 328, TORDERED} returned -107 (Transport endpoint is not connected)
150401 1:36:16 [Note] WSREP: New cluster view: global state: dcca768c-b5ad-11e3-bbc0-fb576fb3c451:26304132, view# -1: non-Primary, number of nodes: 1, my index: 0, protocol version 3
150401 1:36:17 [Note] WSREP: (09c6b2f2, 'tcp://0.0.0.0:4567') reconnecting to d1f9bee0 (tcp://192.168.42.1:4567), attempt 0

I suspected, after some investigation with the FromDual Performance Monitor for MySQL and MariaDB, that the database backup (mysqldump) could be the reason. It was not. But the customer explained that after the database backup they take a VMware snapshot.

And when we compared our problem with the backup log file:

2015/04/01 01:35:08 [3] backup.fromdual.com: Creating a snapshot of galera3
2015/04/01 01:35:16 [3] backup.fromdual.com: Created a snapshot of galera3
2015/04/01 01:35:23 [3] backup.fromdual.com: galera3: backup the changed blocks of disk 'Festplatte 1' using NBD transport
2015/04/01 01:36:10 [3] backup.fromdual.com: galera3: saving the Change Block Tracking's reference for disk 'Festplatte 1'
2015/04/01 01:36:10 [3] backup.fromdual.com: Removing Arkeia's snapshot of galera3

we can see that our problem started pretty much with the end of the VMware snapshot (01:36:10 + 5.08 = 01:36:15). By the way: for such kinds of investigations it is always good to have an ntp daemon for time synchronization running. Otherwise problem investigation becomes much harder...

Some more and deeper investigation shows that from time to time we lose nodes during VMware snapshots (galera3). But they recover quickly because they can do an IST. In the worst case we can lose 2 nodes, and then the whole Galera Cluster is gone.

192.168.42.3 / node Galera3

2015-04-10 01:44:00 [3] backup.fromdual.com: Creating a snapshot of galera3
2015-04-10 01:44:08 [3] backup.fromdual.com: Created a snapshot of galera3
2015-04-10 01:44:15 [3] backup.fromdual.com: galera3: backup the changed blocks of disk 'Festplatte 1' using NBD transport
2015-04-10 01:45:39 [3] backup.fromdual.com: galera3: saving the Change Block Tracking's reference for disk 'Festplatte 1'
2015-04-10 01:45:39 [3] backup.fromdual.com: Removing Arkeia's snapshot of galera3
150410 1:44:07 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera1:4567 tcp://galera2:4567
150410 1:44:07 [Warning] WSREP: last inactive check more than PT1.5S ago (PT7.06159S), skipping check
150410 1:44:08 [Note] WSREP: Received NON-PRIMARY.
150410 1:44:10 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 30399299)
150410 1:44:11 [Warning] WSREP: Gap in state sequence. Need state transfer.
150410 1:44:11 [Note] WSREP: Prepared IST receiver, listening at: tcp://galera3:4568
150410 1:44:11 [Note] WSREP: Member 0.0 (galera3) requested state transfer from '*any*'. Selected 2.0 (galera2)(SYNCED) as donor.
150410 1:44:11 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 30399309)
150410 1:44:11 [Note] WSREP: Requesting state transfer: success, donor: 2
150410 1:44:11 [Note] WSREP: 2.0 (galera2): State transfer to 0.0 (galera3) complete.
150410 1:44:11 [Note] WSREP: Member 2.0 (galera2) synced with group.
150410 1:44:11 [Note] WSREP: Receiving IST: 8 writesets, seqnos 30399291-30399299
150410 1:44:11 [Note] WSREP: IST received: dcca768c-b5ad-11e3-bbc0-fb576fb3c451:30399299
150410 1:44:11 [Note] WSREP: 0.0 (galera3): State transfer from 2.0 (galera2) complete.
150410 1:44:11 [Note] WSREP: Shifting JOINER -> JOINED (TO: 30399309)
150410 1:44:11 [Note] WSREP: Member 0.0 (galera3) synced with group.
150410 1:44:11 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 30399309)
150410 1:44:11 [Note] WSREP: Synchronized with group, ready for connections
150410 1:44:13 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') turning message relay requesting off
150410 1:45:42 [Warning] WSREP: last inactive check more than PT1.5S ago (PT2.47388S), skipping check
150410 1:45:43 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera1:4567 tcp://galera2:4567
150410 1:45:44 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') reconnecting to 54de92f8 (tcp://galera1:4567), attempt 0
150410 1:45:44 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') reconnecting to c9d964d3 (tcp://galera2:4567), attempt 0
150410 1:45:48 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') turning message relay requesting off
150410 1:47:26 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera1:4567
150410 1:47:27 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') reconnecting to 54de92f8 (tcp://galera1:4567), attempt 0
150410 1:47:31 [Note] WSREP: (158f71de, 'tcp://0.0.0.0:4567') turning message relay requesting off
192.168.42.1 / node Galera1

2015-04-10 01:47:24 [3] backup.fromdual.com: Creating a snapshot of galera1
2015-04-10 01:47:29 [3] backup.fromdual.com: Created a snapshot of galera1
2015-04-10 01:47:40 [3] backup.fromdual.com: galera1: backup the changed blocks of disk 'Festplatte 1' using NBD transport
2015-04-10 01:48:43 [3] backup.fromdual.com: galera1: saving the Change Block Tracking's reference for disk 'Festplatte 1'
2015-04-10 01:48:44 [3] backup.fromdual.com: Removing Arkeia's snapshot of galera1

150410 1:44:02 [Note] WSREP: (54de92f8, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera3:4567
150410 1:44:04 [Note] WSREP: (54de92f8, 'tcp://0.0.0.0:4567') reconnecting to 158f71de (tcp://galera3:4567), attempt 0
150410 1:44:12 [Note] WSREP: Member 0.0 (galera3) requested state transfer from '*any*'. Selected 2.0 (galera2)(SYNCED) as donor.
150410 1:45:43 [Note] WSREP: (54de92f8, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera3:4567
150410 1:45:44 [Note] WSREP: (54de92f8, 'tcp://0.0.0.0:4567') reconnecting to 158f71de (tcp://galera3:4567), attempt 0
150410 1:45:48 [Note] WSREP: (54de92f8, 'tcp://0.0.0.0:4567') turning message relay requesting off
150410 1:47:27 [Warning] WSREP: last inactive check more than PT1.5S ago (PT3.66452S), skipping check
150410 1:47:27 [Note] WSREP: (54de92f8, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera3:4567
150410 1:47:30 [Note] WSREP: (54de92f8, 'tcp://0.0.0.0:4567') turning message relay requesting off
192.168.42.2 / node Galera2

2015-04-10 02:09:55 [3] backup.fromdual.com: Creating a snapshot of galera2
2015-04-10 02:09:58 [3] backup.fromdual.com: Created a snapshot of galera2
2015-04-10 02:10:05 [3] backup.fromdual.com: galera2: backup the changed blocks of disk 'Festplatte 1' using NBD transport
2015-04-10 02:10:53 [3] backup.fromdual.com: galera2: saving the Change Block Tracking's reference for disk 'Festplatte 1'
2015-04-10 02:10:54 [3] backup.fromdual.com: Removing Arkeia's snapshot of galera2
150410 1:44:02 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera3:4567
150410 1:44:03 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') reconnecting to 158f71de (tcp://galera3:4567), attempt 0
150410 1:44:08 [Warning] WSREP: discarding established (time wait) 158f71de (tcp://192.168.42.3:4567)
150410 1:44:11 [Note] WSREP: Member 0.0 (galera3) requested state transfer from '*any*'. Selected 2.0 (galera2)(SYNCED) as donor.
150410 1:44:13 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') turning message relay requesting off
150410 1:45:43 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera3:4567
150410 1:45:44 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') reconnecting to 158f71de (tcp://galera3:4567), attempt 0
150410 1:45:48 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') turning message relay requesting off
150410 1:47:26 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://galera1:4567
150410 1:47:27 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') reconnecting to 54de92f8 (tcp://galera1:4567), attempt 0
150410 1:47:30 [Note] WSREP: (c9d964d3, 'tcp://0.0.0.0:4567') turning message relay requesting off
150410 2:09:57 [Warning] WSREP: last inactive check more than PT1.5S ago (PT1.83618S), skipping check

The backups are done with the two options VMware snapshot and Change Block Tracking (CBT) enabled, as the backup log above shows.

Possibly this is the reason, and one should disable those features in combination with Galera. Further investigation is going on. In the worst case, VMware snapshotting should be avoided in combination with Galera.

Rename MySQL Partition

Shinguz - Fri, 2015-03-06 15:00

Before I forget it and have to search again, here is a short note about how to rename a MySQL Partition:

My dream:

ALTER TABLE history RENAME PARTITION p2015_kw10 INTO p2015_kw09;
In reality:

ALTER TABLE history
REORGANIZE PARTITION p2015_kw10 INTO (
  PARTITION p2015_kw09 VALUES LESS THAN (UNIX_TIMESTAMP('2015-03-02 00:00:00'))
);
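For context, a hypothetical history table on which the statement above would work (the column name and partitioning scheme are assumptions, matching the partition names):

CREATE TABLE history (
  ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
)
PARTITION BY RANGE (UNIX_TIMESTAMP(ts)) (
  PARTITION p2015_kw10 VALUES LESS THAN (UNIX_TIMESTAMP('2015-03-02 00:00:00'))
, PARTITION pmax       VALUES LESS THAN (MAXVALUE)
);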

Caution: REORGANIZE PARTITION causes a full copy of the whole partition!

Hint: I assume it would be very easy for MySQL or MariaDB to make this DDL command an in-place operation...

MySQL Partitioning was introduced in MySQL 5.1.

MySQL Enterprise Incremental Backup simplified

Shinguz - Wed, 2015-02-25 19:41

MySQL Enterprise Backup (MEB) has the capability to make real incremental (differential and cumulative?) backups. The current releases are quite cool and you should really have a look at them...

Unfortunately the original MySQL documentation is much too complicated for my simple mind. So I did some testing and simplified it a bit for our customers...

If you want to dive into the original documentation please look here: Making an Incremental Backup .

If you want to use MySQL Enterprise Backup please let us know and we send you a quote...

Prepare MySQL Backup infrastructure

mkdir /backup/full /backup/incremental1 /backup/incremental2
Full MySQL Backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full backup
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full apply-log
First MySQL Incremental Backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --incremental --incremental-base=dir:/backup/full --incremental-backup-dir=/backup/incremental1 backup
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full --incremental-backup-dir=/backup/incremental1 apply-incremental-backup
Second MySQL Incremental Backup

mysqlbackup --defaults-file=/etc/my.cnf --user=root --incremental --incremental-base=dir:/backup/full --incremental-backup-dir=/backup/incremental2 backup
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full --incremental-backup-dir=/backup/incremental2 apply-incremental-backup

and so on...
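Both incrementals above are based on the full backup, i.e. each one contains all changes since the full backup (the cumulative style). As a sketch, a true incremental chain should also be possible by basing each backup on the previous incremental instead (only --incremental-base changes); the incrementals then have to be applied to the full backup in order:

mysqlbackup --defaults-file=/etc/my.cnf --user=root --incremental --incremental-base=dir:/backup/incremental1 --incremental-backup-dir=/backup/incremental2 backup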

MySQL Restore

mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full copy-back
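Note (not from the original article): copy-back assumes the server is stopped and the data directory is empty; afterwards the ownership of the restored files typically has to be fixed, e.g.:

chown -R mysql:mysql /var/lib/mysql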

Have fun with MySQL Enterprise Backup. If you need any help with your MySQL Backup concept, please let us know.

Creating Event Handlers with MySQL Enterprise Monitor

Shinguz - Tue, 2015-02-17 13:57

MySQL Enterprise Monitor (MEM) has by default no Event Handlers created and activated. You have to define these Event Handlers yourself according to your needs.

In this article we discuss how to create MySQL Enterprise Monitor Event Handlers with MEM v.3.0.18. For other (older) versions the steps may vary...

Task: Event Handler for maximum Connections reached

We would like to be notified by MySQL Enterprise Monitor when the number of connections comes close to max_connections.

For this we first search which Advisors are available at all: Configuration -> Advisors -> Availability.


Here we can see that we have an Advisor called Maximum Connection Limit Nearing Or Reached which is scheduled for every 5 minutes and has thresholds at 75, 85, 95 and 100%:


Now we know which Advisor should create an Event. As a next step we have to create an Event Handler which should be triggered: Configuration -> Event Handling -> Create Event Handler.


Here we can create an Event with all its needed configuration: Events -> All -> server.


If we look at the Events we can even see the detailed description and how the values for the Event are collected:


Task: Event Handler for used disk space

For this Event Handler we need the Advisor Filesystem Free Space under Operating System:


In this advisor we can configure the Threshold as well:


In the Event Handler we can define which Assets shall be monitored, for example the mountpoint /.


Local disks can only be monitored if a local MySQL Enterprise Monitor Agent is installed. An agent-less MySQL Enterprise Monitor cannot monitor local disk resources...

Have fun using the MySQL Enterprise Monitor. If you need any help in installing or configuring MEM do not hesitate to contact us.

All these functions are also implemented in the FromDual Performance Monitor for MySQL. If you want to rely on Open Source technology only, you should consider our Performance Monitor.

Nagios and Icinga plug-ins for MySQL 1.0.0 have been released

Shinguz - Wed, 2015-02-04 22:02

FromDual has the pleasure to announce the release of the new version 1.0.0 of its widely used Nagios and Icinga plug-ins for MySQL, Galera Cluster, MariaDB and Percona Server.

All plug-ins have basically been renewed and should now all work correctly.

The new Nagios/Icinga plug-ins can be downloaded here.

In the inconceivable case that you find a bug in the Nagios/Icinga plug-ins please report it to our bug tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Description of the current functionality

Details about the functionality and the usage of each plug-in can be obtained with the option --help.

The following Nagios/Icinga plug-ins for MySQL and MariaDB are currently available:

check_db_mysql.pl

This Nagios/Icinga plug-in alerts you if your MySQL database is not up and running.

check_errorlog_mysql.pl and errorLogFilterRules.pm

This Nagios/Icinga plug-in alerts you if it finds some suspicious messages in the MySQL error log.
The rules defining which messages should be ignored can be found in the file errorLogFilterRules.pm. If you want to add your own filter rules, please add them to this file as well.

check_galera_nodes.pl

This Nagios/Icinga plug-in alerts you if the actual number of nodes in your Galera Cluster is not the expected one.

check_repl_mysql_cnt_slave_hosts.pl

This Nagios/Icinga plug-in alerts you if your MySQL Slaves have not properly reported their existence to their Master with the report_host variable.

check_repl_mysql_heartbeat.pl

This Nagios/Icinga plug-in alerts you if your MySQL Slave is too many heartbeats behind its Master.

check_repl_mysql_io_thread.pl

This Nagios/Icinga plug-in alerts you if your MySQL Slave's IO thread is not up and running.

check_repl_mysql_read_exec_pos.pl

This Nagios/Icinga plug-in alerts you if your MySQL Slave's read and execution positions differ too much.

check_repl_mysql_readonly.pl

This Nagios/Icinga plug-in alerts you if your MySQL Slave is NOT set to read-only.

check_repl_mysql_seconds_behind_master.pl

This Nagios/Icinga plug-in alerts you if your MySQL Slave falls too many seconds behind its Master.

check_repl_mysql_sql_thread.pl

This Nagios/Icinga plug-in alerts you if your Slave's SQL thread is not up and running.

perf_mysql.pl

This Nagios/Icinga plug-in gathers MySQL and MariaDB performance data.

Changes in FromDual Nagios/Icinga plug-ins 1.0.0

All plug-ins
  • Usage was improved. The usage can be shown with the --help option.
  • Usage states now which GRANT privileges are needed for a specific plug-in.
  • Examples added how to use each plug-in.
  • Default socket location moved from /tmp/mysql.sock to /var/run/mysqld/mysqld.sock.
  • New host/socket convention implemented in all scripts similar to MySQL client tools.
  • -epn tag added for Icinga.
check_errorlog_mysql.pl
  • Some bugs fixed.
  • More filtering rules added.
  • Filtering rules separated into own file.
  • Entry point finding problem fixed.
check_repl_mysql_heartbeat.pl
  • Script name fixed.
check_db.pl
  • Unknown command problem with Galera Cluster caught.
  • mysqladmin ping removed and implemented in Perl.
Support and Subscription for commercial use

For subscriptions for commercial use of this software please get in contact with us.
