MySQL Cluster Setup and Replication between NDB and Non NDB Engines

Index 

1. Introduction

2. MySQL Database Cluster Details

3. Steps to set up MySQL Cluster

4. Testing the Cluster

5. Cluster Replication

6. Steps to set up Cluster Replication

Introduction

This document provides the steps to set up a MySQL database cluster. The cluster is assumed to consist of three nodes: one management node and two combined data and SQL nodes.

The document then explains the steps required to set up replication between two such independent MySQL database clusters.

MySql Database Cluster Details

MySQL Cluster is a set of one or more computers configured so that data is shared amongst all the involved nodes. The storage engine used for all the clustered tables is NDB.
The following diagram represents a typical cluster.

Diag. 1 : Typical MySql Cluster

Steps to set up MySQL Cluster

1. For the sake of simplicity, this document assumes that the cluster is set up using three database machines. The three servers are referred to as Server A, Server B and Server C. It is also assumed that no MySQL component initially exists on any of the servers.

2. Open a PuTTY (SSH) session to all three servers as 'root'.

3. Also ensure that the '/etc/hosts' file on each server contains entries for all the involved servers, for example as shown below.
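For example, assuming the servers use the host names and private IPs below (placeholders only, adjust to your network), each /etc/hosts would contain entries such as:

192.168.0.10    serverA    # management node (Server A)
192.168.0.11    serverB    # data and SQL node (Server B)
192.168.0.12    serverC    # data and SQL node (Server C)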

4. Before starting with the setup, first determine the role to be played by each server in the cluster. For the rest of this document, it is assumed that Server A acts as the management node of the cluster, whereas Servers B and C act as data and SQL nodes.

5. On Server A, run the following commands to check whether any MySQL process is running.
Shell>ps -aef | grep mysql

Shell>ps -aef | grep ndb

If any such process is running, shut it down (or kill it) before moving ahead. A running process also means that some MySQL component is already installed on the server; in that case it is better to uninstall the component and start over. The uninstallation procedure is covered in step 6 below.

6. If no process is running, the next check to be performed is whether any MySQL component is already installed on the server. Execute the following command on the shell.
Shell>rpm -qa | grep -i mysql

This command will print all the MySQL packages already installed on the server.
To remove a package (or all the installed packages), run the following command.
Shell>rpm -e <package_name>

It is always advisable to take a DB backup before removing MySQL from the system, for example as shown below.
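For example, a full dump of all databases can be taken with mysqldump before removal (the output path and file name here are only placeholders):

Shell>mysqldump -u root -p --all-databases --routines > /root/all_databases_backup.sql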

7. Once all the installed MySQL packages are removed, the server is ready for a fresh installation of the MySQL Cluster software. Download the latest MySQL Cluster packages from the MySQL site. The list of packages for version mysql-5.1.51 ndb-7.1.9 is given below.

MySQL-Cluster-com-client-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-clusterj-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-debuginfo-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-devel-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-embedded-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-extra-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-management-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-server-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-shared-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-shared-compat-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-storage-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-test-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-tools-7.1.9a-1.rhel4.i386.rpm

8. Install all the above MySQL packages except for 'MySQL-Cluster-com-server-7.1.9a-1.rhel4.i386.rpm' and 'MySQL-Cluster-com-shared-compat-7.1.9a-1.rhel4.i386.rpm'. Run the following command to install each package.

Shell>rpm -ivh <package_name>

You might need to resolve package dependencies to install the packages; one common approach is shown below.
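One way to satisfy the inter-package dependencies is to pass all the required rpm files to a single rpm command so that rpm resolves them in one transaction, for example (the exact list depends on the node being installed):

Shell>rpm -ivh MySQL-Cluster-com-client-7.1.9a-1.rhel4.i386.rpm MySQL-Cluster-com-management-7.1.9a-1.rhel4.i386.rpm MySQL-Cluster-com-tools-7.1.9a-1.rhel4.i386.rpm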

9. Once the packages are installed, go to the 'mysql' directory. This should be available under '/var/lib'. In case it is not present there, search for the directory on the system. You can use the following command to search:

Shell>find / -name "mysql"

10. On the same level as this directory, there should be another directory named ‘mysql-cluster’. Move inside ‘mysql-cluster’.

11. Create a text file named 'config.ini' in this directory. The contents of the file are given below; the meaning of each parameter is explained in the comments.

[NDBD DEFAULT]
NoOfReplicas=2    # Number of replicas of the data; in our case 2 (one copy on each data node)

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]
# Management server
SendBufferMemory=8M
ReceiveBufferMemory=8M

[NDB_MGMD]
# the IP of THIS SERVER
HostName=<Server A's IP>

[NDBD]
# the IP of the FIRST SERVER (Data Node)
HostName=<Server B's IP>
DataDir=/var/lib/mysql-cluster

[NDBD]
# the IP of the SECOND SERVER (Data Node)
HostName=<Server C's IP>
DataDir=/var/lib/mysql-cluster

[MYSQLD]
[MYSQLD]

12. Now move to Server B. Re-execute steps 5, 6 and 7 on this server (process check, removal of any existing MySQL packages, and package download).

13. Once all the packages are available, install all the packages except for 'MySQL-Cluster-com-management-7.1.9a-1.rhel4.i386.rpm' and 'MySQL-Cluster-com-shared-compat-7.1.9a-1.rhel4.i386.rpm'. You can use the same command given in step 8 to install the packages.

14. Once the packages are installed, go to the /etc directory. Create or edit the 'my.cnf' file in this directory. The content of a typical my.cnf is given below.

[mysqld]

ndbcluster                                             # run NDB engine
ndb-connectstring=<Server A's IP>        # location of MGM node

datadir=<mysql data directory on Server B>
#socket=/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
default_character_set=utf8
collation_server=utf8_unicode_ci
character_set_server=utf8
skip_character_set_client_handshake
back_log = 75
max_connections = 500
key_buffer = 1G
myisam_sort_buffer_size = 64M
join_buffer_size = 1M
read_buffer_size = 1M
sort_buffer_size = 2M
table_cache = 1800
thread_cache_size = 384
wait_timeout = 7200
connect_timeout = 10
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 64M
max_connect_errors = 1000
read_rnd_buffer_size = 524288
bulk_insert_buffer_size = 8M
query_cache_limit = 4M
query_cache_size =52428800
query_cache_type = 1
query_prealloc_size = 65536
query_alloc_block_size = 131072
default-storage-engine = MyISAM
log-slow-queries
long_query_time = 5

[mysqld_safe]
nice = -5
open_files_limit = 8192

[mysqldump]
quick
max_allowed_packet = 16M

[myisamchk]
key_buffer = 256M
sort_buffer = 64M
read_buffer = 64M
write_buffer = 64M

#options for ndbd process:
[MYSQL_CLUSTER]
ndb-connectstring=<Server A's IP>       # location of MGM node

15. Repeat steps 12, 13 and 14 for Server C.

16. This completes the cluster configuration. The cluster of the three servers A, B and C is now ready.

Testing the Cluster

1. Once the cluster is configured using the steps given above, we can move on to testing it.

2. On the Management Node (i.e. Server A), run the following command.
Shell>ndb_mgmd -f <path of config.ini file>

3. This should start the management node of the cluster. Run the following set of commands to check the status of the cluster.
Shell>ndb_mgm
— NDB Cluster — Management Client —
ndb_mgm>show

The ‘show’ command will show the status of all the nodes in the cluster. Initially this would be ‘not connected’.

4. Now move to Server B and C respectively and execute the following command on these servers.
Shell>ndbd --initial --connect-string=<Server A's IP>

5. This should start nodes B and C and join them to the cluster. Now go back to Server A and re-run the show command. It will now show the ndbd nodes as connected and started.

6. Once the nodes are started and joined to the cluster, all that remains is starting the MySQL daemon on the SQL nodes so that they can start sharing and replicating the data. To achieve this, go to Servers B and C and run the following command.
Shell>mysqld -u root &

7. Once the MySQL daemon starts gracefully, the cluster is fully functional and operating. The 'show' command on Server A will now show the mysqld (API) nodes as connected as well.

8. You can now test the cluster. Try creating a table in any database on one of the SQL nodes (Server B or Server C). Take care to use the 'NDB' engine when creating the table. The table should appear on the other node almost immediately, and all inserts, updates and deletes will also be replicated amongst the nodes. A minimal example is shown below.
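For example, a minimal test could look like the following; it assumes a database named 'test' exists on both SQL nodes, and the table and column names are only placeholders:

mysql> USE test;
mysql> CREATE TABLE cluster_test (id INT PRIMARY KEY, note VARCHAR(50)) ENGINE=NDBCLUSTER;
mysql> INSERT INTO cluster_test VALUES (1, 'replicated row');

Then, on the other SQL node:

mysql> SELECT * FROM test.cluster_test;

The row inserted on the first node should be visible on the second node.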

Cluster Replication
Cluster replication refers to a configuration in which data is shared between two independent clusters. It allows the data nodes of cluster 1 to send DB transactions to, and receive them from, the data nodes of cluster 2, so that the data on all the data nodes of both clusters is always the same.

There are various ways of implementing this. The approach we have adopted for NGMP is depicted in the following diagram.

Diag. 2 : Cluster Replication Setup

Here we have two clusters, Cluster 1 and Cluster 2, with two data and SQL nodes each. Node B of cluster 1 acts as master to node Y of cluster 2.
Similarly, node Z of cluster 2 acts as master to node C of cluster 1.
This way, any data arriving on any node reaches all the remaining nodes in both clusters.

Steps to set up Cluster Replication

1. For the sake of explanation, it is assumed that we have already set up two clusters. Each cluster has three nodes: one management node and two data and SQL nodes. Cluster 1 has Server A as its management node and Servers B and C as data and SQL nodes. Cluster 2 has Server X as its management node and Servers Y and Z as data and SQL nodes.

2. Before setting up the replication, it is advised to determine which nodes will act as masters to which nodes. The configuration explained here assumes that Node B is master to Node Y and Node Z is master to Node C.

3. Shutdown (or kill) mysql daemon running on node B and node Y.

4. Open /etc/my.cnf on node B (Server B) for editing. Add the following lines to the [mysqld] section of the file.
log-bin
binlog-format=row
server-id=10

5. Similarly, open /etc/my.cnf on node Y (Server Y). Add the following lines to the [mysqld] section of the file.
server-id=20
master-host=<Server B's IP>
master-port=3306
master-user=<username>
master-password=<password>

Please note that the username and password can be set to any appropriate values.

6. Start the MySQL daemon on both nodes using the following command.
Shell>mysqld -u root &

7. Once mysqld comes up gracefully on both servers, go to node B and start the mysql client. At the mysql prompt, execute the following command.

mysql> GRANT REPLICATION SLAVE ON *.* TO '<username>'@'<Server Y's IP>' IDENTIFIED BY '<password>';

The username and password should be the same as mentioned in the my.cnf file.

8. Now start the mysql client on node Y and execute the following command.
mysql> START SLAVE;

9. This should start replication between nodes B and Y, with node B as the master.

10. Execute similar steps on node Z and node C to set up replication between them with node Z as master.

11. Once this is done, replication between the clusters is set up. We can check this by inserting data on any node of either cluster and verifying that the transaction gets replicated to all nodes across both clusters, for example as shown below.
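For example, assuming the 'test.cluster_test' table from the earlier cluster test exists on both clusters, insert a row on Cluster 1 and read it back on Cluster 2 (table and values are placeholders):

On Server B (Cluster 1):
mysql> INSERT INTO test.cluster_test VALUES (2, 'cross-cluster row');

On Server Y or Z (Cluster 2):
mysql> SELECT * FROM test.cluster_test WHERE id = 2;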

12. To further check the replication status, go to any master node (Node B or Node Z) and execute the following query.

mysql> SELECT * FROM mysql.ndb_binlog_index;
+----------+-------------------+-----------------+---------+---------+---------+-----------+----------------+------------+-------+
| Position | File              | epoch           | inserts | updates | deletes | schemaops | orig_server_id | orig_epoch | gci   |
+----------+-------------------+-----------------+---------+---------+---------+-----------+----------------+------------+-------+
|      554 | ./win3-bin.000003 | 137073881251848 |       1 |       0 |       0 |         0 |              0 |          0 | 31915 |
+----------+-------------------+-----------------+---------+---------+---------+-----------+----------------+------------+-------+
1 row in set (0.00 sec)

Please note the value of the 'epoch' field.

13. Now execute the following query on the respective slave.

mysql> SELECT * FROM mysql.ndb_apply_status;
+-----------+-----------------+-----------------+-----------+---------+
| server_id | epoch           | log_name        | start_pos | end_pos |
+-----------+-----------------+-----------------+-----------+---------+
|        10 | 137073881251848 | win3-bin.000003 |       554 |     885 |
+-----------+-----------------+-----------------+-----------+---------+
1 row in set (0.00 sec)

The 'epoch' value here should match the one on the master. If so, replication is working fine.

14. If the epoch value does not match, execute the following set of statements on the slave.

mysql> STOP SLAVE;

mysql> CHANGE MASTER TO MASTER_HOST='<MASTER IP>', MASTER_PORT=3306, MASTER_LOG_FILE='<VALUE OF File FIELD IN ndb_binlog_index TABLE ON MASTER>', MASTER_LOG_POS=<VALUE OF Position FIELD IN ndb_binlog_index TABLE ON MASTER>;

mysql> START SLAVE;

15. This should resolve the issue. This concludes the configuration and testing of the MySQL database cluster and cluster replication.

Resuming the Blog

Hi all,

I was away from blogging for quite a long time, occupied with many responsibilities and personal commitments.
Hopefully I will continue bringing you technical articles to simplify DB tasks.

-Thanks

Auto Recover MyISAM Tables

Enable MyISAM Auto-Repair

MyISAM can be configured to check and repair its tables automatically. By enabling MyISAM auto-repair you let the server check each MyISAM table when it is opened: it checks whether the table was closed properly when it was last used and whether it needs any repair, and repairs the table if required.

To enable automatic checking and repair, you can start the server with the --myisam-recover option, which accepts the following values (a command-line example follows the list).

DEFAULT for the default checking.

BACKUP tells the server to make a backup of any table that it must change.

FORCE causes table recovery to be performed even if it would cause the loss of more than one row of data.

QUICK performs quick recovery: Tables that have no holes resulting from deletes or updates are skipped.
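For example, the option can be supplied directly on the command line when starting the server; this is a sketch assuming the server is started via mysqld_safe, and the values can be combined with commas:

mysqld_safe --myisam-recover=FORCE,BACKUP &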

You can also add it to the my.cnf file as shown below

[mysqld]

myisam-recover=FORCE,BACKUP

-Thanks

RHEL LVS setup for MySQL DB Nodes

I was configuring a MySQL Cluster where the application servers use a properties file to connect to the MySQL data/storage node (I configured both the data and storage nodes on the same physical server).
I wanted the applications to use a single IP address to access the DB servers in the cluster, so that if any of the nodes fails the application servers can still query the databases.
Hence I decided to use LVS, so that multiple application servers can access the DB servers through a VIP. I configured LVS accordingly and thought of sharing the setup here.

RHEL Linux LVS setup:
=====================

Pre-Requisites:
===============

REDHAT Linux Cluster Packages
VIRTUAL IP

Configuration for LB server:
============================

Configure packet forwarding on LB server

vi /etc/sysctl.conf

net.ipv4.ip_forward = 1        # change the value from 0 to 1
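To apply the change without a reboot, reload the sysctl settings and verify the value:

# sysctl -p
# cat /proc/sys/net/ipv4/ip_forward        # should now print 1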

Install the piranha package from the Red Hat Cluster package group.

Once installed, configure the password:

# piranha-passwd

To autostart the LVS services at boot, run the following commands:

# chkconfig pulse on
# chkconfig piranha-gui on
# chkconfig httpd on

Start the httpd and piranha-gui services

# service httpd start
# service piranha-gui start

Open http://localhost:3636 in a Web browser to access the Piranha Configuration Tool.
Click on the Login button and enter piranha for the Username and the administrative password you created in the Password field.

OR

Configure the LVS configuration file directly

vi /etc/sysconfig/ha/lvs.cf        # Sample configuration for LVS routers 172.16.11.1 (primary) and 172.16.11.4 (backup)

serial_no = 47
primary = 172.16.11.1
service = lvs
backup_active = 1
backup = 172.16.11.4
heartbeat = 1
heartbeat_port = 539
keepalive = 3
deadtime = 6
network = direct
debug_level = NONE
monitor_links = 0
syncdaemon = 0
virtual MySQL {
    active = 1
    address = 172.16.11.10 bond0:1
    vip_nmask = 255.255.255.255
    port = 3306
    expect = "UP"
    use_regex = 1
    send_program = "/root/mysql_mon.sh %h"
    load_monitor = none
    scheduler = wlc
    protocol = tcp
    timeout = 5
    reentry = 3
    quiesce_server = 0
    server Server1 {
        address = 172.16.11.2
        active = 1
        weight = 1
    }
    server Server2 {
        address = 172.16.11.3
        active = 1
        weight = 1
    }
}
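The send_program entry above points at /root/mysql_mon.sh, which is not shipped with piranha; it is a custom health-check script that must print the string given in 'expect' when the real server is healthy. A minimal sketch is shown below; the monitoring user, password and script path are assumptions to adapt to your setup.

#!/bin/bash
# /root/mysql_mon.sh - health check called by nanny; %h in lvs.cf passes the real server IP as $1
# Print "UP" when MySQL on the given host answers a ping, so that it matches expect = "UP".
if mysqladmin -h "$1" -u monitor -pmonitorpass ping > /dev/null 2>&1; then
    echo "UP"
else
    echo "DOWN"
fi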

Restart the pulse services on the LVS Routers

# service pulse restart

Verify the LVS. Check the ipvsadm entries:

[root@ServerLB|172.16.11.1~]~ # ipvsadm

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.11.10:mysql wlc
-> Server1  :mysql              Route   1      0          0
-> Server2  :mysql              Route   1      0          0

MySQL Real Servers configuration:
=================================

Run the following commands on both the MySQL nodes

echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
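To make these ARP settings persistent across reboots, they can also be added to /etc/sysctl.conf on both MySQL nodes and reloaded (a sketch):

net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

# sysctl -p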

Configure loopback on both the MySQL nodes

ifconfig lo:0 172.16.11.10 netmask 255.255.255.255 up

OR (Permanent changes)

vi /etc/sysconfig/network-scripts/ifcfg-lo:0

DEVICE=lo:0
IPADDR=172.16.11.10
NETMASK=255.255.255.255
NETWORK=172.16.11.0
BROADCAST=172.16.11.255
ONBOOT=yes
NAME=loopback

To bring up the IP alias the ifup command is used:
/sbin/ifup lo

Automate MySQL Dumps using Linux CRON Job

MySQL Dump Using Linux CRON Job

If you are a database administrator who would like to automate your tasks, here is a simple and very basic task that can be automated.
MySQL database dumps are a basic task every administrator performs; no matter how simple it sounds, it is most useful in failure scenarios, so you will have to perform it often.

It is easy to miss taking dumps in the daily routine, so a good alternative is to schedule the dumps to run automatically. This lets you concentrate on other tasks that might need more attention.

There are several ways to dump a database; many utilities and tools can do it, and many also give you the option to schedule the dumps through a GUI.

Follow the steps below to automate your MySQL dumps.

Firstly, you need to create a .sh file with these entries:

>vi MySQLdump.sh

mysqldump -u root -pmysql123 --all-databases --routines | gzip > /root/MySQLDB_`date '+%m-%d-%Y'`.sql.gz
mysqldump -h 172.16.21.3 -u root -pmysql123 --all-databases --routines | gzip > /root/MySQLDB.3_`date '+%m-%d-%Y'`.sql.gz

In the above two lines, the first command dumps the databases on the local machine, while the second dumps the databases from a remote host.
You need to provide the local system with access on the remote host to perform this task, for example as shown below.
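For example, on the remote host (172.16.21.3 above), a grant for the dumping user could look like the following; the privilege list, IP placeholder and password are assumptions to adapt to your environment:

mysql> GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'root'@'<local machine IP>' IDENTIFIED BY 'mysql123';
mysql> FLUSH PRIVILEGES;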

Secondly, make the script executable (chmod +x /root/MySQLdump.sh) and create a CRON job to schedule the dumps.

>crontab -e

30 15 * * * /root/MySQLdump.sh >> /root/MySQLdump.log 2>&1

The above will dump the databases every day at 15:30.

You can schedule it your way.

NDB ENGINE

For the latest developments and updates on MySQL Cluster, visit the link below.

Many limitations of previous versions have been addressed.

http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-development-5-1-ndb-7-1.html

Start MySQL at bootup

When a server restarts (for example after a failure), you would want MySQL to start at boot.

To achieve this, follow the steps below at the prompt.

>cd /etc/rc.d/rc3.d
>ln -s /etc/rc.d/init.d/mysql S98mysql

The second line creates a symlink in that directory. By doing this you are asking your server to run the MySQL init script during runlevel 3 startup.
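Alternatively, on Red Hat based systems the same effect can usually be achieved with chkconfig instead of creating the symlink by hand (assuming the init script is installed as /etc/rc.d/init.d/mysql):

>chkconfig --add mysql
>chkconfig --level 3 mysql on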

However, you can also run the commands below manually after startup.

Using files in /etc/rc.d/init.d directly, for example:

/etc/rc.d/init.d/mysql start
/etc/rc.d/init.d/mysql stop

-Death
