MySQL Cluster Setup and Replication between NDB and Non NDB Engines


Index 

1. Introduction

2. MySQL Database Cluster Details

3. Steps to set up MySQL Cluster

4. Testing the Cluster

5. Cluster Replication

6. Steps to set up Cluster Replication

Introduction

This document aims to provide the steps to set up a MySQL database cluster. The cluster is assumed to consist of three nodes: one management node and two nodes that act as both data and SQL nodes.

The document then goes on to explain the steps required to set up replication between two such independent mysql database clusters.

MySQL Database Cluster Details

MySQL Cluster is essentially a group of one or more computers configured so that data is shared amongst all the involved nodes. The storage engine used for all the database tables is NDB.
The following diagram represents a typical cluster.

Diag. 1 : Typical MySql Cluster

Steps to set up MySQL Cluster

1. For the sake of simplicity, this document assumes that the cluster is set up using three database machines, referred to as Server A, Server B and Server C. It is also assumed that no mysql component exists on the servers initially.

2. Open a PuTTY session to all three servers as the ‘root’ user.

3. Also ensure that the ‘/etc/hosts’ file contains entries for all the involved servers.

4. Before starting with the setup, first determine the role to be played by each server in the cluster. For the rest of this document, it is assumed that Server A will act as the Management Node of the cluster, whereas Servers B & C will act as Data and SQL nodes.

5. On Server A, run the following commands to check whether any mysql process is running.
Shell>ps -aef | grep mysql

Shell>ps -aef | grep ndb

If any such process is running, it is better to shut it down (or kill it) before moving ahead. Also note that if a process is running, some mysql component is already installed on the server; it is better to uninstall that component and start over. For the uninstallation procedure, see step 6.
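A minimal sketch of shutting the processes down, assuming root access and that the standard client tools are on the PATH (adjust to your environment):

Shell>mysqladmin -u root -p shutdown      # stops the SQL node gracefully
Shell>pkill ndbd                          # stops data node processes; use only if no cleaner shutdown is available
Shell>pkill ndb_mgmd                      # stops a running management node process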

6. If no process is running, the next check to be performed is whether any mysql component is already installed on the server. Execute the following command in the shell.
Shell>rpm -qa | grep -i mysql

This command will print all the mysql packages already installed on the server.
To remove a package (or all the installed packages), run the following command.
Shell>rpm -e <package_name>

It is always advisable to take a DB backup before removing mysql from the system.
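For example, a full dump can be taken with mysqldump before the packages are removed (a sketch; assumes root DB credentials and enough free space under /root):

Shell>mysqldump -u root -p --all-databases > /root/all_databases_backup.sql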

7. Once all the installed mysql packages are removed, the server is ready for a fresh installation of the MySQL Cluster software. Download the latest MySQL Cluster package from the MySQL site. The list of packages for version mysql-5.1.51 ndb-7.1.9 is given below.

MySQL-Cluster-com-client-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-clusterj-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-debuginfo-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-devel-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-embedded-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-extra-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-management-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-server-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-shared-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-shared-compat-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-storage-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-test-7.1.9a-1.rhel4.i386.rpm
MySQL-Cluster-com-tools-7.1.9a-1.rhel4.i386.rpm

8. Install all the above mysql packages except 'MySQL-Cluster-com-server-7.1.9a-1.rhel4.i386.rpm' and 'MySQL-Cluster-com-shared-compat-7.1.9a-1.rhel4.i386.rpm'. Run the following command to install each package.

Shell>rpm -ivh <package_name>

You might need to resolve all the dependencies before the packages will install.
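For instance, several packages can be passed to rpm in one invocation so that it can resolve the dependencies between them itself (an illustrative subset; use the full list from step 7):

Shell>rpm -ivh MySQL-Cluster-com-client-7.1.9a-1.rhel4.i386.rpm \
      MySQL-Cluster-com-management-7.1.9a-1.rhel4.i386.rpm \
      MySQL-Cluster-com-tools-7.1.9a-1.rhel4.i386.rpm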

9. Once the packages are installed, go to the ‘mysql’ directory. This should be available under ‘/var/lib’. In case it is not present there, search for the directory on the system. You can use the following command to search.

Shell>find / -name "mysql"

10. On the same level as this directory, there should be another directory named ‘mysql-cluster’. Move inside ‘mysql-cluster’.

11. Create a text file named ‘config.ini’ in this directory. The contents of the file are given below. Meaning of each parameter is also explained in comments.

[NDBD DEFAULT]
NoOfReplicas=2    # Number of data and sql nodes in the cluster; in our case this is 2.

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]
# Management server
SendBufferMemory=8M
ReceiveBufferMemory=8M

[NDB_MGMD]
# the IP of THIS SERVER
HostName=<Server A’s IP>

[NDBD]
# the IP of the FIRST SERVER (Data Node)
HostName=<Server B’s IP>
DataDir=/var/lib/mysql-cluster

[NDBD]
# the IP of the SECOND SERVER (Data Node)
HostName=<Server C’s IP>
DataDir=/var/lib/mysql-cluster

[MYSQLD]
[MYSQLD]

12. Now move to Server B. Re-execute steps 5, 6 and 7 on this server.

13. Once all the packages are available, install all of them except 'MySQL-Cluster-com-management-7.1.9a-1.rhel4.i386.rpm' and 'MySQL-Cluster-com-shared-compat-7.1.9a-1.rhel4.i386.rpm'. You can use the same command given in step 8 to install the packages.

14. Once the packages are installed, go to the /etc directory. Create or edit the ‘my.cnf’ file in this directory. The content of a typical my.cnf is given below.

[mysqld]

ndbcluster                                             # run NDB engine
ndb-connectstring=<Server A’s IP>        # location of MGM node

datadir=<mysql directory on Server B>
#socket=/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
default_character_set=utf8
collation_server=utf8_unicode_ci
character_set_server=utf8
skip_character_set_client_handshake
back_log = 75
max_connections = 500
key_buffer = 1G
myisam_sort_buffer_size = 64M
join_buffer_size = 1M
read_buffer_size = 1M
sort_buffer_size = 2M
table_cache = 1800
thread_cache_size = 384
wait_timeout = 7200
connect_timeout = 10
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 64M
max_connect_errors = 1000
read_rnd_buffer_size = 524288
bulk_insert_buffer_size = 8M
query_cache_limit = 4M
query_cache_size =52428800
query_cache_type = 1
query_prealloc_size = 65536
query_alloc_block_size = 131072
default-storage-engine = MyISAM
log-slow-queries
long_query_time = 5

[mysqld_safe]
nice = -5
open_files_limit = 8192

[mysqldump]
quick
max_allowed_packet = 16M

[myisamchk]
key_buffer = 256M
sort_buffer = 64M
read_buffer = 64M
write_buffer = 64M

#options for ndbd process:
[MYSQL_CLUSTER]
ndb-connectstring=<Server A’s IP>       # location of MGM node

15. Repeat steps 12, 13 and 14 for Server C.

16. Once this step is done, the cluster configuration is complete. The cluster of the three servers A, B & C is now ready.

Testing the Cluster

1. Once the cluster is configured using the steps given above, we can move on to testing it.

2. On the Management Node (i.e. Server A), run the following command.
Shell>ndb_mgmd -f <path of config.ini file>

3. This should start the management node of the cluster. Run the following set of commands to check the status of the cluster.
Shell>ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm>show

The ‘show’ command will show the status of all the nodes in the cluster. Initially this would be ‘not connected’.

4. Now move to Server B and C respectively and execute the following command on each of these servers.
Shell>ndbd --initial --connect-string=<Server A's IP>

5. This should start nodes B and C and join them to the cluster. Now go back to Server A and re-run the ‘show’ command. It will now show the ndbd nodes as connected and started.

6. Once the nodes are started and joined to the cluster, all that remains is to start the mysql daemon on the SQL nodes so that they can start sharing and replicating data. To achieve this, go to Server B and C and run the following command.
Shell>mysqld -u root &

7. Once the mysql daemon starts gracefully, the cluster is fully functional and operating. The ‘show’ command on Server A will now show the [mysqld] API nodes as connected as well.

8. You can now test the cluster. Try creating a table in any database on one of the SQL nodes (Server B or Server C). Please take care to use the ‘NDB’ engine when creating the table. The table should be replicated to the other node almost immediately, and all insertions, updates and deletions will also be replicated amongst the nodes.
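A hypothetical test, assuming a scratch database named ‘clustertest’ (any name will do); run the first three statements on one SQL node and the final SELECT on the other:

Mysql>CREATE DATABASE IF NOT EXISTS clustertest;
Mysql>CREATE TABLE clustertest.t1 (id INT NOT NULL PRIMARY KEY, name VARCHAR(50)) ENGINE=NDBCLUSTER;
Mysql>INSERT INTO clustertest.t1 VALUES (1, 'replicated row');

Mysql>SELECT * FROM clustertest.t1;     -- run this on the other SQL node; the row should already be there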

Cluster Replication
Cluster replication refers to a configuration in which data is shared between two independent clusters. It allows the data nodes of cluster 1 to exchange DB transactions with the data nodes of cluster 2, so that the data on all the data nodes of both clusters is always the same.

There are various ways of implementing this. The approach we have adopted for NGMP is depicted in the following diagram.

Diag. 2 : Cluster Replication Setup

Here we have two clusters, Cluster 1 and Cluster 2, with two data and SQL nodes each. Node B of cluster 1 acts as master to node Y of cluster 2.
Similarly, node Z of cluster 2 acts as master to node C of cluster 1.
This way, any data arriving on any node propagates to all the remaining nodes in both clusters.

Steps to set up Cluster Replication

1. For the sake of explanation, it is assumed that we have already set up two clusters. Each cluster has three nodes: one management node and two nodes acting as data and SQL nodes. Cluster 1 has Server A as management node and Servers B & C as data and SQL nodes. Cluster 2 has Server X as management node and Servers Y & Z as data and SQL nodes.

2. Before setting up the replication, it is advisable to determine which nodes will act as masters to which nodes. The configuration explained here assumes that Node B is master to Node Y and Node Z is master to Node C.

3. Shutdown (or kill) mysql daemon running on node B and node Y.

4. Open /etc/my.cnf on node B (Server B) for editing. Add following lines to the [mysqld] section of the file.
log-bin
binlog-format=row
server-id=10

5. Similarly, open the /etc/my.cnf on the node Y (Server Y). Add following lines to the [mysqld] section of the file.
server-id=20
master-host=<Server B’s IP>
master-port=3306
master-user=<username>
master-password=<password>

Please note that the username and password can be set to any appropriate value.

6. Start the mysql daemon on both the nodes using following command.
Shell>mysqld -u root &

7. Once the mysqld comes up gracefully on both the servers, go to node B and start the mysql client. On mysql prompt, execute the following command.

Mysql>GRANT REPLICATION SLAVE ON *.* TO '<username>'@'<Server Y IP>' IDENTIFIED BY '<password>';

The username and password should be same as mentioned in the my.cnf file.
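While still on node B, you can optionally note the master's current binary log coordinates; a quick check, assuming binary logging was enabled as in step 4:

Mysql>SHOW MASTER STATUS;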

8. Now start the mysql client on node Y and execute following command.
Mysql>START SLAVE;

9. This should start the replication between the nodes B and Y with node B as master.

10. Execute similar steps on node Z and node C to set up replication between them with node Z as master.

11. Once this is done, the replication between the clusters is set up. We can check it by inserting data on any node of either cluster and verifying that the transaction gets replicated across the clusters to all nodes.
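For example (a sketch reusing the hypothetical clustertest.t1 table from the earlier cluster test; substitute any NDB table of your own):

Mysql>INSERT INTO clustertest.t1 VALUES (2, 'cross-cluster row');   -- on any SQL node of cluster 1, e.g. Server B
Mysql>SELECT * FROM clustertest.t1 WHERE id = 2;                    -- on any SQL node of cluster 2, e.g. Server Y or Z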

12. To further check the replication status, go to any master node (Node B or Node Z) and execute following query.

Mysql> SELECT * FROM mysql.ndb_binlog_index;
+----------+-------------------+-----------------+---------+---------+---------+-----------+----------------+------------+-------+
| Position | File              | epoch           | inserts | updates | deletes | schemaops | orig_server_id | orig_epoch | gci   |
+----------+-------------------+-----------------+---------+---------+---------+-----------+----------------+------------+-------+
|      554 | ./win3-bin.000003 | 137073881251848 |       1 |       0 |       0 |         0 |              0 |          0 | 31915 |
+----------+-------------------+-----------------+---------+---------+---------+-----------+----------------+------------+-------+
1 row in set (0.00 sec)

Please note the value of the ‘epoch’ field.

13. Now execute following query on the respective slave

Mysql> SELECT * FROM mysql.ndb_apply_status;
+-----------+-----------------+-----------------+-----------+---------+
| server_id | epoch           | log_name        | start_pos | end_pos |
+-----------+-----------------+-----------------+-----------+---------+
|        10 | 137073881251848 | win3-bin.000003 |       554 |     885 |
+-----------+-----------------+-----------------+-----------+---------+
1 row in set (0.00 sec)

The ‘epoch’ value here should match the one on the master. If so, replication is working fine.
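You can also check the replication threads directly on the slave; both Slave_IO_Running and Slave_SQL_Running should report 'Yes':

Mysql>SHOW SLAVE STATUS\G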

14. If the epoch value does not match, execute the following set of queries on the slave.

Mysql>STOP SLAVE;

Mysql>CHANGE MASTER TO MASTER_HOST='<MASTER IP>', MASTER_PORT=3306, MASTER_LOG_FILE='<VALUE OF File FIELD IN ndb_binlog_index TABLE ON MASTER>', MASTER_LOG_POS=<VALUE OF Position FIELD IN ndb_binlog_index TABLE ON MASTER>;

Mysql>START SLAVE;

15. This should resolve the issue. This concludes the configuration and testing of the MySQL database cluster and cluster replication.

Resuming the Blog


Hi all,

I was away from blogging for quite a long time, occupied with many responsibilities and personal commitments.
Hopefully I will continue bringing you technical articles to simplify DB tasks.

-Thanks

Auto Recover MyISAM Tables


Enable MyISAM Auto-Repair

MyISAM can be configured to check and repair its tables automatically. By enabling MyISAM auto-repair you let the server check each MyISAM table when it is opened: it checks whether the table was closed properly when it was last used and whether it needs any repair, and repairs the table if required.

To enable automatic checking and repair, you can start the server with --myisam-recover using the following options (see the example after the list).

DEFAULT for the default checking.

BACKUP tells the server to make a backup of any table that it must change.

FORCE causes table recovery to be performed even if it would cause the loss of more than one row of data.

QUICK performs quick recovery: Tables that have no holes resulting from deletes or updates are skipped.
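For example, the options can be combined on the command line when starting the server (a sketch; adjust to however you normally launch mysqld):

shell>mysqld_safe --myisam-recover=BACKUP,FORCE &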

You can also add it to the my.cnf file as shown below

[mysqld]

myisam-recover=FORCE,BACKUP

-Thanks

SQL Server Integration Services 2008(SSIS) and MySQL


In my previous article I created a linked server to access and query a database. However, that is just the basic step in trying to bring two different RDBMS systems together.

In this article I will discuss creating an SSIS package to transfer data from MS SQL Server 2008 to MySQL 5.5.

Prerequisite:

___________________________________________________

SQL Server Business Intelligence Development Studio

MySQL Connector/Net 5.2.7

You can download this package from http://www.mysql.com/downloads/connector/net/

Connector/Net is a fully-managed ADO.NET driver for MySQL.

MySQL Connector/ODBC 5.1 (Which is already installed, in our previous article)

http://www.mysql.com/downloads/connector/odbc/

Brief about SSIS

___________________________________________________

SSIS provides a graphical front end to design control flow and data processing logic. Once designed, these ‘packages’ are compiled into ‘.dtsx’ packages which can then be distributed and run on any machine with the SSIS tools installed.

Packages contain two main logic flows. The first is a ‘Control Flow’, which defines a sequence of logical operations processed in order; each step is completed before the next starts, e.g.

1.  Empty out work tables in a database

2.  Populate the work tables with data

3.  Perform calculations and update the values in the work table

4.  Update OLAP cubes with the data from the work tables

5.  Run reports against the OLAP cubes.

This level of control also allows processing loops to be defined e.g.

For each file in a specified folder, read the contents of the file and write it into a specified table.

The second main logic flow is the ‘Data Flow’.  This allows for the processing of data at the record level. Data is read from a ‘Data Source’ and passes down a series of ‘Data Transformations’ to a ‘Data Destination’. These transformations can be as simple as changing the data type of fields e.g. varchar(4000) to varchar(2000) or decimal(18,2) to decimal(8,2), or can be more complex like data merges, joins, pivot tables, multicasts etc. Each transformation is represented by an icon in the designer and the icons are linked together to define the logic path.

Creating SSIS package to transfer data from MS SQL Server 2008 to MySQL

___________________________________________________

Follow the steps below to create an SSIS package for data transfer.

Go to START>MS SQL SERVER 2008>SQL SERVER BUSINESS INTELLIGENCE DEVELOPMENT STUDIO and Click on File>New>Project

Under Business Intelligence Projects, select Integration Services Project and rename the Project Name.

This will open a Package Designer Screen, where you can start designing the data flow for your package.

Configure Connection Managers for your package

___________________________________________________

Right Click in the area where it shows the Connection Manager tab and select New ADO.Net Connection.

You will see the Configuration Manager Editor window

Click New

Now you need to define the connection manager configuration for the source (i.e. MS SQL Server 2008 in our case).

In the Connection Manager Editor, by default the Provider is set to .Net Providers\SqlClient Data Provider

Use your test server name or IP for Server Name and under connect to database, select the database to use. I’m using ssistest.

Similarly, you need to create connection manager configuration for Destination (i.e. MySQL)

Right Click, under Connection Manager’s tab and Select ADO.NET Connection as mentioned above and click New on Connection Manager Editor.

For Provider click on the drop down arrow to select .Net Providers\odbc data Provider.

In the previous article we created a System DSN named MySQL. We will use the same here for ‘Use user or system data source name’.

Enter your login information and Test Connection. It should succeed, and then click Ok.

Now, on Configure ADO.NET Connection Manager screen, you can see both the source and destination are configured.

It’s now time to add control flows to the package, ensure you are on Control Flow tab and Drag Data Flow Task from the Control Flow Items under ToolBox.

Now click on the Data Flow tab above and Drag ADO NET Source and ADO NET Destination as shown below. You can rename the Source and Destination Names from Properties.

Right click on ADO NET SOURCE i.e. MS SQL Server 2008 and click EDIT. Make the below changes as mentioned on the screen.

As mentioned before, I have the ssistest database which has a few sample tables that I exported from the AdventureWorks database. I’m selecting one of the tables, named HumanResources.Department, for this example.

In the above screen you can see the test server selected with database and table to transfer the data from.

Similarly right click on MySQL (Destination) and click Edit. On this screen, under ADO.NET Connection Manager, select MySQL.root from the drop down list.

Since we do not have any destination table to map with the source, we need to click on New. It will show you the below message, click OK

Now remove the quotes and Click OK to create a table on destination as shown.

It will come back to the editor,

Click on the Mappings tab and check if all the columns are mapped and click OK.

This will complete the data flow design of our package.

Saving the SSIS Package

___________________________________________________

Once the package is created, save the copy of it to SSIS Package store or MSDB. Follow the below steps,

Click on File> Save Copy of Package.dtsx As from the menu it will open a window,

Select SSIS Package Store as the Package Location and Select the name of your test server.

Enter /File System/Export2MySQL as the package path and Click OK.

Select File > Save Copy of Package.dtsx as again from the menu and Select SQL Server as the Package Location. Select the name of your test server and fill in your authentication information.

Enter Export2MySQL as the package path and Click OK.

Once the copies are saved you can see them under Integration Services Stored Packages as shown below.

Changes on MySQL

___________________________________________________

Before running the Package, make this change on the MySQL Server

mysql> SET GLOBAL sql_mode = 'ANSI';
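You can confirm the mode is in effect before running the package (note that a global change applies only to connections opened after it):

mysql> SELECT @@GLOBAL.sql_mode;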

Executing the Package

___________________________________________________

You can execute the package either from SSMS or from BI Development Studio. I’m doing it from BI; on the right hand side of your BI screen you will find Solution Explorer, which has the Package.dtsx listed. Right click and select Execute Package.

The moment you click execute, the debugging process starts and the output is shown below on the BI screen. If the Data Flow Task changes to green, it means the package has succeeded; if it is red, it means there was an error somewhere in the flow.

On the Data Flow Tab, you can see the number of rows transferred in the execution.

There is a Progress tab on the screen where you can check the steps taken in the flow; it also lists errors and warnings, if any.

Query the MySQL Server to see if the data transfer was successful, as shown below.

This was about transferring data from MS SQL Server 2008 to MySQL; you can do the reverse by swapping the source and destination while designing the package on the Data Flow tab.

Hope this article is useful.

Link MySQL to MS SQL Server2008


I came up with the idea for this article when I tried to create a replication setup between MS SQL 2008 and MySQL. I was unable to do it since MS SQL provides only two non-SQL Server Subscriber/Publisher options.

However, I created a linked server through which I can access and query MySQL databases and tables.

A linked server (a virtual server) may be considered a more flexible way of achieving remote access, with the added benefits of remote table access and distributed queries. Microsoft manages the link mechanism via OLE DB technology. Specifically, an OLE DB data source points to the specific database that can be accessed using OLEDB.

Let’s try and create MySQL linked Server on MS SQL Server 2008 and query a database.

Step 1: Create ODBC DSN for MySQL

To create ODBC DSN you need to download the MySQL Connector/ODBC Drivers 5.1 from http://www.mysql.com/downloads/connector/odbc/ .

Once you download and install the ODBC drivers, it’s time to create the DSN. Initially check if the drivers are listed under your data sources from CONTROL PANEL>Administrative Tools>Data Sources(ODBC)

After you see the drivers listed. Follow the Images to setup MySQL ODBC DSN.

On System DSN tab click Add button,

After you click OK and Close the window, MySQL ODBC will be added to System DSN as shown below.

Step 2: Create Linked Server through SSMS

Under Object Browser expand Server Objects and right click Linked Servers as shown below

Click New Linked Server. It brings up a window; fill in the information as shown to create the linked server under the General tab.

In the above screen I have entered the following details to create a linked server for MySQL.

Provider: Microsoft OLE DB Provider for ODBC Drivers

Product name:  MySQL

Data Source: MySQL (This is the system DSN created earlier)

Provider String:  DRIVER={MySQL ODBC 5.1 Driver};SERVER=localhost;PORT=3306;DATABASE=repltest; USER=user;PASSWORD=password;OPTION=3;

(This string is providing all the information to connect to MySQL using the ODBC)

Location: Null

Catalog: repltest (Database name to access and query)

Now on the Security tab, select Be made using this security context option and enter the credentials to connect to MySQL server.

Also finally under Server Options tab, change RPC and RPC Out properties to True, by default they are set to False.

Click Ok, after making all the mentioned changes. This will create the linked server and it will be listed under SSMS Object Browser. Right Click on the MYSQL linked server and click Test Connection.

It should show you the above once succeeded. You can also browse the MYSQL linked server to check if the catalogs are displayed by expanding it.

Step 3: Create Linked Server using T-SQL

While the linked server can be created using the built-in wizard of Management Studio, it can also be created using T-SQL statements as in the following listing (run both statements; the first one creates the linked server and the second the login).

Exec master.dbo.sp_addlinkedserver
@server=N'localhost',
@srvproduct=N'MySQL',
@provider=N'MSDASQL',
@datasrc=N'MySQL'

Exec master.dbo.sp_addlinkedsrvlogin
@rmtsrvname=N'localhost',
@useself=N'False',
@locallogin=NULL,
@rmtuser=N'user',
@rmtpassword=N'<your password>'

Step 4: Accessing and Querying MySQL through SSMS

Open a new query tab, and run a select query [ select * from openquery(MYSQL, 'select * from reptab') ]

Since we have also enabled RPC, we can test the same using the following query [ Execute ('select * from reptab') at MYSQL ]

If it returns the same results, the RPC is configured fine.
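Writes can also be pushed through the linked server with OPENQUERY; a hypothetical example, assuming reptab has id and name columns (substitute your actual column list):

INSERT OPENQUERY(MYSQL, 'select id, name from reptab')
VALUES (100, 'inserted from SSMS');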

Follow all the above steps to configure a working linked server to MySQL.


Walkthrough: Cluster Setup SQL/Win2008


Problem
After going through the SQL Server 2008 installation process you must have noticed how different SQL Server 2008 installation is from its previous versions. Now, we will be installing SQL Server 2008 on a Windows Server 2008 cluster. Windows Server 2008 has a lot of differences from its previous versions, and one of them is the clustering feature.

Solution
There have been a lot of changes regarding clustering between Windows Server 2003 and Windows Server 2008. It took quite a lot of effort for us to build a cluster in Windows Server 2003 – from making sure that the server hardware for all nodes is cluster-compatible to creating resource groups. Microsoft has redefined clustering with Windows Server 2008, making it simpler and easier to implement. Now that both SQL Server 2008 and Windows Server 2008 have been out in the market for quite some time, it is a must to prepare ourselves to set up and deploy a clustered environment running both. Installing SQL Server on a stand-alone server or member server in the domain is pretty straight-forward. Dealing with clustering is a totally different story. The goal of this series of tips is to help DBAs who may be charged with installing SQL Server on a Windows Server 2008 cluster.

Prepare the cluster nodes

I will be working on a 2-node cluster throughout the series and you can extend it by adding nodes later on. You can do these steps on physical hardware or in a virtual environment. I opted to do this on a virtual environment running VMWare. To start with, download and install a copy of the evaluation version of Windows Server 2008 Enterprise Edition. This is pretty straight-forward and does not even require any product key or activation. The evaluation period runs for 60 days and can be extended up to 240 days, so you have more than enough time to play around with it. Just make sure that you select at least the Enterprise Edition during the installation process and have at least 12GB of disk space for your local disks. This is to make sure you have enough space for both Windows Server 2008 and the binaries for SQL Server 2008. A key thing to note here is that you should already have a domain on which to join these servers and that both have at least 2 network cards – one for the public network and the other for the heartbeat. Although you can run a cluster with a single network card, it isn’t recommended at all. I’ll lay out the details of the network configuration as we go along. After the installation, my recommendation is to immediately install .NET Framework 3.5 with Service Pack 1 and Windows Installer 4.5 (the one for Windows Server 2008 x86 is named Windows6.0-KB942288-v2-x86.msu). These two are prerequisites for SQL Server 2008 and would speed up the installation process later on.

Carve out your shared disks

We had a lot of challenges in Windows Server 2003 when it came to the shared disks used for our clusters. For one, the 2TB limit, which has a lot to do with the master boot record (MBR), has been overcome by GUID Partition Table (GPT) support in Windows Server 2008. This allows you to have 16 exabytes for a partition. Another has been the use of directly attached SCSI storage. This is no longer supported for Failover Clustering in Windows Server 2008. The only supported options are Serial Attached SCSI (SAS), Fibre Channel and iSCSI. For this example, we will be using iSCSI storage with the help of an iSCSI Software Initiator to connect to a software-based target. I am using StarWind’s iSCSI SAN to emulate a disk image that my cluster will use as shared disks. In preparation for running SQL Server 2008 on this cluster, I recommend creating at least 4 disks – one for the quorum disk, one for MSDTC, one for the SQL Server system databases and one for the user databases. Your quorum and MSDTC disks can be as small as 1GB, although Microsoft TechNet specifies a 512MB minimum for the quorum disk. If you decide to use iSCSI as your shared storage in a production environment, a dedicated network should be used so as to isolate it from all other network traffic. This also means having a dedicated network card on your cluster nodes to access the iSCSI storage.

Present your shared disks to the cluster nodes

Windows Server 2008 comes with iSCSI Initiator software that enables connection of a Windows host to an external iSCSI storage array using network adapters. This differs from previous versions of Microsoft Windows where you need to download and install this software prior to connecting to an iSCSI storage. You can launch the tool from Administrative Tools and select iSCSI Initiator.

To connect to the iSCSI target:

In the iSCSI Initiator Properties page, click on the Discovery tab.

Under the Target Portals section, click on the Add Portal button.

In the Add Target Portal dialog, enter the DNS name or IP address of your iSCSI Target and click OK. If you are hosting the target on another Windows host as an image file, make sure that you have your Windows Firewall configured to enable inbound traffic to port 3260. Otherwise, this should be okay.

Back in the iSCSI Initiator Properties page, click on the Targets tab. You should see a list of the iSCSI Targets that we have defined earlier

Select one of the targets and click on the Log on button.

In the Log On to Target dialog, select the Automatically restore this connection when the computer starts checkbox. Click OK.

Once you are done, you should see the status of the target change to Connected. Repeat this process for all the target disks we initially created on both of the servers that will become nodes of your cluster.

Once the targets have been defined using the iSCSI Initiator tool, you can now bring the disks online, initialize them, and create new volumes using the Server Manager console. I won’t go into much detail on this process as it is similar to how we used to do it in Windows Server 2003, except for the new management console. After the disks have been initialized and volumes created, you can try logging in to the other server and verify that you can see the disks there as well. You can rescan the disks if they haven’t yet appeared.

Adding Windows Server 2008 Application Server Role

Since we will be installing SQL Server 2008 later on, we will have to add the Application Server role on both of the nodes. A server role is a program that allows Windows Server 2008 to perform a specific function for multiple clients within a network. To add the Application Server role,

Open the Server Manager console and select Roles.

Click the Add Roles link.  This will run the Add Roles Wizard

In the Select Server Roles dialog box, select the Application Server checkbox. This will prompt you to add features required for Application Server role. Click Next.

In the Application Server dialog box, click Next.

In the Select Role Services dialog box, select Incoming Remote Transactions and Outgoing Remote Transactions checkboxes. These options will be used by MSDTC. Click Next

In the Confirm Installation Selections dialog box, click Install. This will go thru the process of installing the Application Server role

In the Installation Results dialog box, click Close. This completes the installation of the Application Server role on the first node. You will have to repeat this process for the other server

We have now gone thru the process of preparing the cluster nodes at this point. In the next tip in this series, we will go thru the process of installing the Failover Cluster feature, validating the nodes that will become a part of the cluster and creating the cluster itself. And that is just on the Windows side. Once we manage to create a working Windows Server 2008 cluster, that’s the only time we can proceed to install SQL Server 2008.

To continue this series on Installing SQL Server 2008 on a Windows Server 2008 Cluster, we will look at building our Windows Server 2008 cluster in preparation for SQL Server 2008. We have completed the installation of the Application Server role in both of the servers that we will be using as part of our cluster. This tip will walk you through the installation of the Failover Cluster Feature, validating the servers that will be a part of the cluster, and creating the cluster.

Adding the Failover Cluster Feature

Windows Server 2008 calls them features which are simply software programs that can support or augment the functionality of server roles. Since we’ve already installed the Application Server role in our server, let’s define a feature from this perspective: failover clustering simply augments the role as an application server by making it highly available. It is disabled by default, unlike in Windows Server 2003 so we need to add it on both of the servers that will form a part of our cluster.

To add the Failover Clustering feature:

Open the Server Manager console and select Features.

Click the Add Features link. This will run the Add Features Wizard

In the Select Features dialog box, select the Failover Clustering checkbox and click Next.

In the Confirm Installation Selections dialog box, click Install to confirm the selection and proceed to do the installation of the Failover Clustering feature.

In the Installation Results dialog box, click Close. This completes the installation of the Failover Clustering feature on the first node.

That’s how simple and easy it is to add the Failover Clustering feature in Windows Server 2008. You will have to do this on both nodes to complete the process. Once you have managed to install the Failover Cluster Feature on both nodes, we can proceed to validate our servers if they are ready for clustering.

Running the Windows Server 2008 Validate Cluster Configuration

Unlike in previous versions of Windows where Microsoft had some sort of a hardware compatibility list (HCL) from which we had to find and select components tested to be clustering-supported, this wizard is like the “seal” that tells you whether or not the hardware you are using is supported. In fact, Microsoft has partnered with hardware vendors to create the Failover Cluster Configuration Program to make the acquisition of hardware for Windows Server 2008 Failover Clustering very easy and simple. Basically, your hardware will be supported for clustering if it meets these two requirements: the server has a “Certified for Windows Server 2008” logo and it passes this wizard.

One word of caution: do not skip any error message that this wizard generates in the final report. Doing so would simply mean that your configuration going forward will be unsupported. You only need to run this wizard on either of the nodes.

To run the Validate Cluster Configuration Wizard:

Open the Failover Cluster Management console

Under the Management section, click the Validate a Configuration link. This will run the Validate a Configuration Wizard

In the Select Servers or a Cluster dialog box, enter the hostnames of the nodes that you want to add as members of your cluster and click Next.

In the Testing Options dialog box, click Next to run all the necessary tests to validate whether or not the nodes are OK for clustering. If this is the first time you run the wizard, you must run all the tests for validation. For succeeding runs, especially when adding hardware like disk subsystems or network cards to your cluster nodes, you can selectively choose which tests to run as long as you have initially validated your hardware by running all tests.

In the Confirmation dialog box, click Next. This will run all the necessary validation tests.

In the Summary dialog box, verify that the report returns successful results.

If you have reached this part of the process, the wizard will tell you whether or not you can proceed to the next step of creating your cluster. As I’ve mentioned earlier, do not attempt to go any further if this report returned any error messages. I have seen some installations where the shared disk is displaying an error in the validation report prompting me to reconfigure the shared disk. This could mean removing the disk subsystem from both nodes, creating new disks and presenting them on the nodes as mentioned. It would be best to work with your storage engineers or your system administrators when in doubt as different vendors may have different implementations of their disk subsystems.

I’ve also seen issues pertaining to IPv6. This is a fairly common issue which can easily be resolved. The error message in the cluster validation report looks something similar to the one displayed below

Verifying  that there are no duplicate IP addresses between any pair of nodes.

Found  duplicate IP address fe80::100:7f:fffe%13 on node node1.domain.local adapter

Local Area Connection* X and node node2.domain.local adapter Local Area  Connection* X.

This blog post outlines the steps in resolving this issue. In a few cases, however, I needed to disable the Teredo Tunneling Pseudo-Interface adapter from Device Manager before I got a successful summary report generated by the Validate Cluster Configuration wizard. The bottom line is simply to make sure that the report returns a successful validation before creating the cluster.

Creating the Windows Server 2008 Cluster

You’ve finally reached this step in the entire process. This means you are now ready to create your Windows Server 2008 cluster. It’s as easy as running the Create Cluster Wizard on either of the nodes. Make sure that you have your virtual hostname and IP address ready before proceeding

To run the Create a Cluster Wizard:

Open the Failover Cluster Management console

Under the Management section, click the Create a Cluster link. This will run the Create Cluster Wizard

In the Select Servers dialog box, enter the hostnames of the nodes that you want to add as members of your cluster and click Next.

In the Access Point for Administering the Cluster dialog box, enter the virtual hostname and IP address that you will use to administer the cluster. Click Next

In the Confirmation dialog box, click Next. This will configure Failover Clustering on both nodes of the cluster and add DNS and Active Directory entries for the cluster hostname.

In the Summary dialog box, verify that the report returns successful results.

Congratulations! You now have a working Windows Server 2008 cluster. Notice how easy it was to do all of these with fewer steps and less configuration compared to its predecessors. You can now validate whether your cluster is working or not. A simple test would be to do a continuous PING on the virtual hostname or IP address that you have assigned to your cluster. Reboot one of the nodes and see how your PING test responds. At this point, you are now ready to install SQL Server 2008.

OPTIONAL: Configuring your cluster quorum

This section is sometimes necessary especially when Windows Server 2008 decides to take a different disk subsystem as a quorum other than the one you’ve originally intended it to. Notice that in the Create a Cluster wizard, there was no option to select the disk subsystem that we can use as a quorum disk (now called the “witness” disk). By default, the Create a Cluster wizard will use the first available disk as the witness disk. I have seen cases where the originally planned witness disk is sized 1GB while the other shared disks are sized 200 GB. The wizard then selects one of the 200GB-sized disks as a witness disk, requiring you to move the witness disk to the original allocation. To validate, check the Storage node under the Failover Cluster Management console

To configure the quorum in a failover cluster:

Open the Failover Cluster Management console

Select the name of the cluster you have just created. Right-click on the cluster, select More Actions, and click Configure Cluster Quorum Settings… This will open up the Configure Cluster Quorum Wizard

In the Select Quorum Configuration dialog box, select the Node and Disk Majority (recommended for your current number of nodes) option. The options presented will depend on how you want your cluster configured. The current selection is for a 2-node cluster configuration

In the Configure Storage Witness dialog box, validate and select the disk that you want your cluster to use as the quorum/witness disk. Click Next

In the Confirmation dialog box, verify that the disk configuration for the quorum/witness disk is correct. Click Next

In the Summary dialog box, verify that all the configurations are successful.

For more details on Configuring the Quorum in a Failover Cluster in Windows Server 2008, check out this Microsoft TechNet article.

To continue this series on Installing SQL Server 2008 on a Windows Server 2008 Cluster, we will look at installing SQL Server 2008 in a failover cluster. Firstly, we completed the installation of the Application Server role on both of the servers that we will be using as part of our cluster. Secondly, we walked through the installation of the Failover Cluster Feature, validated the servers that will be a part of the cluster, and created the cluster. In this tip, we will proceed to install SQL Server 2008 in a clustered Windows Server 2008 environment.

Installing and Configuring MSDTC

The Microsoft Distributed Transaction Coordinator (MSDTC) is a transaction manager that permits client applications to include several different data sources in one transaction and which then coordinates committing the distributed transaction across all the servers that are enlisted in the transaction. A lot of people ask why we need to install MSDTC prior to installing SQL Server. If you are using distributed transactions or running SQL Server on a cluster, this is definitely a must. SQL Server uses the MSDTC service for distributed queries and two-phase commit transactions, as well as for some replication functionality.

Configuring MS DTC in Windows Server 2003 clusters, as defined in this Microsoft KB article, is not all that straightforward. Windows Server 2008 made it simpler by providing a more straightforward process with fewer steps and less configuration.

To install and configure MSDTC:

Open the Failover Cluster Management console on any of the cluster nodes.

Under the cluster name, right-click on Services and Applications and select Configure a Service or Application. This will run the High Availability Wizard.

In the Service or Application dialog box, select Distributed Transaction Coordinator (DTC) and click Next.

In the Client Access Point dialog box, enter the name and IP address of the clustered MSDTC. These should be a different IP address and host name from the ones the Windows Server 2008 cluster is already using. Click Next.

In the Select Storage dialog box, select the disk subsystem that will be used by MSDTC. These disk subsystems have to be defined as available storage in your cluster. In the example below, I have used the disk volume F:\ and left the disk volume E:\ for SQL Server later in the installation process. Click Next

In the Confirmation dialog box, validate the configuration you have selected for MSDTC and click Next

In the Summary dialog box, click Close. This completes the installation of MSDTC on the cluster.

You can validate your installation of MSDTC by expanding the Services and Applications node and check the cluster name of MSDTC.  Make sure that all of the dependency resources are online

Installing SQL Server 2008 on a Windows Server 2008 cluster

You’ve gone this far, don’t stop now. Only after we have managed to prepare everything can we proceed to install SQL Server 2008 on this cluster. Since we’ve already installed .NET Framework 3.5 with Service Pack 1 and Windows Installer 4.5, we no longer have to worry about them as they both are prerequisites whether you are doing a single server or a cluster installation. There are two options to install SQL Server 2008 on a cluster. The first one is by using the Integrated failover cluster install with Add Node option and the second one is the Advanced/Enterprise installation option. The process outlined below will take into account the first option.

To install SQL Server 2008:

Run setup.exe from the installation media to launch SQL Server Installation Center. Click on the Installation link on the left-hand side

Click the New SQL Server failover cluster installation link. This will run the SQL Server 2008 Setup wizard

In the Setup Support Rules dialog box, validate that the checks return successful results and click Next.

In the Product Key dialog box, enter the product key that came with your installation media and click Next.

In the License Terms dialog box, click the I accept the license terms check box and click Next. You probably haven’t read one of these, but if you feel inclined go for it.

In the Setup Support Rules dialog box, click Install. Validate that the checks return successful results. If the checks returned a few warnings, make sure you fix them before proceeding with the installation. An example of this is the Network binding order. The public network cards should be first on both nodes. Also, you can disable NETBIOS and DNS registration on the network cards to avoid network overhead. Be sure to check your binding order as well.  For more details on the network binding order warning, see Microsoft KB 955963.

For the Windows Firewall, make sure that you open the appropriate port number on which SQL Server will communicate. You can do this after the installation. Alternatively, you can disable Windows Firewall during the installation and enable it later with the proper configuration. Click Next to proceed.

In the Feature Selection dialog box, select only the components that you want installed. For the Shared feature directory, you can keep the default path if you have sufficient disk space on your C:\ drive or anywhere that is a local disk as this will be used by the SQL Server installation process later on. The directory for the clustered database engine will be different. Click Next.

In the Instance Configuration dialog box, enter the SQL Server Network Name. This is the name that will be available on the network for the clients. This will vary depending on your selection of whether it is a default or named instance. In this example, default instance is selected.

A couple of things need highlighting in this section. By default, the instance name is used as the Instance ID. This is used to identify installation directories and registry keys for your instance of SQL Server and is helpful when you want to run multiple instances in a cluster. This is the case for default instances and named instances. For a default instance, the instance name and instance ID would be MSSQLSERVER. To use a non-default instance ID, you should select the Instance ID box and specify a value.

The section on Detected SQL Server instances and features on this computer would make sense if there are other SQL Server instances running on your server.

In the Disk Space Requirements dialog box, check that you have enough space on your local disks to install the SQL Server 2008 binaries and click Next.

In the Cluster Resource Group dialog box, check the resources available on your Windows Server 2008 cluster. This will tell you that a new Resource Group will be created on your cluster for SQL Server. To specify the SQL Server cluster resource group name, you can either use the drop-down box to specify an existing group to use or type the name of a new group to create it. Click Next.

In the Cluster Disk Selection dialog box, select the available disk groups that are on the cluster for SQL Server 2008 to use. In this example, two clustered disk groups – APPS and APPS2 – have been selected to be used by SQL Server 2008. I will be using one disk resource for the system databases while the other one for the user databases. Click Next.

In the Cluster Network Configuration dialog box, enter the IP address and subnet mask that your SQL Server 2008 cluster will use. Deselect the checkbox under the DHCP column as you will be using static IP addresses. If you have not disabled your IPv6 adapters and protocols, it would be better to uncheck the row for IPv6

In the Cluster Security Policy dialog box, accept the default value of Use service SIDs (recommended). In Windows Server 2003, we specify domain groups for all SQL Server services but in Windows Server 2008, this is the recommended option. For more information on using service SIDs for SQL Server 2008, check out this MSDN article

In the Server Configuration dialog box, enter the credentials that you will use for your SQL Server service accounts in the Service Accounts tab. In the Collation tab, select the appropriate collation to be used by SQL Server. Note that the startup type is set to manual for all cluster-aware services and cannot be changed during the installation process. Click Next.

In the Database Engine Configuration dialog box, select the appropriate Authentication Mode. If you want to add the currently logged on user to be a part of the SQL Server administrators group, click the Add Current User button.

On the Data Directories tab, enter the path where your system and user database files will be created. This will default to the first shared disk in the cluster so in case you want to change it to the other shared disks to be used by SQL Server 2008, modify accordingly. If you intend to use the new FILESTREAM feature, click the FILESTREAM tab and set the appropriate configurations. Click Next

In the Error and Usage Reporting dialog box, click Next.

In the Cluster Installation Rules dialog box, verify that all checks are successful and click Next.

In the Ready to Install dialog box, verify that all configurations are correct. Click Next.

In the Complete dialog box, click Close. This concludes the installation of a SQL Server 2008 Failover Cluster

At the completion of a successful installation and configuration of the node, you now have a fully functional failover cluster instance. To validate, open the Failover Cluster Management console, and click on SQL Server (MSSQLSERVER) under Services and Applications. Make sure that all dependencies are online

Although we do have a fully functioning SQL Server 2008 failover cluster, it does not have high-availability at this point in time because there is only one node in the failover cluster. We still have to add the second node to the SQL Server 2008 cluster. In the last part of this series, we will add the second node in the failover cluster and install the latest cumulative update

To continue this series on Installing SQL Server 2008 on a Windows Server 2008 Cluster, we will look at adding a node in a SQL Server 2008 failover cluster. In this third step, we will proceed to add a node in a SQL Server 2008 failover cluster and apply the latest cumulative updates.

Adding a node on a SQL Server 2008 Failover Cluster

Now that you have a working failover cluster, we will make it highly available by adding nodes. The number of nodes you can add in a failover cluster depends on the editions of SQL Server that you will use. A Standard Edition of SQL Server 2008 can support up to two (2) nodes in a failover cluster while the Enterprise Edition supports up to sixteen (16) nodes, which is practically the limit for the Enterprise Edition for Windows Server 2008. As most of the steps in this process are similar to the one when you were installing the failover cluster, I’ve skipped most of the screenshots.

To add a node on a SQL Server 2008 failover cluster:

Run setup.exe from the installation media to launch SQL Server Installation Center

Click on the Installation link on the left-hand side. Click the Add node to a SQL Server failover cluster link. This will run the SQL Server 2008 Setup wizard.

There are a couple of glitches when you get to this point. One of them is a popup error with the message “failed to retrieve data for this request” during this step. I’ve seen a Microsoft Connect item on this, but it refers to CTP6, so I was thinking it had already been resolved. After a few searches and questions asked, SQL Server MVP Geoff Hiten advised that prior to adding another node in the cluster, any cumulative update should be pre-applied to the node before the main installation, as the cluster install of the RTM version has some bugs. This creates a patched install script for the RTM installer to use. The fix started with cumulative update 1 so, technically, you can apply any cumulative update. Sounds weird, but it works. You still have to apply the patch after the installation.

In the Setup Support Rules dialog box, validate that the checks return successful results and click OK.

In the Product Key dialog box, enter the product key that came with your installation media and click Next.

Again, a few glitches on this step. This might seem unusual as you are only being asked about the Product Key. There is also a Microsoft Connect item for this which basically asks you to run setup.exe from the command prompt. There is a popup error with the message “The current SKU is invalid” during this step. This usually happens when you use media with a supplied product key, like the one that comes with an MSDN subscription. What worked for me was to copy the installation media to a local disk, locate the DefaultSetup.ini file among the installation files, and delete it or move it to a different location. If you opt to delete the file, make sure you note down the product key written in this file as you will need to key it in manually during the installation process. This forum post will give you quite a few options to solve this issue.

In the License Terms dialog box, click the I accept the license terms check box and click Next.

In the Setup Support Rules dialog box, click Install. Validate that the checks return successful results. Again, make sure to fix any errors returned by this check before proceeding with the installation.

In the Cluster Node Configuration dialog box, validate that the information for the existing SQL Server 2008 cluster is correct.

In the Service Accounts dialog box, verify that the information is the same as what you have used to configure the first node.

In the Error and Usage Reporting dialog box, click Next

In the Add Node Rules dialog box, verify that all checks are successful and click Next

In the Ready to Add Node dialog box, verify that all configurations are correct and click Install

In the Complete dialog box, click Close. This concludes adding a node to a SQL Server 2008 Failover Cluster

You can validate your cluster installation by expanding the Services and Applications node and check the cluster name of your SQL Server instance.  You can now see an option to move the service to another node, in this case, the node you’ve just added in your failover cluster

Applying patches on a SQL Server 2008 cluster

Part of the tasks of a DBA is to apply patches on the database engine and a SQL Server 2008 failover cluster is no exception. In fact, it is not as straight-forward as applying patches and service packs on a stand-alone server. It is important to note that when applying patches or service packs to a SQL Server failover cluster, you should apply them first on the passive node. After completing the installation on the passive node, failover the SQL Server 2008 cluster resource to this node making it the active node. Once the SQL Server service and all other dependencies are up, you can, then, apply the patches on the new passive node. The latest available patch for SQL Server 2008 is cumulative update 4 and is available for request from Microsoft.  For more information, check out this Microsoft KB article. You will have to request for the patch from Microsoft as it is not available from the Microsoft Download Center. The screenshots below show cumulative update 3 (version 10.0.1600.22) but the process is basically the same. Also, note that even though you may have already applied the cumulative update due to the bug mentioned above for adding a node in a failover cluster, you still have to apply the patch on both nodes

To apply patches on a SQL Server 2008 failover cluster node:

Run SQLServer2008-KB960484-x86.exe (this would depend on the cumulative update that you want to apply) from the hotfix package you have requested from Microsoft.

In the Welcome dialog box, validate that the checks return successful results.

In the License Terms dialog box, click the I accept the license terms check box and click Next.

In the Select Features dialog box, validate the SQL Server 2008 components by clicking the check boxes. The Upgrade Status field will tell you whether or not the patch has already been applied. Click Next.

In the Ready to Update dialog box, verify that all configurations are correct and click Patch.

In the Update Progress dialog box, validate that the installation was successful.

In the Complete dialog box, click Close. This concludes patching the passive node of a SQL Server 2008 Failover Cluster.


After successfully installing the patch on the passive node, move the SQL Server 2008 cluster resource to this node so it becomes the new active node. Make sure that all the SQL Server 2008 cluster dependencies are online prior to applying the patch on the other node. Repeat the process outlined above on the new passive node. A more comprehensive approach for applying a SQL Server 2008 patch to a failover cluster instance is defined in this Microsoft KB article.
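
To confirm the build on each node after patching, you can query the clustered instance for its version. This is only a quick sketch; it assumes the sqlcmd utility is installed and that <VirtualServerName> is the network name of your clustered SQL Server instance:

sqlcmd -S <VirtualServerName> -E -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')"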

Congratulations! You now have a working two-node SQL Server 2008 failover cluster running on Windows Server 2008.

RHEL LVS setup for MySQL DB Nodes

11 Comments

I was configuring a MySQL Cluster in which the application servers use a properties file to connect to the MySQL SQL/Data node (I configured both the SQL and Data nodes on the same physical server).
I wanted the application to use a single IP address to access the DB servers in the cluster, so that if any of the nodes fails, the application servers can still query the databases.
Hence I decided to use LVS, so that multiple application servers can reach the DB servers through a VIP. I configured LVS to achieve this, and I am sharing the setup below.

RHEL Linux LVS setup:
=====================

Pre-Requisites:
===============

REDHAT Linux Cluster Packages
VIRTUAL IP

Configuration for LB server:
============================

Configure packet forwarding on LB server

vi /etc/sysctl.conf

net.ipv4.ip_forward = 1        # set this value to 1 to enable packet forwarding
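
After editing the file, you can apply the change without a reboot and verify it; a quick sketch:

# sysctl -p                                  # reload settings from /etc/sysctl.conf
# cat /proc/sys/net/ipv4/ip_forward          # should print 1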

Install the piranha package from the Red Hat cluster group.
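
A sketch of the install, assuming the machine can reach a yum repository (or RHN channel) that carries the Red Hat cluster packages:

# yum install piranha ipvsadm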

Once installed, configure the password:

# piranha-passwd

To start the LVS services automatically at boot, run the commands below:

# chkconfig pulse on
# chkconfig piranha-gui on
# chkconfig httpd on

Start the httpd and piranha-gui services:

# service httpd start
# service piranha-gui start

Open http://localhost:3636 in a Web browser to access the Piranha Configuration Tool.
Click on the Login button and enter piranha for the Username and the administrative password you created in the Password field.

OR

Alternatively, edit the LVS configuration file directly:

vi /etc/sysconfig/ha/lvs.cf        # sample configuration for LVS routers 172.16.11.1 (primary) and 172.16.11.4 (backup)

serial_no = 47
primary = 172.16.11.1
service = lvs
backup_active = 1
backup = 172.16.11.4
heartbeat = 1
heartbeat_port = 539
keepalive = 3
deadtime = 6
network = direct
debug_level = NONE
monitor_links = 0
syncdaemon = 0
virtual MySQL {
active = 1
address = 172.16.11.10 bond0:1
vip_nmask = 255.255.255.255
port = 3306
expect = "UP"
use_regex = 1
send_program = "/root/mysql_mon.sh %h"
load_monitor = none
scheduler = wlc
protocol = tcp
timeout = 5
reentry = 3
quiesce_server = 0
server Server1 {
address = 172.16.11.2
active = 1
weight = 1
}
server Server2 {
address = 172.16.11.3
active = 1
weight = 1
}
}
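
The configuration above points send_program at /root/mysql_mon.sh; LVS substitutes the real server’s address for %h and compares the program’s output against the expect string. The original script is not included in this post, so the following is only a minimal sketch of what such a health check might look like (the monitor user and password are placeholders you would create yourself):

#!/bin/bash
# /root/mysql_mon.sh - prints UP if the given MySQL server answers a simple query
HOST="$1"
if mysql -h "$HOST" -u monitor -pmonitorpass -e "SELECT 1;" > /dev/null 2>&1
then
    echo "UP"
else
    echo "DOWN"
fi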

Restart the pulse services on the LVS Routers

# service pulse restart

Verify the LVS. Check the ipvsadm entries:

[root@ServerLB|172.16.11.1~]~ # ipvsadm

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.11.10:mysql wlc
-> Server1  :mysql              Route   1      0          0
-> Server2  :mysql              Route   1      0          0

MySQL Real Servers configuration:
=================================

Run the commands below on both of the MySQL nodes:

echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
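
These echo commands take effect immediately but are lost on reboot. To make the ARP settings persistent, they can also be added to /etc/sysctl.conf on each real server (a sketch; the "all" entries are commonly set as well):

net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2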

Configure loopback on both the MySQL nodes

ifconfig lo:0 172.16.11.10 netmask 255.255.255.255 up

OR (Permanent changes)

vi /etc/sysconfig/network-scripts/ifcfg-lo:0

DEVICE=lo:0
IPADDR=172.16.11.10
NETMASK=255.255.255.255
NETWORK=172.16.11.0
BROADCAST=172.16.11.255
ONBOOT=yes
NAME=loopback

To bring up the IP alias, run the ifup command:
/sbin/ifup lo

Automate MySQL Dumps using Linux CRON Job

16 Comments

MySQL Dump Using Linux CRON Job

If you are a database administrator, you will probably want to automate your routine tasks. Here is a simple and very basic task that can be automated.
MySQL database dumps are among the most basic tasks every administrator performs; no matter how simple it sounds, a dump is most useful in failure scenarios, so you will have to perform this task very often.

It is very easy to miss taking dumps as part of a daily routine, so a good alternative is to schedule them to run automatically. This lets you concentrate on other tasks that might need more attention.

There are several ways to dump a database, and many utilities and tools are available for it. Many tools also give you the option to schedule the dumps through a GUI.

Follow the below steps to automate your MySQL dump.

Firstly, create a .sh file with these entries:

>vi MySQLdump.sh

#!/bin/bash
# dump all databases (including stored routines), gzipped with a date stamp in the file name
mysqldump -u root -pmysql123 --all-databases --routines | gzip > /root/MySQLDB_`date '+%m-%d-%Y'`.sql.gz
mysqldump -h 172.16.21.3 -u root -pmysql123 --all-databases --routines | gzip > /root/MySQLDB.3_`date '+%m-%d-%Y'`.sql.gz

In the above two lines, the first dumps the databases on the same machine, while the second dumps the databases from a remote host (172.16.21.3).
You need to grant the local system access on the remote host to perform this task.
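
Note that the password appears in plain text in the script and in the server’s process list while the dump runs. If that is a concern, one option (a sketch, assuming mysqldump reads the [client] section of root’s ~/.my.cnf) is to move the credentials into an option file readable only by root and drop the -p option from the commands above:

# /root/.my.cnf  (protect it with: chmod 600 /root/.my.cnf)
[client]
user=root
password=mysql123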

Secondly, make the script executable (chmod +x /root/MySQLdump.sh) and create a CRON job to schedule the dumps.

>crontab -e

30 15 * * * /root/MySQLdump.sh >> /root/MySQLdump.log 2>&1

The above will dump the database every day at 15:30.

You can schedule it your way.
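
For example, a weekly dump every Sunday at 02:00 would look like this:

0 2 * * 0 /root/MySQLdump.sh >> /root/MySQLdump.log 2>&1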

NDB ENGINE

3 Comments

For the latest developments and updates on MySQL Cluster, visit the link below.

Many limitations of previous versions have been addressed.

http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-development-5-1-ndb-7-1.html

Start MySQL at bootup

6 Comments

If the server fails or is rebooted, you would want MySQL to start automatically at boot.

To achieve this, run the following commands at the prompt:

>cd /etc/rc.d/rc3.d
>ln -s /etc/rc.d/init.d/mysql S98mysql

The second command creates a symlink in the above directory. By doing this you are asking the server to start MySQL during runlevel 3 startup.
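
On Red Hat style systems you can achieve the same result with chkconfig instead of creating the symlink by hand, assuming the MySQL init script is installed as /etc/rc.d/init.d/mysql (a sketch):

>chkconfig --add mysql
>chkconfig --level 35 mysql on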

However, you can also start and stop MySQL manually after startup, using the files in /etc/rc.d/init.d directly. For example:

/etc/rc.d/init.d/mysql start
/etc/rc.d/init.d/mysql stop

-Death
