ZCS Multi-Server Installation Guide, Network Edition 4.0



Zimbra Cluster Installation - Multi-Node Configuration
For Red Hat Cluster Suite Integration
Zimbra Collaboration Suite can be integrated with Red Hat® Enterprise Linux® Cluster Suite version 4, update 3 to provide high availability.
In a cluster implementation, all Zimbra mailbox servers are part of a cluster under the control of the Red Hat Cluster Manager.
Note: Red Hat Cluster Suite consists of Red Hat Cluster Manager and Linux Virtual Server Cluster. For Zimbra, only Red Hat Cluster Manager is used. In this guide, Red Hat Cluster Suite refers only to Cluster Manager.
Pre-configuration Requirements
All servers must meet the requirements described in the installation prerequisites chapter, in addition to the requirements described here.
Go to the Red Hat Cluster Suite website, https://www.redhat.com/software/rha/cluster, to view specific system requirements for cluster configurations that use Red Hat Cluster Suite. If you are not familiar with Red Hat Cluster Suite, read the documentation to understand how each of the components works to provide high availability.
Hardware for the Cluster Environment
For Red Hat Cluster Suite integration, the following hardware is required.
SAN (shared disk storage device) to store the data for each of the Zimbra mailbox servers. The size of the shared storage device depends on your expected site capacity.
Network power control switch to connect the cluster nodes. The power control switch is used as the fence device for I/O fencing during a failover. Use either an APC or a WTI network power switch.
Configure the network power control switch according to the manufacturer’s requirements.
Software Requirements For Clustering
Red Hat Enterprise Linux 4, Update 3 must be installed on each mailbox server node, and all nodes must be configured with the same netmask and broadcast address.
Preparing the SAN
Configure the SAN device and create the partitions for the volumes. Refer to the Red Hat Cluster Suite documentation for configuration requirements. The SAN device must be partitioned to provide a volume set for each Zimbra mailbox service in the cluster (in the example in this chapter, eight volumes per service).
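For illustration only: assuming the SAN presents its partitions to the nodes as /dev/sdb1, /dev/sdb2, and so on (hypothetical device names; yours will differ), you would create a file system on each volume before the cluster manages it:
mkfs.ext3 /dev/sdb1
mkfs.ext3 /dev/sdb2
(repeat for every volume in every service's volume set)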
Overview of Cluster Installation
Red Hat Cluster Suite integration requires planning the cluster design and precisely executing the configuration. The Zimbra Cluster software automates the setup on the nodes. The scripts in the Zimbra Cluster software configure the Zimbra Collaboration Suite servers for Red Hat Cluster integration. In most cases, you do not need to use Red Hat’s graphical Cluster Configuration Tool to configure the Zimbra cluster. If you do, refer to the Red Hat Cluster Suite documentation for detailed configuration and management instructions.
The Zimbra Cluster software includes:
Zimbra Cluster install script, used before the Zimbra Collaboration Suite installer to create the mount points for the SAN volumes.
Zimbra Cluster post install script, used after Zimbra Collaboration Suite is installed on the servers to move the data files from the local disk to the volumes created on the SAN.
Zimbra Cluster configurator script that runs on one active node. The configurator script automates the Red Hat Cluster configuration process, taking you through the steps to create the /etc/cluster/cluster.conf file. In addition, the configurator script copies the cluster.conf file to each node.
Cluster Scenario
The examples in this chapter describe configuring a cluster environment with two active nodes, one standby node, and two cluster services, plus separate LDAP and MTA servers that are not under the control of Red Hat Cluster Suite. The domain name is example.com.
The following Zimbra servers are configured:
One Zimbra LDAP server, ldap.example.com
One Zimbra MTA server, mta.example.com
Active mailbox node 1, node1.example.com
Active mailbox node 2, node2.example.com
Standby mailbox node, node3.example.com
Cluster Service 1, mail1.example.com
Cluster Service 2, mail2.example.com
Sixteen volumes are configured on the SAN for this example cluster, eight for each of the two services.
Installing and Configuring the Software
You should install and configure ZCS servers in the following order:
1.
LDAP server.
2.
Mailbox servers (the active and standby cluster nodes, as described in this chapter).
3.
MTA servers. The MTA server is last because you need to configure one of the active cluster services’ hostnames as the MTA auth host.
See the Multiple-Server Installation chapter for instructions on how to install the Zimbra LDAP and Zimbra MTA servers.
Install the Active Mailbox Nodes
For each active mailbox node, install and configure the following software:
Installing the Red Hat Cluster Suite Software
On each node, install the required RPMs and the rgmanager RPM for Red Hat Cluster Suite with DLM. See the Determining RPMs To Install section of the Red Hat Cluster Suite documentation for descriptions and installation instructions.
Installing the Zimbra Cluster Software
The Zimbra Cluster software consists of install.pl, postinstall.pl, and configure-cluster.pl scripts to automate the cluster configuration process and files that are used during the Zimbra cluster service operation.
The software is a standard compressed tar file. Save the file to the computer from which you will install the software.
1.
Log in as root to the Zimbra server and cd to the directory where the Zimbra zcs-cluster.tgz file is saved. Type the following commands:
tar xzvf zcs-cluster.tgz to unpack the file
cd zcs-cluster to change to the correct directory
./install.pl to begin the installation
The necessary scripts, files, and Red Hat Cluster Suite patches are installed.
 
Each cluster node needs a zimbra user and a zimbra group. The same user ID and group ID must be used on all cluster nodes so that files on the SAN owned by the zimbra user and group are accessible on every node.
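For example, if a zimbra user already exists on a node, you can check which IDs it uses before accepting the defaults:
id zimbra
grep '^zimbra:' /etc/passwd /etc/group
The uid and gid reported must be identical on every node in the cluster.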
2.
Type the Zimbra group ID (GID) to be used. The same group ID number must be configured on every node. The default is 500. Change the default if this group ID is not available on all the nodes in the cluster.
3.
Type the Zimbra user ID (UID) to be used. The same user ID number must be configured on every node. The default is 500. Change the default if this user ID is not available on all the nodes in the cluster.
4.
Type the first cluster service name and press Enter. In our example, this is mail1.example.com, the public hostname for the service. The eight volume mount points for the cluster service are created.
5.
Type the next cluster service name, mail2.example.com in our example, and press Enter. Its mount points are created.
6.
Type Done when finished.
 
On every mailbox server node you need to create mount points for all cluster services. Enter one service name per prompt.
Installing the Zimbra Collaboration Suite Software
Important: Before proceeding, review Planning for the Installation to learn about the Zimbra packages that are installed. If you install the Logger package, it must be installed on each mailbox node but only enabled on the first active node.
For each active node in the cluster, install the Zimbra Collaboration Suite as follows. For a smooth installation, note these configuration points.
When the Zimbra software is installed, the installation detects the hostname configured for the server and automatically inserts this name as the default hostname for various settings. The server hostname must be changed to the cluster service name configured in Step 4 of the Installing the Zimbra Cluster Software section.
The LDAP server name and LDAP password are required. To find the LDAP password, after the LDAP server is installed, on the LDAP server, type su - zimbra, then type zmlocalconfig -s ldap_root_password.
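For example, the exchange looks like this (the value shown is a placeholder for your actual password):
su - zimbra
zmlocalconfig -s ldap_root_password
ldap_root_password = <password>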
1.
Log in as root to the server and cd to the directory where the Zimbra zcs.tgz file is saved. Type the following commands.
tar xzvf zcs.tgz to unpack the file
cd zcs to change to the correct directory
./install.sh to begin the installation
The installation process checks to see if Sendmail, Postfix, and MySQL software are running. If any of these are running, you are asked to disable them. The default is Yes, to disable them. Disabling MySQL is optional, but highly recommended.
The install.sh script displays a reference to the Zimbra Public License with an address to view the license, and then reviews the installed software to verify that the prerequisite software is installed. If any is missing, the installation stops.
2.
When asked to select the packages to install, type N for the Zimbra LDAP and Zimbra MTA packages. Zimbra Store, Zimbra SNMP, Zimbra Logger, and Zimbra Spell should be marked Y. Press Enter. (Of these packages, only Zimbra Store is required.)
 
The selected packages are installed on the mailbox server.
At this point the Main menu displays the default entries for the mailbox server you are installing.
3.
Change the Hostname to one of the cluster service names entered in Step 4 of the Installing the Zimbra Cluster Software section (in our example, mail1.example.com). Type 1, then type the cluster service name and press Enter.
4.
Type 2 and then type the LDAP host name.
Type 4 and then type the LDAP password.
As you enter each of these values the server tries to contact the LDAP server. You can proceed when the LDAP server is successfully contacted.
5.
Modify zimbra-store. Type 5 to configure the SMTP host and to set the web server mode if you are not using the default (http).
Type 2 and then type the Zimbra MTA host name.
Type 3, if you are changing the default mode. The communication protocol options are HTTP, HTTPS, or mixed. Mixed mode uses HTTPS for logging in and HTTP for normal session traffic. All modes use SSL encryption for back-end administrative traffic.
Important: For clustering, the Web mode must be identical on all nodes.
 
The store menu reflects the cluster service name, for example:
9) Spell server URL: http://mail1.example.com:7780/aspell.php
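If you need to change the web server mode again after the installer has run, ZCS provides the zmtlsctl command. A sketch, run as the zimbra user, assuming you want mixed mode and can restart the server afterward:
su - zimbra
zmtlsctl mixed
zmcontrol stop
zmcontrol start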
6.
If you installed the SNMP package, you must modify the default notification addresses. Type 6 to modify the SNMP settings.
Configure whether to be notified by SNMP or SMTP. The default is No. If you enter yes, you must enter additional information.
For SMTP, enter the SMTP source email address and destination email address. Use the same addresses as configured on the LDAP server.
 
7.
When Logger is installed, it must be enabled on the first active node only. All other nodes must install Logger but disable it. To disable Logger, type the menu number for Logger and press Enter.
8.
If you have no other changes, type a to apply the configuration changes. When Save configuration data? displays, press Enter.
9.
When The system will be modified - continue? appears, type Y and press Enter.
10.
When Operations logged to /tmp/zmsetup.log.xxx displays, press Enter. The server is modified. Installing all the components and configuring the server can take a few minutes.
11.
When Installation complete - press return to exit displays, press Enter.
Mounting Volumes for Cluster Service
Mount the SAN volumes for this node's cluster service on the mount points that install.pl created; for example, the volumes for the mail1.example.com service are mounted under /opt/zimbra-cluster/mountpoints/mail1.example.com.
Important: Verify that the mounted volumes are empty before proceeding.
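The exact mount commands depend on how you partitioned the SAN. A minimal sketch for the first cluster service, assuming the hypothetical device names from Preparing the SAN (the mount point paths are the ones install.pl created):
mount /dev/sdb1 /opt/zimbra-cluster/mountpoints/mail1.example.com/store
mount /dev/sdb2 /opt/zimbra-cluster/mountpoints/mail1.example.com/index
(repeat for each remaining volume in the service's volume set)
df | grep mail1.example.com
The df command confirms that every volume for the service is mounted before you run the post install script.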
Running Zimbra Cluster Post Install Script
1.
To start the Zimbra post install cluster configuration script, cd to the zcs-cluster directory created in the Installing the Zimbra Cluster Software section. Type ./postinstall.pl to begin the post install.
 
2.
Type Y to confirm that the SAN volumes are mounted for the selected service.
The Zimbra processes are stopped, various cluster-specific adjustments are made to the Zimbra Collaboration Suite installation, and the data files are moved to the service-specific volumes.
 
.... chown zimbra:zimbra /opt/zimbra-cluster/mountpoints/mail1.example.com/store
.... mv -f /opt/zimbra/index/* /opt/zimbra-cluster/mountpoints/mail1.example.com/index
.... chown zimbra:zimbra /opt/zimbra-cluster/mountpoints/mail1.example.com/index
.... mv -f /opt/zimbra/backup/* /opt/zimbra-cluster/mountpoints/mail1.example.com/backup
.... mv -f /opt/zimbra/logger/db/data/* /opt/zimbra-cluster/mountpoints/mail1.example.com/logger/db/data
3.
Repeat these steps for every active node in the cluster.
Configuring the Standby Mailbox Server Node
For the standby mailbox server node, install and configure the following software:
Installing the Red Hat Cluster Suite Software
Install the required RPMs and the rgmanager RPM for Red Hat Cluster Suite with DLM. See the Determining RPMs To Install section of the Red Hat Cluster Suite documentation for descriptions and installation instructions.
Installing the Zimbra Cluster Software
The Zimbra Cluster software is installed and run on each standby node. The software automates the cluster configuration process. The software is a standard compressed tar file. Save the file to the computer from which you will install the software.
The standby node is configured exactly the same as the active nodes: you define the same group ID and user ID and identify the same cluster service names.
1.
Log in as root to the Zimbra mailbox server and cd to the directory where the Zimbra zcs-cluster.tgz file is saved. Untar the file (tar xzvf zcs-cluster.tgz), cd to the zcs-cluster directory, and type ./install.pl to begin.
2.
Type the Zimbra group ID (GID) to be used. The same group ID number must be configured on every node. The default is 500. Change the default only if you changed it for the active nodes.
3.
Type the Zimbra user ID (UID) to be used. The same user ID number must be configured on every node. The default is 500. Change the default only if you changed it for the active nodes.
4.
Type the first cluster service name and press Enter. In our example, this is mail1.example.com. The mount points are created.
5.
Type the next cluster service name, mail2.example.com in our example, and press Enter. Its mount points are created.
6.
Type Done when finished.
Installing the Zimbra Collaboration Suite Software on the Standby Node
Install the Zimbra Collaboration Suite on the standby node. For a detailed description of the installation process, review the Zimbra Collaboration Suite Multi-Server Installation Guide.
Important: For a smooth installation, note these configuration points.
When the Zimbra software is installed, the installation detects the hostname configured for the server and automatically inserts this name as the default hostname for various settings. For the standby node, do not change the default hostname.
The LDAP server name and LDAP password are required. To find the LDAP password, after the LDAP server is installed, on the LDAP server, type su - zimbra, then type zmlocalconfig -s ldap_root_password.
1.
Log in as root to the Zimbra server and cd to the directory where the Zimbra zcs.tgz file is saved. Type the following commands.
tar xzvf zcs.tgz to unpack the file
cd zcs to change to the correct directory
./install.sh to begin the installation
The installation process checks to see if Sendmail, Postfix, and MySQL software are running. If any of these are running, you are asked to disable them. The default is Yes, to disable them. Disabling MySQL is optional, but highly recommended.
The install.sh script displays a reference to the Zimbra Public License with an address to view the license, and then reviews the installed software to verify that the prerequisite software is installed. If any is missing, the installation stops.
2.
When asked to select the packages to install, install the same packages you installed on the active nodes. In our example, type N for the Zimbra LDAP and Zimbra MTA packages. Zimbra Store, Zimbra SNMP, Zimbra Logger, and Zimbra Spell should be marked Y. Press Enter. (The Logger, Spell, and SNMP packages are optional, but if they are installed on the active nodes, they must also be installed on the standby node.)
 
3.
The selected packages are installed on the mailbox server. At this point, the Main menu displays the default entries for the mailbox server you are installing.
4.
Type 2, and then type the LDAP host name.
Type 4, and then type the LDAP password.
As you enter each of these values, the server tries to contact the LDAP server. You can proceed when the LDAP server is successfully contacted.
5.
Modify zimbra-store. Type 5 to configure the SMTP host and to set the web server mode if you are not using the default (http).
Type 2 for the SMTP host, and then type the Zimbra MTA host name.
Type 3, if you are changing the default mode. The communication protocol options are HTTP, HTTPS, or mixed. Mixed mode uses HTTPS for logging in and HTTP for normal session traffic. All modes use SSL encryption for back-end administrative traffic.
Important: For clustering, the Web mode must be identical on all nodes.
 
The store menu shows the standby node's own hostname, for example:
9) Spell server URL: http://node3.example.com:7780/aspell.php
6.
If you installed the SNMP package, you must modify the default notification addresses. Type 6 to modify the SNMP settings.
Configure whether to be notified by SNMP or SMTP. The default is No. If you enter yes, you must enter additional information.
For SMTP, enter the SMTP source email address and destination email address. Use the same addresses as configured on the LDAP server.
 
3) SNMP Trap hostname:                      snmptrap.com
5) SMTP Source email address:               admin@example.com
6) SMTP Destination email address:          admin@example.com
7.
If Logger is installed, it must be disabled on all standby nodes. To disable Logger, type the menu number for Logger and press Enter.
8.
When you have no other changes, type a to apply the configuration changes. When Save configuration data? displays, press Enter.
9.
When The system will be modified - continue? appears, type Y and press Enter.
10.
When Operations logged to /tmp/zmsetup.log.xxx displays, press Enter. The server is modified. Installing all the components and configuring the server can take a few minutes.
11.
When Installation complete - press return to exit displays, press Enter.
Running the Cluster Post Install Script
Now you prepare this server to be the standby server in the cluster. Start the Zimbra cluster post install script.
Note: Unlike installation of active nodes, no SAN volumes are mounted on standby nodes prior to running the post install script.
1.
Log in as root to the Zimbra server and cd to the directory where the Zimbra zcs-cluster.tgz file is saved. Type the following commands:
cd zcs-cluster to change to the correct directory
./postinstall.pl to begin the post install
The Zimbra processes are stopped, various cluster-specific adjustments are made to the Zimbra Collaboration Suite installation, and unnecessary data files are deleted.
 
Modify Zimbra LDAP and Zimbra MTA Servers for Logger Service
You must modify the syslog setup on the Zimbra LDAP server and Zimbra MTA servers.
1.
On the LDAP server, as root, run /opt/zimbra/bin/zmsyslogsetup.
2.
On the MTA server, as root, run /opt/zimbra/bin/zmsyslogsetup.
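zmsyslogsetup adjusts the system syslog configuration so that Zimbra log messages reach the Logger host. As a quick sanity check, you can confirm that the syslog configuration now references Zimbra logging (an assumption about where the change lands; on Red Hat Enterprise Linux 4, the relevant file is /etc/syslog.conf):
grep -i zimbra /etc/syslog.conf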
Configuring Red Hat Cluster for Zimbra Collaboration Suite
When all the software is installed and the Zimbra installation on the servers is configured, use the Zimbra cluster configurator script to prepare Red Hat Cluster Suite to run the Zimbra Collaboration Suite. The cluster configurator script is run on only one of the active mailbox nodes.
The cluster configurator asks a series of questions to gather information about the cluster and generate the cluster configuration file, /etc/cluster/cluster.conf. This is the main configuration file of Red Hat Cluster Suite.
The cluster configurator installs the generated configuration file on each cluster node as /etc/cluster/cluster.conf.
Note: The Zimbra cluster configurator should generate a correct configuration file for most installations, but some cases are more complicated. For instance, if you are using multiple fence devices or a highly customized SAN setup, the configurator script will not work. In those cases, use the configurator to generate an initial cluster.conf, and then run the graphical Red Hat Cluster Configuration Tool to make the necessary changes. Using the Zimbra cluster configurator script first is recommended, because the script automates the steps for the basic configuration. After using the Red Hat Cluster Configuration Tool, you must manually copy the final cluster.conf file to each cluster host.
The Zimbra configurator script guides you through creating the cluster configuration file. The following is configured:
Fence Device - This is the network power switch. Each mailbox node in the cluster is plugged into the fence device. The cluster uses the fence device for I/O fencing during a failover.
Cluster Nodes - This section is used to add members to the cluster and configure a fence device setting for each member.
Managed Resources - The preferred node for each service and the list of volumes to be mounted from the SAN are configured.
To use the configurator script
1.
To start the Zimbra configuration script, cd to the zcs-cluster directory created in the Installing the Zimbra Cluster Software section. Type ./configure-cluster.pl. The configurator checks to verify that the server installation is correct.
2.
All servers in the cluster must be installed before you can proceed. When Is installation finished on all cluster nodes? displays, type y to continue.
3.
Enter a name to identify this cluster. Press Enter. Each cluster on the same network must have a distinct name.
Important: Make sure you enter a name that is not in use! Each Red Hat Cluster Suite cluster on the same network must have a distinct name to avoid interfering with another Red Hat Cluster Suite cluster.
 
[root@node1 zcs-cluster]# ./configure-cluster.pl
This script will guide you through creating an initial configuration file for Red Hat Cluster Suite. A series of questions will be asked to collect the necessary information. At the end, the configuration data will be saved to a file and the file will be copied to all cluster nodes, as /etc/cluster/cluster.conf on each node.
You must finish installation on all cluster nodes before configuring the cluster. Is installation finished on all cluster nodes? (Y/N) y
4.
Select the network power switch type that is used as the fence device. Configure the fence device host name/IP address, login, and password.
 
A fence device is needed by the cluster for I/O fencing during a failover. The power cord of each cluster node must be plugged into an APC or WTI network power switch device, and the cluster will control the power switch to reboot the node being fenced. While Red Hat Cluster Suite supports a variety of fence devices, for the purpose of this configuration process assume you are using APC or WTI, and also assume all nodes are plugged into a single device. If you are using a different fence device or more than one device, you can correct the generated configuration file later with the system-config-cluster GUI tool.
5.
Enter the fully-qualified hostname for each of the nodes in the cluster and the plug number associated with the node’s power cord. When all the nodes are identified, type Done.
 
6.
For each service, choose the preferred node for it to run on and enter the list of volumes to be mounted from the SAN.
A Zimbra cluster service must mount service-specific data volumes. Two choices are provided in this configuration process: all service data can be placed on a single volume, or multiple volumes can be used for different types of data files. In the multiple-volume case, eight volumes are used per service.
Note: You can place all service data on a single volume or choose to place the service data on eight volumes. A single volume is recommended for testing environments only. A more customized volume configuration is possible, but the configurator script supports only single-volume or eight-volume sets. This is a limitation of the configurator script, not of Zimbra Collaboration Suite or of Red Hat Cluster Suite.
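If you want to double-check the mount point names you will be mapping volumes to, you can list the directories that install.pl created for a service (the full volume set appears there; only some of the directories are shown in the output excerpts in this chapter):
ls /opt/zimbra-cluster/mountpoints/mail1.example.com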
7.
A prompt is displayed for each volume in the service’s volume set. Enter the SAN volume device name for the mount point in the prompt. (These names are the volumes defined when you created the eight volumes on the SAN, as described in Preparing the SAN.)
 
8.
Repeat Steps 6 and 7 for each remaining cluster service (mail2.example.com in our example).
9.
When finished choosing the services, select Done. Press Enter, and then press Enter again to view a summary of the configuration.
 
10.
After viewing the summary, save the configuration to a file. You can either accept the default file name or specify a different one.
Note: If you made a mistake, press Ctrl-C to abort the configurator script and start over.
Copying the files to all cluster nodes
The configuration file must now be copied to all cluster nodes. The Zimbra configurator script can copy the files, or you can do it manually. This is a continuation of the configurator script.
11.
The script offers to do the copy via scp. To automatically copy the cluster.conf file to all nodes, type y. Enter the root password of each node when asked.
 
 
This file must be copied to all cluster nodes now. This script can do it for you using scp, or you can do it manually.
Copying /tmp/cluster.conf.17815 to node1.example.com:/etc/cluster/cluster.conf ....
scp /tmp/cluster.conf.17815 root@node1.example.com:/etc/cluster/cluster.conf
Copying /tmp/cluster.conf.17815 to node2.example.com:/etc/cluster/cluster.conf ....
scp /tmp/cluster.conf.17815 root@node2.example.com:/etc/cluster/cluster.conf
Copying /tmp/cluster.conf.17815 to node3.example.com:/etc/cluster/cluster.conf ....
scp /tmp/cluster.conf.17815 root@node3.example.com:/etc/cluster/cluster.conf
If necessary, use system-config-cluster GUI tool to further customize the cluster configuration. You must manually copy the updated cluster.conf to all nodes.
Important: Use the Red Hat Cluster Configuration Tool if you want to further customize the cluster configuration after the configuration file is generated and copied to all cluster nodes. If you customize the configuration file, you must then manually copy the updated cluster.conf to all nodes.
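If you do customize the file, a manual copy from the node holding the updated cluster.conf might look like this (host names from our example scenario):
scp /etc/cluster/cluster.conf root@node2.example.com:/etc/cluster/cluster.conf
scp /etc/cluster/cluster.conf root@node3.example.com:/etc/cluster/cluster.conf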
Start the Red Hat Cluster Suite Daemons
After the cluster configuration file is copied to every node, you can start the Red Hat Cluster Suite daemons.
Important: In order to start the cluster daemons correctly, you must be logged on to each node before proceeding, and to see any errors, you should have two sessions open for each node. In our example, you would have six sessions open. You enter a command on one node, then enter the same command on the second, and so forth. You must enter each command on all nodes before proceeding to the next command.
Run tail -f /var/log/messages on each node to watch for any errors.
To start the Red Hat Cluster Service on a member, type the following commands in this order. Remember to enter the command on all nodes before proceeding to the next command.
1.
service ccsd start. This is the cluster configuration system daemon that synchronizes configuration between cluster nodes.
2.
service cman start. This is the cluster heartbeat daemon. The command may not complete on all nodes immediately. It returns when all nodes have established heartbeat with one another.
3.
service fenced start. This is the cluster I/O fencing system that allows cluster nodes to reboot a failed node during failover.
4.
service rgmanager start. This manages cluster services and resources.
The service rgmanager start command returns immediately, but initializing the cluster and bringing up the Zimbra Collaboration Suite application for the defined cluster services may take some time.
After all commands have been issued on all nodes, run the clustat command on one node to verify that all cluster services have been started.
Continue to enter the clustat command until it reports that all nodes have joined the cluster and all services have been started.
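Rather than retyping clustat, you can poll it every few seconds; a convenience only, assuming the standard watch utility is installed:
watch -n 10 clustat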
Because nodes may not join the cluster in sequence, some of the services may start on nodes other than their configured preferred nodes. This is expected; each such service is eventually restarted on its configured preferred node.
When clustat shows all services are running on the preferred nodes, the cluster configuration is complete.
What to do if a cluster service does not relocate to its preferred node
If a service does not relocate to its preferred node after several minutes, you can issue Red Hat Cluster Suite utility commands to manually correct the situation.
Note: Services not starting on their preferred nodes is usually an issue only the first time the cluster is started.
For each cluster service that is not running on the correct preferred node, run clusvcadm -d <cluster service name>, as root on one of the cluster nodes.
 
[root@node1.example.com]# clusvcadm -d mail1.example.com
This disables the service by stopping all associated Zimbra processes, releasing the service IP address, and unmounting the service’s SAN volumes.
To enable a disabled service, run clusvcadm -e <service name> -m <node name>. This command can be run on any cluster node. It instructs the specified node to mount the SAN volumes of the service, bring up the service IP address, and start the Zimbra processes.
 
[root@node1.example.com]# clusvcadm -e mail1.example.com -m node1.example.com
Testing the Cluster Setup
To perform a quick test to see if failover works:
1.
Open a terminal session on each node and log in as root.
2.
Power off one of the active mailbox nodes to simulate a node failure.
3.
Run tail -f /var/log/messages on a surviving node. You will observe the cluster become aware of the failed node, I/O fence it, and bring up the failed node's service on a standby node.
View Zimbra Cluster Status
Go to the Zimbra administration console to check the status of the Zimbra cluster. The Server Status page shows the cluster server, the node, the services running on the cluster server, and the time the cluster status was last checked. The standby nodes are displayed as standby. If a service is not running, it is shown as disabled. Managing and maintaining the Zimbra cluster is done through the Red Hat Cluster Manager.
 
 
 
 

Copyright © 2006 Zimbra Inc.