Hi, I am looking to install 4 x (1+1) clusters and I am trying to get my head around how this works. My initial approach was to install /opt/zimbra on a shared disk and have Red Hat Cluster Manager manage the shared disk, the cluster IP address, and the service. The problem I had with this was that it takes too long to relocate the service: stopping the Zimbra service on one node and starting it on the other takes minutes.
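To be concrete, by "relocate" I mean the standard clusvcadm move (the service and node names here are just examples from my setup):

```
# Move the whole service (shared disk, IP and all) to the other
# node; the stop/start cycle this triggers is what takes minutes.
clusvcadm -r mail1.example.com -m node2
```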

So then I decided to read the documentation... and this is how it looks to me.
On the active node, I install Zimbra with the -c active argument to the install script. It asks me for the name of the service and creates a mount point linking /opt/zimbra-cluster/mountpoints/<servicename> to /opt/zimbra.
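If I have understood this first pass correctly, it amounts to something like this (the service name is just an example, and the link at the end is my reading of what the installer did):

```
# First pass on the active node: flag this machine as the active
# cluster member; the installer prompts for the cluster service name.
cd zcs
./install.sh -c active

# Afterwards /opt/zimbra appears to point at the (not yet mounted)
# per-service directory:
ls -ld /opt/zimbra
# /opt/zimbra -> /opt/zimbra-cluster/mountpoints/mail1.example.com
```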

It then asks you to mount all of your mount points (in my case, seven) and run the installer script again, at which point it installs the binaries etc. on the shared disk. Good.
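So the full sequence on the active node ends up looking roughly like this (device paths and the service name are placeholders for my setup):

```
# Mount the shared-disk filesystems for this service before the
# second installer pass (seven in my case; only a few shown).
SVC=mail1.example.com
mount /dev/vg_shared/lv_zimbra /opt/zimbra-cluster/mountpoints/$SVC
mount /dev/vg_shared/lv_store  /opt/zimbra-cluster/mountpoints/$SVC/store
mount /dev/vg_shared/lv_index  /opt/zimbra-cluster/mountpoints/$SVC/index
# ...and so on for the remaining filesystems...

# Second pass: the installer now lays the binaries down on the
# shared disk under the mounted service directory.
./install.sh -c active
```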

Then it came to the second node of the cluster. For some reason, the installer creates the same directory structure as on the active node, but in addition creates a standby directory and installs all the binaries there as well. Why?

What is the purpose of installing a set of binaries that have no use whatsoever?

This is my understanding, and I appreciate that I could be wrong here, but any advice or clarification would greatly help.

Also, I see there are no tools to set up /etc/cluster.conf, so should I just create the usual resources, i.e. the shared IP, the mount points, and the init scripts, and associate them with a service in the cluster configuration?
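What I have in mind is something along these lines (a fragment of the rm section for one of the four services; the node names, IP, and device paths are placeholders, and only one of the seven fs resources is shown):

```
<rm>
  <failoverdomains>
    <failoverdomain name="mail1-domain" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="192.168.1.50" monitor_link="1"/>
    <fs name="zimbra-fs" device="/dev/vg_shared/lv_zimbra"
        mountpoint="/opt/zimbra-cluster/mountpoints/mail1.example.com"
        fstype="ext3" force_unmount="1"/>
    <script name="zimbra-init" file="/etc/init.d/zimbra"/>
  </resources>
  <service name="mail1.example.com" domain="mail1-domain" autostart="1">
    <ip ref="192.168.1.50"/>
    <fs ref="zimbra-fs"/>
    <script ref="zimbra-init"/>
  </service>
</rm>
```

The idea being one such service block per (1+1) pair, four in total, each with its failover domain restricted to its own two nodes.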

Thanks in advance.