Now that Zimbra 6 has arrived, I'm really kicking off our efforts to migrate away from MS SBS 2003. At the same time, I've been looking at how to make better use of our server hardware.
We've got a beast of a machine running our NAS and intranet DB at the moment, and I'd like to virtualise it: our intranet DB is tiny (70MB), and sharing files out to the network doesn't need much grunt.
The server is used by between 10 and 20 users.
Here are the specs:
- Intel quad-core Xeon X3360 2.83GHz processor
- Tyan S5211 (Toledo i3210W) motherboard
- 8GB ECC memory
Storage is a 3ware 9690SA-8i card with 5x 1TB SATA drives in RAID 6 (for our intranet/NAS) and 2x 1TB SATA drives in RAID 1 (for Zimbra storage)... and another 1TB drive as a hot spare.
Initially I was thinking of going with VMware Server and having 2 VMs:
- Intranet DB (Apache, PHP, MySQL) and NAS services (Samba, Netatalk)
- Zimbra mail
But having read up on this a bit, I started to get concerned about how the VMs would actually access the storage, and about I/O performance.
Here are the various storage options I came up with:
1) Big massive VMware virtual disks:
I didn't want to just create a 3TB virtual disk for our NAS and a 1TB virtual disk for Zimbra: if something happened to the VM (corruption etc.) it would be much harder to get at our data. (I believe VMware's virtual disks also top out around 2TB anyway, so a single 3TB disk may not even be possible.)
2) VMware SCSI pass-through:
So then I started looking into VMware Server's SCSI pass-through feature, which sounded great: the arrays would just appear to the VMs as regular partitions (/dev/sdb1 etc.). But I was concerned about the performance and reliability of the pass-through layer; the last thing I want is VMware munging our data on its way from the VM through to the controller.
But it does have the advantage that we could always just mount the arrays under another OS installation and the data should be readable.
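From what I've read, the raw-device mapping in VMware's hosted products is just a small .vmdk descriptor file pointing at the physical device, so there isn't much in the data path. Something like this (a sketch from memory, so treat the exact fields as assumptions; the device path and sector count are made up):

    # rawnas.vmdk - descriptor only, no data; the VM reads/writes /dev/sdb directly
    # Disk DescriptorFile
    version=1
    createType="fullDevice"

    # Extent description: <access> <size in sectors> <type> <device> <offset>
    RW 5860533168 FLAT "/dev/sdb" 0

The guest then just sees another SCSI disk and can partition/mount it as normal.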
3) Create a basic SAN:
Take the 3ware RAID card and the drives/backplane units, move them to a separate box connected via gigabit ethernet, then access the arrays from within the VMs as NFS shares.
That means I can just use VMware ESXi as the hypervisor and mount the NFS shares as regular filesystems within the VMs. ESXi should give near-native performance on the hardware as well... better than VMware Server, anyway.
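The NFS side looks simple enough. On the storage box, something like this (the paths and subnet are made up for illustration):

    # /etc/exports on the storage box
    /export/nas     192.168.1.0/24(rw,async,no_subtree_check)
    /export/zimbra  192.168.1.10(rw,sync,no_root_squash,no_subtree_check)

then run exportfs -ra, and inside each VM mount it like any other filesystem:

    mount -t nfs storagebox:/export/nas /srv/nas

(I've used sync and no_root_squash on the Zimbra export on the assumption that mail wants safe writes and the Zimbra installer runs some things as root; that's a guess on my part, not gospel.)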
What's the best way of doing it?
I'm thinking #3 would be cool.
I have no experience with NFS/SAN setups, but we've got the time and spare hardware to give it a go.
We're only a small business with 15-20 potential Zimbra users, but I was a bit concerned about the performance of Zimbra constantly hammering its database and all the message blobs over a gigabit ethernet connection.
But then I thought: most SANs must be connected via gigabit! And the bottleneck in all of this is probably writing data to relatively slow 7200rpm SATA disks, not the gigabit connection!
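Some rough numbers to sanity-check that (ballpark figures, not measurements):

    Gigabit ethernet:  ~125 MB/s theoretical, more like 100-115 MB/s in practice
    7200rpm SATA disk: ~80-100 MB/s sequential, but only ~75-100 random IOPS
                       (~8-9ms average seek + ~4.2ms rotational latency per I/O)

For a mail server's small random I/O, the disks should hit their limit long before the wire does; it's only big sequential transfers where gigabit and a healthy RAID 6 array would be evenly matched.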
Also, I'm guessing NFS scales well enough for a handful of clients like ours.
I might even set up 802.3ad (LACP) link aggregation between the ESXi box and the storage box so I can get a 2Gbps link, which should be more than enough.
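One caveat from my reading: link aggregation balances traffic per flow, so a single NFS connection between two hosts still only uses one 1Gbps link; the extra capacity only helps with multiple simultaneous streams. And as far as I can tell, ESXi's standard vSwitch only supports static link aggregation (IP-hash teaming), not dynamic LACP. On the storage box side, the Linux bonding driver would handle it; something like this (interface names and addresses are made up):

    # /etc/modprobe.d/bonding.conf - load the bonding driver in 802.3ad mode
    alias bond0 bonding
    options bonding mode=802.3ad miimon=100

    # bring up the bond and enslave both NICs (needs the ifenslave tool;
    # the distro's network scripts would normally do this at boot)
    ifconfig bond0 192.168.1.20 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1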
What does everyone think is the best method?
Should I just stick with VMware Server and the SCSI pass-through approach, or go with a separate storage box and NFS?