I'm looking to start doing test migrations from MS SBS 2003 to Zimbra, and for my test installations I was thinking of creating a sort of "mini SAN".
I'd like to add a couple of drives to the existing RAID card (3Ware 9690SA-8i) in our NAS box, and have a second machine host my test Zimbra installation.
This box will eventually have 15-20 users.
1) So if I export a 500GB unit (2x500GB in RAID 1) from our storage box over NFS and mount it on the Zimbra machine, will that perform OK?
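For anyone trying the same thing, here's roughly what I had in mind (a sketch only; the device name, mount points, and the 192.168.10.x addresses/subnet are my assumptions, not tested values):

```shell
# On the storage box: put a filesystem on the RAID 1 unit and export it.
mkfs.ext3 /dev/sdb1
mkdir -p /export/zimbra
mount /dev/sdb1 /export/zimbra
echo '/export/zimbra 192.168.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On the Zimbra box: mount it with options commonly suggested for mail stores
# (hard + intr so the server hangs rather than corrupts on an outage).
mount -t nfs -o rw,hard,intr,rsize=32768,wsize=32768 \
    192.168.10.1:/export/zimbra /opt/zimbra-store
```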
I'm planning to connect the two boxes through a dedicated VLAN on our Netgear L2 switch (GSM7224), with both connected over a dynamic 802.3ad LACP bond. That gives 2Gbps aggregate between the boxes, though from what I've read LACP balances traffic per flow, so a single NFS connection will still top out at 1Gbps.
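On the Linux side I was picturing something like this for the bond (again just a sketch; the eth0/eth1 interface names, the bond0 address, and the modprobe.conf location are assumptions for a typical distro of this vintage):

```shell
# /etc/modprobe.conf (or modprobe.d) -- load the bonding driver in LACP mode
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# Bring the bond up and enslave both gigabit NICs
ifconfig bond0 192.168.10.2 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

The switch-side LAG on the GSM7224 would need to be configured as dynamic LACP on the matching ports, with the dedicated VLAN assigned to the LAG.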
2) That should be more than enough bandwidth for our relatively slow 7200rpm SATA drives, right?
I'm assuming that reading and writing the data on the actual disks will be the bottleneck here, not the network.
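To put rough numbers on that assumption (all figures are my own estimates, not measurements):

```shell
# Back-of-envelope: a 7200rpm SATA drive of this era sustains very roughly
# 80 MB/s sequential (random mail-store I/O is far lower), while a single
# gigabit link carries about 118 MB/s of payload after TCP/IP framing.
disk_seq=80        # MB/s, assumed sequential throughput of one drive
gige=118           # MB/s, approximate usable payload on one 1Gbps link
echo "one gigabit link (${gige} MB/s) already exceeds one drive (${disk_seq} MB/s)"
```

So even one leg of the bond should outrun the mirror for sequential work, which supports the idea that the disks, not the wire, are the bottleneck.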
3) Also, I've read many threads on here about people using SAN storage, and I guess the only way they connect that storage to the Zimbra box is over gigabit ethernet?
4) My main concern is latency... will doing this give me horrible latency, since every packet has to go through the switch rather than straight to a RAID card connected to the motherboard?
If latency were a real problem, though, I assume nobody would use the "SAN" approach... but I thought I'd check on here first.
Since the Zimbra box will be constantly accessing big MySQL database files and all the binary attachment blob files, I thought latency could be a problem.
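One way I figure I can sanity-check this before committing: measure the round trip across the VLAN, and watch NFS retransmissions once it's mounted (192.168.10.1 stands in for the storage box's address):

```shell
# On a quiet gigabit LAN the RTT should be well under 1 ms, which is small
# next to a 7200rpm drive's ~8 ms average seek time.
ping -c 20 192.168.10.1

# After mounting: client-side RPC statistics; a growing retrans count
# would mean the network path is actually hurting.
nfsstat -r
```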
5) If I do this, how strongly should I consider RAID 10 over RAID 1?
I'd have to change our NAS from RAID 6 to RAID 5 to free up four drive bays for a RAID 10.
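For what it's worth, the raw arithmetic on what that rearrangement buys (assuming 500GB drives, with mirroring halving usable space in both layouts):

```shell
# Usable capacity, ignoring filesystem overhead:
raid1_usable=$(( 2 * 500 / 2 ))     # 2-drive mirror   -> 500 GB, 1 spindle per write
raid10_usable=$(( 4 * 500 / 2 ))    # 4-drive RAID 10  -> 1000 GB, 2 striped mirrors
echo "RAID 1: ${raid1_usable} GB   RAID 10: ${raid10_usable} GB"
```

So RAID 10 doubles both capacity and the number of spindles serving reads/writes, at the cost of weakening the main array's redundancy from RAID 6 to RAID 5.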
Mainly I'm looking at this as an experiment, since I've never messed around with NFS exports or creating VLANs on our switch before.
But if you all think this is never going to work then I'll go for local storage instead.
So I'd appreciate some answers to my Qs and any other comments/suggestions anyone might have!