
Thread: RAID setup for ~50 users

  1. #1
    jpbuse (Loyal Member, San Diego, CA; joined Jan 2008)

    RAID setup for ~50 users

    I've read various posts, threads and wiki articles on which RAID config to use for the ZCS server. Looks like RAID5 is not recommended for installs over 100 users. We have around 40 users right now and 60 would be our top end.

    System is running ZCS NE 5.0.15, IMAP only (no POP). The average user account is 5 GB and the current quota is 15 GB. We're an advertising/marketing agency, so email usage and storage are very high. I'm currently planning for a 20 GB/account quota at 50 users, so 1 TB of available storage is needed. In reality, many users will have less than 5 GB of mail storage.

    The server is CentOS 5.2 on Dell PowerEdge 2850/2950 hardware, and I'll be using 15K SAS drives throughout. Is RAID5 OK for the mail store, or is RAID10 a better way to go? I'll most likely put the server OS onto a RAID1 config with 73GB 15K SAS drives and keep /opt on the larger RAID.

    Backups will be handled either via NFS or attached storage and not kept on the local RAID.

    Do I need to spend the extra money to go for RAID 10 for a ~50 user configuration? It seems that my user storage is much higher than the normal ZCS setup I've seen.

    Thanks.

  2. #2
    Klug (Moderator, Beaucaire, France; joined Mar 2006)

    On a 2950, there's room for six 3.5" drives.

    If you're using Network Edition, _one_ pair of 146 GB 15K SAS drives is enough, in RAID1.

    Get a pair of 1TB 7.2K for HSM, in RAID1 too.
    You can get either SAS (NearLine) or SATA2 for these.

    And you'll have two slots free for upgrading your HSM volume.

    Additional info, added later
    Dell might say mixing SAS and SATA2 is not (officially) supported, so get 7.2K SAS (NearLine) drives for the HSM.
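
    To make that concrete, the mounts could end up looking roughly like this (a sketch only: the device names, the partition split and the /opt/zimbra/hsm path are my assumptions, and a PERC controller will present each mirror as a single disk):

    Code:
    # /etc/fstab sketch for the two-mirror layout above (CentOS 5, ext3)
    /dev/sda1   /                 ext3   defaults           1 1   # 2x146GB 15K SAS mirror: OS
    /dev/sda2   /opt              ext3   defaults,noatime   1 2   # same mirror: ZCS install, db, index, store
    /dev/sdb1   /opt/zimbra/hsm   ext3   defaults,noatime   1 2   # 2x1TB 7.2K mirror: HSM volume
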
    Last edited by Klug; 04-30-2009 at 11:15 AM.

  3. #3
    jpbuse (Loyal Member, San Diego, CA)

    Thanks. 146GB won't give me enough storage, though. /opt/zimbra is currently at 220 GB and /opt/zimbra/store is at 170 GB, and I need to allow room for growth. I assume both the index and the store should be located on the fast RAID1 config, correct? I could go with 450GB 15K SAS drives, but again I think that is a little limiting.

    Where is the HSM actually stored, or is that up to me? I'm a tad new to the NE version of ZCS.

  4. #4
    Klug (Moderator, Beaucaire, France)

    The idea of HSM is to automagically move "old" mails (more than 30 days old by default) to another volume.

    For this other volume, you can use slower disks (in a slower RAID level) because these emails are not accessed much and because they're only emails (no index, no database, just the messages).

    However, and that's the beauty of HSM, the emails are still accessible to the users. They won't even notice they're on a slower volume 8)

    On one of my servers (a Dell 2950 with 2x146GB SAS + 2x750GB SATA2), there are 80 active users (with lower quotas than your users).
    I've set up the HSM volume (through the web admin UI) to point to the SATA2 volume.
    /opt/zimbra is 27 GB and HSM is 52 GB...
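
    If you'd rather script it than click through the admin console, the same setup can be done from the CLI as the zimbra user. This is from memory, so double-check the flags against the zmvolume help on your 5.0.x install; the volume name, the mount point and <volume_id> below are just placeholders:

    Code:
    # register the slow RAID1 mount as a secondary (HSM) message volume
    zmvolume -a -n hsm1 -t secondaryMessage -p /opt/zimbra/hsm

    # list volumes to get its id, then make it the current secondary volume
    zmvolume -l
    zmvolume -sc -id <volume_id>

    # optionally change the "older than 30 days" threshold, e.g. to 60 days
    zmprov ms `zmhostname` zimbraHsmAge 60d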

  5. #5
    fcash (Elite Member, BC, Canada; joined Jun 2007)

    Throw a pair of CompactFlash disks in there, using CF-to-IDE or CF-to-SATA adapters. Get the biggest you can afford; anything over 2 GB will be fine. Configure them as a RAID1 (software RAID is fine).

    Install the OS to the CF disks. I've yet to see a Linux install that needs more than a couple GB.

    Then you can use 4x SAS disks in a RAID10 for /opt/zimbra, and 2x SATA disks in a RAID1 for HSM. You'll also want to mount /home and /var off one of these arrays, so that you don't churn the CF too quickly. And consider using tmpfs for /tmp.

    That way, you get the most disk space for long-term storage (can use up to 2 TB SATA disks), but you also get a lot of fast disk space for the message store.

    There's no sense "wasting" disk space on the OS install.
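
    Roughly, the CF part looks like this (a sketch only; the sdX names are assumptions, so check dmesg to see how the adapters actually enumerate, and adjust the fstab lines to your controller):

    Code:
    # mirror the two CF cards with Linux software RAID
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # /etc/fstab entries would then look roughly like:
    #   /dev/md0      /      ext3    defaults,noatime     1 1   # OS on the CF mirror
    #   <raid10-dev>  /var   ext3    defaults,noatime     1 2   # keep churn off the CF
    #   tmpfs         /tmp   tmpfs   defaults,size=512m   0 0   # /tmp in RAM
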
    Freddie

  6. #6
    Klug (Moderator, Beaucaire, France)

    CF is not supported in Dell hardware (by Dell support).

  7. #7
    LMStone (Moderator, Portland, ME; joined Sep 2006)

    Quote Originally Posted by jpbuse:
    I've read various posts, threads and wiki articles on which RAID config to use for the ZCS server. Looks like RAID5 is not recommended for installs over 100 users. We have around 40 users right now and 60 would be our top end.

    System is running ZCS NE 5.0.15, IMAP only (no POP). The average user account is 5 GB and the current quota is 15 GB. We're an advertising/marketing agency, so email usage and storage are very high. I'm currently planning for a 20 GB/account quota at 50 users, so 1 TB of available storage is needed. In reality, many users will have less than 5 GB of mail storage.

    The server is CentOS 5.2 on Dell PowerEdge 2850/2950 hardware, and I'll be using 15K SAS drives throughout. Is RAID5 OK for the mail store, or is RAID10 a better way to go? I'll most likely put the server OS onto a RAID1 config with 73GB 15K SAS drives and keep /opt on the larger RAID.

    Backups will be handled either via NFS or attached storage and not kept on the local RAID.

    Do I need to spend the extra money to go for RAID 10 for a ~50 user configuration? It seems that my user storage is much higher than the normal ZCS setup I've seen.

    Thanks.
    Another reason not to use RAID5 (or even RAID6) aside from performance is that the survivability rate is not as high as you might think. We've seen articles in the trades claiming four-disk RAID5 arrays survive a single disk failure as little as 80% of the time.

    Plus, once you start using big disks, the likelihood of an unrecoverable read error during the RAID rebuild (after you have replaced the failed disk) increases. We saw one article where the author had a six-disk RAID5 array with a hot spare, built from 1TB disks, and after he pulled a disk to simulate a failure, the array was never able to rebuild itself.
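
    To put a rough number on that (my own back-of-the-envelope, not from the article, and it assumes a spec-sheet unrecoverable-read-error rate of 1 per 1e14 bits): rebuilding one failed 1TB disk in a six-disk RAID5 means reading the five surviving disks end to end, about 4e13 bits, so:

    Code:
    # chance of hitting at least one URE while reading ~5 TB during a rebuild,
    # assuming a URE rate of 1 per 1e14 bits
    awk 'BEGIN { bits = 5 * 1e12 * 8;
                 p = 1 - (1 - 1e-14)^bits;
                 printf "P(rebuild hits a URE) ~ %.0f%%\n", p * 100 }'
    # prints roughly 33%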

    So, the only time we continue to use RAID6 is on enterprise-grade disk shelves with battery-backed write caches in the hardware RAID controllers. With disks so cheap now, RAID1/RAID10 is a better option in our view.

    With 60 users and a 15GB quota, you'll need to plan for a terabyte just for the mail store. Then there are the indexes, backups, etc., so I think you will be better off getting a SATA JBOD shelf with a separate RAID controller for this server right from the start.

    Dropping six fast 450GB 15K SAS drives into the box itself in a RAID10 array will enable you to keep much of the mail store and the indexes on fast storage. You can use the slower SATA disks in the shelf for /opt/zimbra/backup and for HSM.
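
    For the in-box array itself, the usable-space trade-off from six 450GB drives works out like this (marketing gigabytes; formatted capacity will be a bit lower):

    Code:
    # usable capacity from six 450GB drives
    echo "RAID10: $(( 6 * 450 / 2 )) GB usable"    # three striped mirror pairs -> 1350 GB
    echo "RAID5:  $(( (6 - 1) * 450 )) GB usable"  # one disk's worth of parity -> 2250 GB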

    Hope that helps,
    Mark

  8. #8
    fcash (Elite Member, BC, Canada)

    They don't support adding additional IDE or SATA drives to a server? After all, that's how it will appear to the system. Why add extra IDE or SATA ports to a system if you can't use them?

    That's one of the main reasons we don't buy name-brand servers anymore: they're too limiting in what they'll support, and they'll use any kind of modification as a reason to void the warranty.
    Freddie

  9. #9
    Klug (Moderator, Beaucaire, France)

    Quote Originally Posted by fcash:
    They don't support adding additional IDE or SATA drives to a server? After all, that's how it will appear to the system. Why add extra IDE or SATA ports to a system if you can't use them?
    You are allowed to add Dell stuff, not "other brands" stuff...

    Quote Originally Posted by fcash:
    That's one of the main reasons we don't buy name-brand servers anymore: they're too limiting in what they'll support, and they'll use any kind of modification as a reason to void the warranty.
    True.
    OTOH, my servers are in datacentres 800 km (500 miles) away from me and I know there'll be a guy with a replacement HD there within 4 hours in case of a crash (even 2 hours for some servers)...

  10. #10
    LMStone (Moderator, Portland, ME)

    Quote Originally Posted by fcash:
    They don't support adding additional IDE or SATA drives to a server? After all, that's how it will appear to the system. Why add extra IDE or SATA ports to a system if you can't use them?

    That's one of the main reasons we don't buy name-brand servers anymore: they're too limiting in what they'll support, and they'll use any kind of modification as a reason to void the warranty.
    Not to start a brand flame war, but we've found HP servers to be very configurable. Like Klug, we and our clients generally need to know that a hardware failure can be addressed by manufacturer support, without question, within the timeframe of the support contract. Plenty of HP server RAID controllers support both SAS and SATA in the same physical box.

    Hope that helps,
    Mark
