
Thread: Which directories in /opt/zimbra benefit from separate partitions?

  1. #1
     fultonj, Senior Member (Join Date: Feb 2008; Location: Easton PA; Posts: 63; Rep Power: 7)

    Which directories in /opt/zimbra benefit from separate partitions?

    Other admins have claimed it's nice to have multiple partitions within /opt/zimbra. Looking around, I seem to have found five. Do you agree with the five I've come up with, and do you know of benefits to having even more, or perhaps fewer?

    Fiber: (RAID 10 LUN)
    * /opt/zimbra/db
    * /opt/zimbra/index
    * /opt/zimbra/redolog

    SATA:
    * /opt/zimbra/store (RAID 10 LUN)
    * /opt/zimbra/backup (RAID 5 LUN)

    Note that this is only for a store server (my MTA, LDAP and IMAP proxy are separate servers).

    Also, I'll be using LVM so I can adjust later if I don't predict the size distribution correctly, but with that in mind, is there a best-practice percentage I could apply to each, especially for the fiber? A Zimbra engineer once suggested a ballpark of 75% SATA and 25% fiber.
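
    For anyone curious, a minimal sketch of the percentage-based carving I have in mind with LVM (the volume group name zimbravg is hypothetical, and the percentages are placeholders):

    # lvcreate -l 25%VG -n store zimbravg
    # lvcreate -l 4%VG -n db zimbravg
    # mkfs.ext3 /dev/zimbravg/store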

    Thanks,
    John

  2. #2
    fultonj is offline Senior Member
    Join Date
    Feb 2008
    Location
    Easton PA
    Posts
    63
    Rep Power
    7

    The proposed partitions in my last post confused the store with HSM. I'd rather have the store on the fast disk for current mail and use HSM to move older mail to SATA. I've also added /opt/zimbra/log. Are there any other directories that benefit from being separate partitions? Any predictions on what percentage of the disk is appropriate for each? (A configuration sketch for the HSM piece follows the lists below.)

    Fiber: (RAID 10 LUN)
    * /opt/zimbra/db
    * /opt/zimbra/index
    * /opt/zimbra/redolog
    * /opt/zimbra/store
    * /opt/zimbra/log

    SATA:
    * /opt/zimbra/hsm (RAID 10 LUN)
    * /opt/zimbra/backup (RAID 5 LUN)
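
    In case it helps anyone searching later, a rough sketch of how I understand the HSM volume would be wired up (the volume name hsm1 and the 14-day age are examples; check zmvolume and zmprov for your ZCS version):

    # su - zimbra
    $ zmvolume -a -t secondaryMessage -n hsm1 -p /opt/zimbra/hsm
    $ zmprov mcf zimbraHsmAge 14d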

  3. #3
     uxbod, Moderator (Join Date: Nov 2006; Location: UK; Posts: 8,017; Rep Power: 24)

    Are your SATA disks also held within your fibre enclosure? For example, an HP EVA 4400 can use a mixture of these technologies when presenting LUNs. It would also help if you let us know whether the fibre is 1/2/4 Gb, whether you are using multiple HBAs, and whether you are using any I/O load-balancing software.

  4. #4
     fultonj, Senior Member (Join Date: Feb 2008; Location: Easton PA; Posts: 63; Rep Power: 7)

    The fibre and SATA disks are in separate drawers within an EMC CLARiiON CX3-20. I have a 2 Gb switch and a 4 Gb switch, and each server has dual HBAs. I'll be using I/O load-balancing software: PowerPath or MPIO.

    As an aside on the I/O balancing: I've used PowerPath in the past, but it had serious stability problems on RHEL 4 when it couldn't queue I/O and drove the load to 500, requiring a hard bounce and an e2fsck. As a temporary fix I have since mounted one device directly (instead of /dev/emcpowerX), forgoing failover and load balancing in favor of stability. EMC says the new PowerPath won't have this bug, and I'm also looking into changing to multipathd, though I hear it can confuse LVM (by incrementing device numbers).
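
    If I do switch to multipathd, my understanding is that the usual workaround is an LVM filter that only scans the multipath devices, so each underlying path isn't seen twice; a sketch of the /etc/lvm/lvm.conf line I've seen suggested (the exact patterns depend on your device naming):

    filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]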

  5. #5
     fultonj, Senior Member (Join Date: Feb 2008; Location: Easton PA; Posts: 63; Rep Power: 7)

    I'll attempt to answer my own question, and I welcome comments, especially if this post could mislead anyone.

    I've surveyed a single system where I'm piloting 50 users. The approximate space used and the percentages I computed are below (a one-liner to reproduce such a survey follows the list). I expect these users (a mix of faculty, staff, and students) to be representative of what my production system will look like:

    * /opt/zimbra 2G 8%
    * /opt/zimbra/store 7G 27%
    * /opt/zimbra/db 1G 4%
    * /opt/zimbra/backup 16G 57%
    * /opt/zimbra/index 500M 2%
    * /opt/zimbra/redolog 15M 1%
    * /opt/zimbra/log 160M 1%
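
    For anyone repeating this survey, numbers like these can be gathered with something like:

    # du -sh /opt/zimbra/{db,store,index,redolog,log,backup}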

    I applied these percentages back to my available space (3TB) for
    production. It wasn't a direct application of the percentages for the
    following reasons:

    * I didn't scale /opt/zimbra itself since the directories I expect to
    grow will be the separate partitions listed above.

    * I rounded up to the nearest 1%.

    * The pilot system uses a single disk, so I adjusted for SATA vs.
    fiber and for the different RAID levels where applicable.

    * The above didn't take HSM into account, so my store is much
    smaller, but there's plenty of room on the HSM SATA volume. I plan to
    run HSM on mail as young as 14, or perhaps even 7, days old.

    * My backup is a little short, but it's actually made of three 500G
    disks LVM'd together. These disks are cheap, so I can buy more and
    grow the LVM in the future, presumably before HSM fills it up. I also
    have the option of backing up each week rather than every two weeks.

    With all this in mind I've come up with the following:

    20G /opt/zimbra
    30G /opt/zimbra/log
    30G /opt/zimbra/redolog
    60G /opt/zimbra/index
    120G /opt/zimbra/db
    380G /opt/zimbra/store
    980G /opt/zimbra/hsm
    1600G /opt/zimbra/backup
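
    As a sanity check, the db, index, redolog, and log numbers are just the pilot percentages scaled to my 3TB (about 3000G); store/hsm and backup were then hand-adjusted as described above:

    # for p in 4 2 1 1; do echo $(( 3000 * p / 100 ))G; done
    120G
    60G
    30G
    30G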

    Please let me know if you see problems with the above.

    Thanks,
    John

  6. #6
     fultonj, Senior Member (Join Date: Feb 2008; Location: Easton PA; Posts: 63; Rep Power: 7)

    I've made my partitions. [crickets chirping in background] I'm still open to comments.

    # df -h | grep zimbra
    /dev/sde1 19G 445M 18G 3% /opt/zimbra
    /dev/sde3 350G 467M 342G 1% /opt/zimbra/store
    /dev/sde2 111G 461M 108G 1% /opt/zimbra/db
    /dev/sdh1 905G 473M 886G 1% /opt/zimbra/hsm
    /dev/sde7 56G 453M 54G 1% /opt/zimbra/index
    /dev/sde6 28G 445M 27G 2% /opt/zimbra/log
    /dev/sde5 28G 445M 27G 2% /opt/zimbra/redolog
    /dev/sdi1 1.5T 470M 1.5T 1% /opt/zimbra/backup
    #
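
    For completeness, the matching /etc/fstab entries would look roughly like this (the ext3 type and options are assumptions; only two of the eight lines are shown). Note that /opt/zimbra must be listed before its subdirectory mount points so that it mounts first:

    /dev/sde1  /opt/zimbra        ext3  defaults  1 2
    /dev/sde3  /opt/zimbra/store  ext3  defaults  1 2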

  7. #7
     Mistoffeles, Senior Member (Join Date: Oct 2007; Posts: 70; Rep Power: 7)

    Quote Originally Posted by fultonj:
    I've made my partitions. [crickets chirping in background] I'm still open to comments.

    # df -h | grep zimbra
    /dev/sde1 19G 445M 18G 3% /opt/zimbra
    /dev/sde3 350G 467M 342G 1% /opt/zimbra/store
    /dev/sde2 111G 461M 108G 1% /opt/zimbra/db
    /dev/sdh1 905G 473M 886G 1% /opt/zimbra/hsm
    /dev/sde7 56G 453M 54G 1% /opt/zimbra/index
    /dev/sde6 28G 445M 27G 2% /opt/zimbra/log
    /dev/sde5 28G 445M 27G 2% /opt/zimbra/redolog
    /dev/sdi1 1.5T 470M 1.5T 1% /opt/zimbra/backup
    #
    How has this worked for you over time?

    I imagine /opt/zimbra has changed very little, /opt/zimbra/backup has become quite large, and everything else has fluctuated but steadily increased on average over time.

    I am building a system now with RAID 10 and LVM. I will probably use relatively small allocations compared to yours, but leave unused space on the drives for expansion through LVM. I don't have any real need for anything fancy, though I may someday add some external high-speed storage if the need presents itself.
    Last edited by Mistoffeles; 07-02-2009 at 03:32 PM.
    - Misty

  8. #8
     fultonj, Senior Member (Join Date: Feb 2008; Location: Easton PA; Posts: 63; Rep Power: 7)

    This has worked very well for me. The sizes today (nine months later) are below; /opt/zimbra itself is larger because Zimbra was not yet installed when I posted the earlier numbers. I am also using HSM rather aggressively: it migrates data older than 14 days every morning after backups. The device names also show that I am using the device-mapper-multipath package on RHEL 5.3, which has also been working very well (a minimal config sketch follows the listing).

    /dev/mapper/mpath0p1 19G 4.8G 14G 27% /opt/zimbra
    /dev/mapper/mpath0p2 111G 12G 97G 12% /opt/zimbra/db
    /dev/mapper/mpath0p3 350G 45G 298G 14% /opt/zimbra/store
    /dev/mapper/mpath0p5 28G 859M 27G 4% /opt/zimbra/redolog
    /dev/mapper/mpath0p6 28G 12G 16G 42% /opt/zimbra/log
    /dev/mapper/mpath0p7 56G 20G 35G 36% /opt/zimbra/index
    /dev/mapper/mpath1p1 905G 282G 605G 32% /opt/zimbra/hsm
    /dev/mapper/mpath2p1 1.5T 519G 933G 36% /opt/zimbra/backup
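
    For anyone wanting to reproduce the mpathN naming, it comes from the user_friendly_names setting; a minimal /etc/multipath.conf sketch (any blacklisting of local disks is omitted here and would depend on your hardware):

    defaults {
        user_friendly_names yes
    }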

  9. #9
     Mistoffeles, Senior Member (Join Date: Oct 2007; Posts: 70; Rep Power: 7)

    Hmm... I could have a problem, then, as I have only allocated 3 GB for /opt/zimbra. If the program itself requires about 5 GB, as in your install, I could be in trouble.

    At least I used LVM, so I can just allocate some more space and grow the filesystem, as in the sketch below.
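
    A minimal sketch of the grow operation (assuming ext3 and hypothetical volume names; resize2fs can grow a mounted ext3 filesystem online on a reasonably recent kernel):

    # lvextend -L +5G /dev/vg0/opt_zimbra
    # resize2fs /dev/vg0/opt_zimbra
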
    - Misty
