
Thread: My Zimbra server I/O wait is very high

  1. #1
    old-cow (Intermediate Member, China)

    My Zimbra server I/O wait is very high

    Hi all.

    My Zimbra server has very high I/O wait.

    Zimbra runs on an HP ML100 G5 server.

    Code:
    [zimbra@sub ~]$ cat /proc/cpuinfo
    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 15
    model           : 6
    model name      : Intel(R) Pentium(R) D CPU 2.80GHz
    stepping        : 4
    cpu MHz         : 2793.930
    cache size      : 2048 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 2
    fdiv_bug        : no
    hlt_bug         : no
    f00f_bug        : no
    coma_bug        : no
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 6
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm pni monitor ds_cpl est cid xtpr
    bogomips        : 5592.73
    
    processor       : 1
    vendor_id       : GenuineIntel
    cpu family      : 15
    model           : 6
    model name      : Intel(R) Pentium(R) D CPU 2.80GHz
    stepping        : 4
    cpu MHz         : 2793.930
    cache size      : 2048 KB
    physical id     : 0
    siblings        : 2
    core id         : 1
    cpu cores       : 2
    fdiv_bug        : no
    hlt_bug         : no
    f00f_bug        : no
    coma_bug        : no
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 6
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm pni monitor ds_cpl est cid xtpr
    bogomips        : 5585.22
    
    [zimbra@sub ~]$
    
    [zimbra@sub ~]$ free
                 total       used       free     shared    buffers     cached
    Mem:       2073512    2047760      25752          0       2208     415636
    -/+ buffers/cache:    1629916     443596
    Swap:      2008040     156536    1851504
    [zimbra@sub ~]$
    
    [zimbra@sub ~]$ df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2             9.4G  1.1G  7.9G  12% /
    /dev/sda1              99M   13M   82M  14% /boot
    none                 1013M     0 1013M   0% /dev/shm
    /dev/sda6             3.8G  328M  3.3G   9% /home
    /dev/sda9              84G   36G   45G  45% /mailstore
    /dev/sda3              19G  4.8G   14G  27% /opt
    /dev/sda5             3.8G  409M  3.2G  12% /var
    [zimbra@sub ~]$
    
    
    
    
    top - 11:32:31 up 101 days, 15:45,  1 user,  load average: 10.77, 7.38, 8.28
    Tasks: 176 total,   1 running, 175 sleeping,   0 stopped,   0 zombie
    Cpu0  : 32.2% us,  7.3% sy,  0.0% ni,  0.0% id, 60.1% wa,  0.3% hi,  0.0% si
    Cpu1  : 21.9% us,  4.7% sy,  0.0% ni, 30.2% id, 43.2% wa,  0.0% hi,  0.0% si
    Mem:   2073512k total,  1931216k used,   142296k free,     4480k buffers
    Swap:  2008040k total,   154288k used,  1853752k free,   296160k cached
    
      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
     9265 zimbra    16   0 1088m 623m 6836 S 11.6 30.8  25:22.09 java
     8673 zimbra    16   0  121m  13m 3652 S 10.9  0.7   2:29.58 mysqld
     9284 zimbra    15   0  7300 4352 1500 S  6.0  0.2   0:18.24 nginx
     8793 zimbra    16   0  9328 3104 2204 S  5.6  0.1   0:13.28 zmlogger
    29678 zimbra    17   0  400m  13m 7540 S  4.6  0.7   0:00.14 java
     4548 zimbra    17   0 67076  56m 6160 D  4.3  2.8   0:24.67 amavisd
     5259 zimbra    15   0 66652  55m 6172 D  3.6  2.8   0:37.36 amavisd
     8688 zimbra    15   0 10912 3524 1884 S  3.0  0.2   0:07.71 perl
       44 root      15   0     0    0    0 S  0.7  0.0  52:58.73 kswapd0
     8817 zimbra    16   0  630m 186m 4064 S  0.7  9.2   0:15.14 mysqld
     9411 zimbra    16   0  188m 164m 1048 S  0.7  8.1   0:24.36 clamd
     8693 zimbra    16   0  9128 4120 1844 S  0.3  0.2   0:04.67 zmmtaconfig
     9835 zimbra    17   0  7044 3560 1640 S  0.3  0.2   0:04.42 zmstat-proc
     9845 zimbra    16   0  6872 3404 1644 D  0.3  0.2   0:00.61 zmstat-mysql
        1 root      16   0  1752  484  456 S  0.0  0.0   0:03.16 init
        2 root      RT   0     0    0    0 S  0.0  0.0   1:16.18 migration/0
    My Zimbra server has about 250 active users.

    Does that mean my HP ML100 G5 server's performance is not good enough?

    Thanks !
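    For reference, a quick way to confirm that the wait time really comes from the disks (rather than the CPU) is to watch per-device statistics with iostat from the sysstat package, and to check swap activity with vmstat. This is only a diagnostic sketch; sysstat may need to be installed first.
    Code:
    # Extended per-device stats every 5 seconds: %util near 100% and
    # high await values mean the disk itself is saturated.
    iostat -x 5

    # vmstat shows the same from the memory side: a non-zero "b" column
    # plus a high "wa" column means processes are blocked on I/O, and
    # steady "si"/"so" columns mean the box is also swapping.
    vmstat 5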
    Code:
    [zimbra@sub ~]$ zmcontrol -v
    
    
    Release 5.0.2_GA_1975.RHEL4_20080130212006 RHEL4 FOSS edition
    
    [zimbra@sub ~]$

  2. #2
    old-cow (Intermediate Member, China)

    My HP ML110 server has two 160 GB SATA hard drives configured as RAID 1.
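    If that mirror is Linux software RAID (md), it is also worth confirming that the array is healthy, since a degraded or resyncing mirror will cause very high I/O wait on its own. A minimal check, where the md0 device name is only an assumption:
    Code:
    # Both members present shows as [UU]; check that no resync is running.
    cat /proc/mdstat

    # Per-array detail (run as root); /dev/md0 is a placeholder name.
    mdadm --detail /dev/md0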

  3. #3
    sn00p (Active Member, Novosibirsk, Russian Federation)

    It seems that you are low on memory. First you could tune your sysctl.conf (or whatever suits your Linux distribution) to optimize memory usage, or install more RAM.

    I could share my important sysctl.conf settings, including the TCP/IP stack and so on, if you are interested.
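    For example, the VM knobs below are the ones most commonly adjusted for this kind of workload; the values are illustrative guesses to be tested on your own distribution, not Zimbra-recommended defaults.
    Code:
    # /etc/sysctl.conf -- illustrative VM tuning only
    # How aggressively the kernel swaps application pages out (0-100, default 60).
    vm.swappiness = 20
    # Start background writeback of dirty pages earlier ...
    vm.dirty_background_ratio = 5
    # ... and force synchronous writeback later, to smooth out write bursts.
    vm.dirty_ratio = 20

    # Apply the settings without rebooting:
    # sysctl -p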
    With best regards,

  4. #4
    old-cow (Intermediate Member, China)

    Thanks !

    I am planning to add more memory to the server.

  5. #5
    uxbod (Moderator, UK)

    Definitely add more RAM, as it is cheap these days. If I/O issues persist after that, you could look at adding two additional drives (RAID 1) and moving the data store to them, so that the application is split from the message store. A good read would be: Performance Tuning Guidelines for Large Deployments - Zimbra :: Wiki
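    A rough outline of such a move, assuming the new mirror appears as /dev/sdb1 (the device name is an assumption) and that the message store is the existing /mailstore partition; verify both on your own box before copying anything.
    Code:
    # Stop Zimbra so nothing writes to the store during the copy.
    su - zimbra -c "zmcontrol stop"

    # Prepare and mount the new RAID1 volume (/dev/sdb1 is assumed).
    mkfs.ext3 /dev/sdb1
    mkdir /mnt/newstore
    mount /dev/sdb1 /mnt/newstore

    # Copy the store, preserving ownership, permissions and timestamps.
    rsync -a /mailstore/ /mnt/newstore/

    # Swap the mounts so /mailstore now lives on the new disks
    # (update /etc/fstab to match so it survives a reboot).
    umount /mnt/newstore
    umount /mailstore
    mount /dev/sdb1 /mailstore

    su - zimbra -c "zmcontrol start"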

  6. #6
    old-cow (Intermediate Member, China)

    Thanks !

    I think moving the data store to a new RAID device is a good approach.


  7. #7
    LMStone (Moderator, Portland, ME)

    And once you add more RAM, you can set up a RAM disk for Amavis's temp folder, which will take a big load away from your hard disks.
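    A minimal sketch of that, assuming the Amavis temp directory is /opt/zimbra/data/amavisd/tmp (check the actual path on your install) and that 256 MB of the new RAM can be spared for it:
    Code:
    # Stop Zimbra so amavisd is not using the directory.
    su - zimbra -c "zmcontrol stop"

    # Mount a RAM-backed tmpfs over the Amavis temp directory;
    # the path and the 256 MB size are assumptions -- adjust for your install.
    mount -t tmpfs -o size=256m tmpfs /opt/zimbra/data/amavisd/tmp
    chown zimbra:zimbra /opt/zimbra/data/amavisd/tmp
    chmod 750 /opt/zimbra/data/amavisd/tmp

    # Make it permanent with a matching /etc/fstab line, for example:
    # tmpfs  /opt/zimbra/data/amavisd/tmp  tmpfs  size=256m  0 0

    su - zimbra -c "zmcontrol start"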

    Hope that helps,
    Mark
