
Thread: load doubles after upgrade from 5.0.14 to 6.0.1

  1. #1
    gruad23 is offline Senior Member
    Join Date
    Dec 2008
    Location
    vienna/austria
    Posts
    59
    Rep Power
    6

    Default load doubles after upgrade from 5.0.14 to 6.0.1

    I just upgraded from 5.0.14-GA to 6.0.1-GA, and now my system seems less responsive, with roughly double the load and CPU consumption (see graphs attached).

    The load average is now around 4, which is quite high. The host system is Ubuntu 8.04 64-bit, installed as a guest in VMware.

    If I watch the output of top, "java" and "zmlogger" spike to 50% CPU or even more every few seconds.
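
    In case it helps, this is roughly how I am watching it (plain uptime/top/ps, nothing Zimbra-specific, so adjust to taste):
    Code:
    uptime                            # 1/5/15-minute load averages
    top -b -n 1 | head -n 20          # one batch-mode snapshot, busiest processes first
    ps aux --sort=-%cpu | head -n 10  # same idea via ps, sorted by CPU usage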

    I haven't rebooted yet, since that's something I usually want to avoid.

    The upgrade itself went flawlessly; I only had to install "sysstat" as a recommended prerequisite that wasn't installed before.

    While the Zimbra logs look "normal", my syslog is flooded with what looks like debug or profiling messages (see below), and I wonder if this could be the problem.

    Any hints on where to start troubleshooting are very welcome! Thanks.

    Attached you find graphs of my system before (low), during (peak), and after (high) the upgrade.

    And here is a single second's worth of my syslog. It is really flooded with these messages, but I did not knowingly turn on any debugging during the upgrade.

    l_reads, Innodb_buffer_pool_wait_free, Innodb_buffer_pool_write_requests, Innodb_data_fsyncs, Innodb_data_pending_fsyncs, Innodb_data_pending_reads, Innodb_data_pending_writes, Innodb_data_read, Innodb_data_reads, Innodb_data_writes, Innodb_data_written, Innodb_dblwr_pages_written, Innodb_dblwr_writes, Innodb_log_waits, Innodb_log_write_requests, Innodb_log_writes, Innodb_os_log_fsyncs, Innodb_os_log_pending_fsyncs, Innodb_os_log_pending_writes, Innodb_os_log_written, Innodb_page_size, Innodb_pages_created, Innodb_pages_read, Innodb_pages_written, Innodb_row_lock_current_wai:::428F4C3A-AEA0-11DE-8C49-5F7331B00D35:::
    Oct 1 17:37:00 zimbra zimbramon[25801]: 25801:info: :::428F4C3A-AEA0-11DE-8C49-5F7331B00D35:::ts, Innodb_row_lock_time, Innodb_row_lock_time_avg, Innodb_row_lock_time_max, Innodb_row_lock_waits, Innodb_rows_deleted, Innodb_rows_inserted, Innodb_rows_read, Innodb_rows_updated, Key_blocks_not_flushed, Key_blocks_unused, Key_blocks_used, Key_read_requests, Key_reads, Key_write_requests, Key_writes, Last_query_cost, Max_used_connections, Not_flushed_delayed_rows, Open_files, Open_streams, Open_tables, Opened_tables, Prepared_stmt_count, Qcache_free_blocks, Qcache_free_memory, Qcache_hits, Qcache_inserts, Qcache_lowmem_prunes, Qcache_not_cached, Qcache_queries_in_cache, Qcache_total_blocks, Queries, Questions, Rpl_status, Select_full_join, Select_full_range_join, Select_range, Select_range_check, Select_scan, Slave_open_temp_tables, Slave_retried_transactions, Slave_running, Slow_launch:::428F51E4-AEA0-11DE-8C49-5F7331B00D35:::
    Oct 1 17:37:00 zimbra zimbramon[25801]: 25801:info: :::428F51E4-AEA0-11DE-8C49-5F7331B00D35:::_threads, Slow_queries, Sort_merge_passes, Sort_range, Sort_rows, Sort_scan, Ssl_accept_renegotiates, Ssl_accepts, Ssl_callback_cache_hits, Ssl_cipher, Ssl_cipher_list, Ssl_client_connects, Ssl_connect_renegotiates, Ssl_ctx_verify_depth, Ssl_ctx_verify_mode, Ssl_default_timeout, Ssl_finished_accepts, Ssl_finished_connects, Ssl_session_cache_hits, Ssl_session_cache_misses, Ssl_session_cache_mode, Ssl_session_cache_overflows, Ssl_session_cache_size, Ssl_session_cache_timeouts, Ssl_sessions_reused, Ssl_used_session_cache_entries, Ssl_verify_depth, Ssl_verify_mode, Ssl_version, Table_locks_immediate, Table_locks_waited, Tc_log_max_pages_used, Tc_log_page_size, Tc_log_page_waits, Threads_cached, Threads_connected, Threads_created, Threads_running, Uptime:: 10/01/2009 17:37:00,1,0,0,0,19718424,4:::428F55E0-AEA0-11DE-8C49-5F7331B00D35:::
    Oct 1 17:37:00 zimbra zimbramon[25801]: :::428F55E0-AEA0-11DE-8C49-5F7331B00D35:::3067789,4,0,0,0,0,0,0,0,0,0,0,26370 ,0,0,0,2,0,0,4126,0,0,0,0,0,2,0,0,0,0,0,0,0,0,1102 0,3,0,0,0,0,0,0,0,0,0,0,0,0,4644,0,0,0,0,0,26852,0 ,19024,80571,0,0,0,6,0,0,0,0,0,0,0,107,0,0,0,0,0,0 ,0,0,0,0,338,0,0,0,6,0,0,0,0,0,0,0,0,0,0,0,16609,4 5,0,0,0,0,0,0,OFF,345,45,5,359,0,0,0,1,62777,0,0,0 ,65,2748088,2990826,57832,7329,57685,344,0,0,571,7 2986,11800,3,15592,2497,487,14784,4,57,9741587,923 9,0,971995,7916,0,0,0,182718464,10098,15798,550877 184,15592,765,0,89546,4809,5331,0,0,39690752,16384 ,789,11019,15592,0,1,1,1,1,84349,13357,4392550,201 07,0,6694,6,1028,16,629,0,0.000000,4,0,27,0,67,75, 0,0,0,0,0,0,0,0,0,189942,189942,NULL,0,0,3738,0,24 1,0,0,OFF,0,34,0,336,1617,41,0,0,0,,,0,0,0,0,0,0,0 ,0,0,NONE,0,0,0,0,0,0,0,,60636,0,0,0,0,0,4,4,1,666 4
    Oct 1 17:37:01 zimbra zimbramon[25781]: 25781:info: zmstat vm.csv: timestamp, r, b, swpd, free, buff, cache, si, so, bi, bo, in, cs, us, sy, id, wa, MemTotal, MemFree, Buffers, Cached, SwapCached, Active, Inactive, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Slab, SReclaimable, SUnreclaim, PageTables, NFS_Unstable, Bounce, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp, Hugepagesize, loadavg:: 10/01/2009 17:37:01, 1, 0, 527172, 146684, 85600, 298092, 1, 0, 1, 169, 30, 601, 39, 27, 34, 0, 2038680, 146772, 85600, 298092, 159588, 1431308, 389376, 4883720, 4356548, 740, 0, 1404544, 27780, 36312, 20812, 15500, 13524, 0, 0, 5903060, 3150972, 34359738367, 12092, 34359725987, 0, 0, 0, 0, 2048, 1.90
    Oct 1 17:37:01 zimbra zimbramon[25793]: 25793:info: zmstat fd.csv: timestamp, fd_count:: 10/01/2009 17:37:01, 3200
    Oct 1 17:37:01 zimbra zimbramon[25789]: 25789:info: zmstat io.csv: timestamp, sda:tps, sda:kB_read/s, sda:kB_wrtn/s, sda:kB_read, sda:kB_wrtn, sr0:tps, sr0:kB_read/s, sr0:kB_wrtn/s, sr0:kB_read, sr0:kB_wrtn:: 10/01/2009 17:37:01, 23.17, 1.33, 168.80, 40, 5064, 0.00, 0.00, 0.00, 0, 0
    Oct 1 17:37:01 zimbra zimbramon[25783]: 25783:info: zmstat io-x.csv: timestamp, sda:rrqm/s, sda:wrqm/s, sda:r/s, sda:w/s, sda:rkB/s, sda:wkB/s, sda:avgrq-sz, sda:avgqu-sz, sda:await, sda:svctm, sda:%util, sr0:rrqm/s, sr0:wrqm/s, sr0:r/s, sr0:w/s, sr0:rkB/s, sr0:wkB/s, sr0:avgrq-sz, sr0:avgqu-sz, sr0:await, sr0:svctm, sr0:%util:: 10/01/2009 17:37:01, 0.23, 14.23, 0.10, 23.07, 1.33, 168.80, 14.69, 0.03, 1.11, 1.04, 2.40, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00
    Oct 1 17:37:01 zimbra zimbramon[25779]: 25779:info: zmstat cpu.csv: timestamp, cpu:user, cpu:nice, cpu:sys, cpu:idle, cpu:iowait, cpu:irq, cpu:softirq, cpu0:user, cpu0:nice, cpu0:sys, cpu0:idle, cpu0:iowait, cpu0:irq, cpu0:softirq:: 10/01/2009 17:37:01, 39.2, 0.0, 26.7, 34.0, 0.0, 0.0, 0.1, 39.2, 0.0, 26.7, 34.0, 0.0, 0.0, 0.1
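
    To get a feeling for how bad the flooding is, a plain grep over syslog gives a rough count of these zimbramon/zmstat lines (path as on Ubuntu; adjust if your syslog lives elsewhere):
    Code:
    grep -c 'zimbramon\[' /var/log/syslog   # number of zimbramon lines logged so far
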
    Attached Images

  2. #2
    kamus is offline Junior Member
    Join Date
    Jan 2009
    Posts
    8
    Rep Power
    6

    Exclamation

    I did the same upgrade from 5.0.14 to 6.0.1 and have this problem too. I can't see anything unusual in the log files, and the installation finished without problems. I tried turning zmlogger off, but CPU usage is still high.
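
    For what it is worth, I stopped it with the standard control scripts as the zimbra user (assuming the stock zmloggerctl wrapper is what you use too):
    Code:
    su - zimbra
    zmloggerctl stop     # stop the logger/statistics service
    zmcontrol status     # confirm which services are still running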

    Any advice or clue on how to fix this issue would be really appreciated.

    Regards

  3. #3
    kamus is offline Junior Member
    Join Date
    Jan 2009
    Posts
    8
    Rep Power
    6

    Default

    OK, I think I got it. I checked the Zimbra log files and saw that nginx.log had an unusual size. When I looked into it, at least 50% of the entries were "cannot connect to memcached server 127.0.1.1:11211 (rc:-1)", so I googled for a while and found a workaround that works for me.
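
    In case someone wants to check the same thing, this is roughly what I looked at (log path as on a default /opt/zimbra install; adjust if yours differs):
    Code:
    ls -lh /opt/zimbra/log/nginx.log                                  # check how big the log has grown
    grep -c 'cannot connect to memcached' /opt/zimbra/log/nginx.log  # count the error lines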

    Look at this graph:



    I ran, as the zimbra user:
    Code:
    zmproxyctl stop     # stop the zimbra proxy (nginx)
    zmproxyctl start    # start it again so it can reconnect to memcached
    That's all.

    I hope this helps you reduce CPU usage drastically.
    Attached Images

  4. #4
    gruad23 is offline Senior Member
    Join Date
    Dec 2008
    Location
    vienna/austria
    Posts
    59
    Rep Power
    6

    Default not solved

    Thanks for your reply, but I don't use zmproxy because I have a single-host installation, so neither zmproxy nor memcached is even installed on my machine.
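
    For the record, this is how I double-checked that (zmcontrol is the stock service control tool; the zmprov query is my assumption about the easiest way to list installed services):
    Code:
    su - zimbra
    zmcontrol status                                # services that are actually running
    zmprov gs `zmhostname` zimbraServiceInstalled   # services installed on this server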

    I disabled zmlogger and zmstats, which makes the strange log messages vanish, but the load stays high. That increasingly worries me, because I have no clue why, and I don't know how the system will scale if the number of users increases significantly over the next weeks.
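
    For completeness, this is how I stopped them (zmloggerctl/zmstatctl are the stock control scripts; the zmprov line to keep the logger disabled across restarts is an assumption on my part, so double-check it before using it):
    Code:
    su - zimbra
    zmloggerctl stop
    zmstatctl stop
    # keep the logger service from coming back on the next restart (assumption):
    zmprov ms `zmhostname` -zimbraServiceEnabled logger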

    I even did a complete system reboot now, which didn't change anything, but it does remind me of being a Windows sysadmin: you don't know what's going on, so you reboot.

    Before Zimbra I used cyrus-imapd and Horde, which can't compete in comfort, and as a user I love reading my email in Zimbra. But it worries me that my system load is high, nobody can help, and I don't know what's going on.

    Anyway: it seems that various people have this problem, so maybe it will be addressed in a future release.

    Thanks.
