Hello,

I've recently noticed a sudden increase in our server's CPU usage for no apparent reason. The server had been stable for about a month, but over the last three days or so CPU usage has skyrocketed:

[attached screenshot: Captura de tela de 2013-08-26 19:05:15.jpg]

The culprit appears to be the Jetty process:

Code:
Cpu0  : 60.3%us, 39.4%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
Cpu1  :  1.3%us,  0.7%sy,  0.0%ni, 96.0%id,  2.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   7629492k total,  7393868k used,   235624k free,    69548k buffers
Swap:        0k total,        0k used,        0k free,   181348k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                      
12813 zimbra    20   0 3706m 2.4g 8912 S  101 33.3   4858:13 java
Code:
root@zcs:/opt/zimbra/log# ps auxww|grep 12813
zimbra   12813 34.0 33.3 3795068 2544460 ?     Sl   Aug16 4859:12 /opt/zimbra/java/bin/java -Dfile.encoding=UTF-8 -server -Djava.awt.headless=true -Dsun.net.inetaddr.ttl=60 -XX:+UseConcMarkSweepGC -XX:PermSize=128m -XX:MaxPermSize=350m -XX:SoftRefLRUPolicyMSPerMB=1 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-OmitStackTraceInFastThrow -Djava.net.preferIPv4Stack=true -Dorg.apache.jasper.compiler.disablejsr199=true -Xss256k -Xms1868m -Xmx1868m -Xmn467m -Djava.io.tmpdir=/opt/zimbra/mailboxd/work -Djava.library.path=/opt/zimbra/lib -Djava.endorsed.dirs=/opt/zimbra/mailboxd/common/endorsed -Dzimbra.config=/opt/zimbra/conf/localconfig.xml -Djetty.home=/opt/zimbra/mailboxd -DSTART=/opt/zimbra/mailboxd/etc/start.config -jar /opt/zimbra/mailboxd/start.jar OPTIONS=Server,jsp,jmx,resources,websocket,ext,jta,plus,rewrite,setuid /opt/zimbra/mailboxd/etc/jetty.properties /opt/zimbra/mailboxd/etc/jetty-setuid.xml /opt/zimbra/mailboxd/etc/jetty.xml
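One thing that stands out from that command line is that GC logging is already enabled (-verbose:gc and -XX:+PrintGCDetails), so the first thing I want to check is whether the collector is running back to back. A minimal check, assuming mailboxd's stdout (and with it the GC output) ends up in /opt/zimbra/log/zmmailboxd.out:

Code:
# count full collections and eyeball the most recent GC activity
grep -c "Full GC" /opt/zimbra/log/zmmailboxd.out
tail -n 50 /opt/zimbra/log/zmmailboxd.out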
That process seems to be constantly using close to 100% of one CPU core. A quick strace summary revealed this:

Code:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 38.29   50.149016        5113      9808      3467 futex
 29.10   38.111460       34901      1092           epoll_wait
 27.88   36.522288     1106736        33        25 restart_syscall
  4.62    6.056379      504698        12           accept
  0.06    0.076696           0    205231           gettimeofday
  0.02    0.021100           2     10627           clock_gettime
  0.01    0.012069         170        71           poll
  0.01    0.009934          24       407           write
  0.00    0.005047         841         6           fdatasync
  0.00    0.004217           9       479       244 read
  0.00    0.004001        4001         1           fsync
  0.00    0.001408          16        86        14 epoll_ctl
  0.00    0.000414          59         7           connect
  0.00    0.000264           9        31           getsockname
  0.00    0.000182           5        40           setsockopt
  0.00    0.000123           6        22           dup2
  0.00    0.000085           1       105        36 stat
  0.00    0.000080           2        45           rt_sigprocmask
  0.00    0.000000           0         8         2 open
  0.00    0.000000           0        28           close
  0.00    0.000000           0         8           fstat
  0.00    0.000000           0        61         3 lstat
  0.00    0.000000           0        13           lseek
  0.00    0.000000           0        23           mmap
  0.00    0.000000           0        47           mprotect
  0.00    0.000000           0         9           rt_sigreturn
  0.00    0.000000           0       112           ioctl
  0.00    0.000000           0        68           sched_yield
  0.00    0.000000           0        12           madvise
  0.00    0.000000           0         7           socket
  0.00    0.000000           0        81           sendto
  0.00    0.000000           0       131           recvfrom
  0.00    0.000000           0        12         5 shutdown
  0.00    0.000000           0         9           getsockopt
  0.00    0.000000           0        11           clone
  0.00    0.000000           0        48           fcntl
  0.00    0.000000           0         2           getdents
  0.00    0.000000           0         1           link
  0.00    0.000000           0         1           unlink
  0.00    0.000000           0        11           gettid
  0.00    0.000000           0        22           sched_getaffinity
  0.00    0.000000           0         1           openat
  0.00    0.000000           0        11           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00  130.974763                228840      3796 total
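Most of the time is going into futex and epoll_wait, which on its own mostly just says that threads are spinning or contending, so the next step I have in mind is mapping the busiest native threads back to Java stacks. A rough sketch, assuming jstack from the bundled JDK can attach to the running process (run as the zimbra user):

Code:
top -H -p 12813                        # per-thread CPU inside the JVM
printf '%x\n' <busy TID from top>      # convert the thread id to hex
/opt/zimbra/java/bin/jstack 12813 | grep -A 20 'nid=0x<hex tid>'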
We've also noticed a steady increase in memory usage over the past few weeks, despite no increase in traffic or number of users:

[attached screenshot: Captura de tela de 2013-08-26 19:12:28.jpg]
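To figure out what is actually accumulating on the heap, I'm also thinking of taking a couple of class histograms a few minutes apart and comparing them. A sketch, again assuming the bundled JDK's jmap can attach to the process (plain -histo, so it doesn't force a full GC):

Code:
/opt/zimbra/java/bin/jmap -histo 12813 | head -n 30 > /tmp/histo.1
sleep 600
/opt/zimbra/java/bin/jmap -histo 12813 | head -n 30 > /tmp/histo.2
diff /tmp/histo.1 /tmp/histo.2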

We're using Zimbra Community 8.0.4_GA_5739.NETWORK release (July 25) with the latest available patch applied.

So far we haven't felt any impact from the high CPU usage, but I've scheduled a service restart to avoid any possible trouble. Checking the Zimbra logs provided no obvious clues as to what might be wrong. Has anyone experienced this and/or have any suggestions?

Thanks!