
Thread: IMAP connections suddenly stop working; netstat shows pages of connections

  1. #1 by Mikedrhost (New Member; joined Mar 2013; posts: 3)


    Hopefully someone here knows the answer to this, and I'm posting in the correct section.

    For some reason IMAP connections suddenly stop working. This just started in the last month. Sometimes it happens within 24 hours, sometimes the server stays up for as long as 4 days, but the symptom is always the same: users are suddenly unable to access IMAP from their iPhones. A zmcontrol restart resolves the issue.

    At first I thought it was connection pool exhaustion, but I have raised that limit 3 times now; I am currently at 800:
    zimbraImapNumThreads: 800

    It was 200 previously. This new number seems high to me, as there are only 8 IMAP users total.

    Digging further, I found the following error:

    <--
    [ImapSSLServer-106076] [ip=201.23.160.71;] ProtocolHandler - Exception occurred while handling connection
    java.net.SocketException: Connection timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
    at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:798)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1138)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1165)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1149)
    at com.zimbra.cs.tcpserver.ProtocolHandler.startHandshake(ProtocolHandler.java:184)
    at com.zimbra.cs.tcpserver.ProtocolHandler.run(ProtocolHandler.java:134)
    at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:662)
    2013-02-21 18:23:03,387 INFO [ImapSSLServer-106076] [] ProtocolHandler - Handler exiting normally
    2013-02-21 18:23:06,754 INFO [Timer-Zimbra] [] session - WaitSet sweeper: 1 active WaitSets (1 accounts) - 1 sets with blocked callbacks
    --->

    But I'm not sure what it means exactly...

    netstat shows a lot of connections on port 993, seemingly from the same address (if I'm reading this right):

    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:31815 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:30723 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.172:18368 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.85:31123 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:31825 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:28142 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.172:21592 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:27893 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:32514 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:30713 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:31095 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:30612 CLOSE_WAIT
    tcp 0 0 ::1:52063 ::1:35892 TIME_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:29733 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.172:16429 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:33014 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.82:27951 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.168:51357 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:34567 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:31819 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.73:56997 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:34419 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.82:34326 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:32142 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:34430 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:29783 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:31652 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.168:47634 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:31552 CLOSE_WAIT
    tcp 184 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.66:59080 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:29536 ESTABLISHED
    tcp 184 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.73:55132 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:33595 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.82:27963 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:29029 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:28299 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:28504 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.85:31112 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.85:31088 ESTABLISHED
    tcp 187 0 ::ffff:66.240.174.67:993 ::ffff:74.198.9.107:62564 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:32963 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.168:49454 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:32845 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:34376 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:29334 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:32342 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:28420 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.168:48278 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:31944 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:30960 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.73:55185 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:30653 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.85:31120 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.99:25649 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:32543 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.17.185.1:44385 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.73:57823 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:28529 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.148:51946 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.168:48204 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.172:21653 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:34432 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:27369 CLOSE_WAIT
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.172:16298 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.160.168:52686 ESTABLISHED
    tcp 0 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:32707 ESTABLISHED
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:27473 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:29728 CLOSE_WAIT
    tcp 185 0 ::ffff:66.240.174.67:993 ::ffff:201.23.162.79:28331 CLOSE_WAIT


    How can I verify whether pool exhaustion is actually taking place, or whether it's something else?
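A quick way to get a read on that question: tally the port-993 sockets by TCP state and by remote client. A minimal sketch (imap993_summary is an illustrative name, not a Zimbra tool; it assumes the classic net-tools netstat layout shown above):

```shell
# Tally IMAPS (port 993) sockets by TCP state and by remote client IP.
# Hypothetical helper; feed it net-tools netstat output.
imap993_summary() {
    awk '$4 ~ /:993$/ {
            states[$6]++                  # ESTABLISHED, CLOSE_WAIT, ...
            n = split($5, a, ":")         # a[n] = remote port, a[n-1] = remote IP
            ips[a[n-1]]++
         }
         END {
            for (s in states) printf "%6d %s\n", states[s], s
            for (i in ips)    printf "%6d %s\n", ips[i], i
         }'
}
# Usage: netstat -ant | imap993_summary
```

If the CLOSE_WAIT count keeps climbing toward zimbraImapNumThreads while ESTABLISHED stays small, thread exhaustion from half-closed sockets becomes the likelier explanation.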

    I am currently on 6.0.10_GA_2692.F7 but would be willing to upgrade to the latest 6.x branch if it would help...

    Thanks!!

  2. #2 by Mikedrhost

    Quote Originally Posted by Mikedrhost View Post
    (quoting post #1 in full; snipped to avoid duplication)
    No one knows the answer to this? It can't be that unique, can it? Really looking for some help here.

  3. #3 by tecnalb (Special Member; Lexington, KY, USA; joined Sep 2007; posts: 110)

    I saw your thread and wanted to post something. I have been experiencing an issue with MobileSync clients and dropped connections. After working with Zimbra support and Verizon support, I was left to dig in on my own.

    My issue is that mobile clients like iOS and Android start to time out ("Cannot connect to mail server") after about 12-24 hours. Digging into it, I found that iptables connection tracking is a possible culprit, and that simply restarting iptables mitigates the problem for some amount of time.

    When you experience the issue, and if you are running iptables, cycle the iptables service. If your issue clears, then you might dig deeper and see if you are hitting a connection limit.

    I run Red Hat 5:

    cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count

    That will show you the connection count. My system is small (300 users), but my connection count was 3,500 and climbing.

    Digging further, cat /proc/net/ip_conntrack will show you the connection table. In my case I was seeing a lot of this:

    tcp 6 128 ESTABLISHED src=xx.xx.xx.xx dst=166.205.51.66 sport=443 dport=52185 packets=3 bytes=1098 [UNREPLIED] src=166.205.51.66 dst=xx.xx.xx.xx sport=52185 dport=443 packets=0 bytes=0 mark=0 secmark=0 use=1

    Because these connections are ESTABLISHED, they have a default timeout of 5 days. The UNREPLIED means the server hasn't heard back from the device. Mobile clients are hammering my server and not reusing old connections (whether that is a Zimbra issue or something else, I don't know), so these build up. For my iPhone and iPad alone, I had 25 connections in this state. At some point the server, not Zimbra, just denies the connection: I literally could not even reach the server via a web browser from my device. As soon as I restarted iptables, it worked flawlessly.
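To see at a glance which clients are responsible, you can tally the [UNREPLIED] entries by originating IP. A rough sketch (conntrack_unreplied is just an illustrative name; the table lives at /proc/net/ip_conntrack on RHEL 5 and /proc/net/nf_conntrack on newer kernels):

```shell
# Count [UNREPLIED] conntrack entries per originating IP, to see which
# clients are accumulating half-dead flows. Hypothetical helper name.
conntrack_unreplied() {
    grep UNREPLIED | awk '{
        for (i = 1; i <= NF; i++)
            if ($i ~ /^src=/) {           # first src= is the original direction
                sub(/^src=/, "", $i)
                srcs[$i]++
                break
            }
    }
    END { for (s in srcs) printf "%6d %s\n", srcs[s], s }'
}
# Usage: cat /proc/net/ip_conntrack | conntrack_unreplied
```

Each output line is a count followed by the client IP, so the worst offenders stand out immediately.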

    To see the parameters available:

    ls /proc/sys/net/ipv4/netfilter/

    I changed the timeout of ESTABLISHED connections. You'll need to research yours based on your kernel version. So, to resolve it, I changed the TTL on ESTABLISHED connections to 15 minutes. Now my server is humming along and only has about 250 connections; old connections time out in 15 minutes instead of 5 days. I'm not sure what the long-term effect will be, but I don't have clients calling about "server connect" issues.

    echo "900" > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established


    I'm still digging, because 3,500 unused connections should not lock my mobile clients out... but until I find the real cause, it's at least working. You should dig into it and see if your situation is similar.
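One caveat on the echo above: it takes effect immediately but does not survive a reboot. If you decide to keep the shorter timeout, the usual way to persist it on RHEL 5 is via /etc/sysctl.conf; the key name mirrors the /proc path (on newer kernels the equivalent key is net.netfilter.nf_conntrack_tcp_timeout_established):

```shell
# Persist the 15-minute ESTABLISHED conntrack timeout across reboots
# (RHEL 5 key name; adjust for your kernel as noted above).
echo "net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 900" >> /etc/sysctl.conf
sysctl -p    # re-read /etc/sysctl.conf and apply immediately
```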

  4. #4 by liverpoolfcfan (Outstanding Member; Dublin, Ireland; joined Oct 2009; posts: 712)

    Quote Originally Posted by tecnalb View Post
    So, to resolve, I changed the TTL on ESTABLISHED connections to 15 minutes.
    You should never set mobile connection timeouts as low as 15 minutes, as you will kill valid connections. ActiveSync can hold a connection open for up to 60 minutes (or 59 minutes 50 seconds, to be precise).

  5. #5 by tecnalb

    Quote Originally Posted by liverpoolfcfan View Post
    You should never set mobile connection timeouts down as low as 15 mins, as you will kill valid connections. ActiveSync can run connections for up to 60 minutes (or 59mins 50seconds to be precise)
    At this time, neither Zimbra Support nor the mobile carriers have been able to give me a solution to the problem. And it was precisely the mobile connections that were causing the issue: I was seeing up to 25 connections per account, per device, ONLY on mobile devices, on both Verizon and AT&T/Cingular. Since the mobile devices are not reusing connections but creating new ones anyway, I don't see an issue. In fact, not one person has had a complaint or a timeout since I set the 15-minute limit.

    So I see it as a win/win thus far; however, I may increase the timeout to 60 minutes and see what effect that has. But this does work.

    One additional note: I appreciate your input, and did use it to search... and I found that Microsoft's article 2469722 on ActiveSync connection issues also suggests setting the timeout to 30 minutes in some cases, and as low as 15, for similar issues with mobile devices. They suggest it for troubleshooting, not as a permanent fix. It's dated Feb 24, 2012. And as I mentioned, I have not been able to get an answer from any of the assistance I have requested to date.
    Last edited by tecnalb; 03-22-2013 at 12:33 PM.
