On Saturday, 14 July 2012 21:18:46 UTC+5:30, shubham srivastava wrote:
>
> All,
>
>
> We are getting the exceptions below on all our systems where memcached is 
> being used. This is ultimately causing the whole system to become 
> unresponsive.
>
>
>
> 2012-07-14 11:00:55,841 ERROR [MaintThread] com.danga.MemCached.SockIOPool:1435 - ++++ failed to close SockIO obj from deadPool
>
> 2012-07-14 11:00:55,841 ERROR [MaintThread] com.danga.MemCached.SockIOPool:1436 - ++++ socket or its streams already null in trueClose call
>
> java.io.IOException: ++++ socket or its streams already null in trueClose call
>         at com.danga.MemCached.SockIOPool$SockIO.trueClose(SockIOPool.java:1704) ~[MemCached-2.0.1.jar:na]
>         at com.danga.MemCached.SockIOPool.selfMaint(SockIOPool.java:1432) ~[MemCached-2.0.1.jar:na]
>         at com.danga.MemCached.SockIOPool$MaintThread.run(SockIOPool.java:1497) [MemCached-2.0.1.jar:na]
>
>
>
> We are using memcached version 1.4.5 with the danga client version 2.0.1. 
> Below are the connection properties of the memcached client:
>
>
> mmt.cache.initialConnections=100
>
> mmt.cache.minSpareConnections=100
>
> mmt.cache.maxSpareConnections=1000
>
> mmt.cache.maxIdleTime=100000
>
> mmt.cache.maxBusyTime=300000
>
> mmt.cache.maintThreadSleep=5000
>
> mmt.cache.socketTimeOut=15000
>
> mmt.cache.socketConnectTO=3000
>
> mmt.cache.failover=false
>
> mmt.cache.nagleAlg=false
>
> mmt.cache.aliveCheck=false
>
> mmt.cache.enableCompression=false
>
>
> This is causing real issues, as the system backed by memcached underpins a 
> production site. We have looked into the memcached stats and nothing looks 
> unreasonable, including "listen_disabled_num", which was 0. The memcached 
> server instance has 64 cores and 12 GB of RAM. We are using NEW_COMPAT_HASH 
> as the hashing algorithm and have 3 memcached servers across which the data 
> is distributed.
>
>
> Any help with this would be highly appreciated.
>
>
> Regards,
>
> Shubham
>
>
> Please let us know what to look into.
>
>
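For reference, here is a minimal sketch of how the mmt.cache.* properties above would typically be wired into the danga/whalin SockIOPool. The property-to-setter mapping, the class name, and the server addresses are my assumptions (the original post does not show the initialization code), but the setters themselves are the standard SockIOPool API:

import com.danga.MemCached.MemCachedClient;
import com.danga.MemCached.SockIOPool;

public class CacheClientSetup {

    public static void main(String[] args) {
        // Hypothetical server addresses; the post only says there are three memcached servers.
        String[] servers = { "cache1:11211", "cache2:11211", "cache3:11211" };

        SockIOPool pool = SockIOPool.getInstance();
        pool.setServers(servers);

        // Values copied from the mmt.cache.* properties in the quoted post.
        pool.setInitConn(100);            // mmt.cache.initialConnections
        pool.setMinConn(100);             // mmt.cache.minSpareConnections
        pool.setMaxConn(1000);            // mmt.cache.maxSpareConnections
        pool.setMaxIdle(100000);          // mmt.cache.maxIdleTime, in ms
        pool.setMaxBusyTime(300000);      // mmt.cache.maxBusyTime, in ms
        pool.setMaintSleep(5000);         // mmt.cache.maintThreadSleep, in ms
        pool.setSocketTO(15000);          // mmt.cache.socketTimeOut, in ms
        pool.setSocketConnectTO(3000);    // mmt.cache.socketConnectTO, in ms
        pool.setFailover(false);          // mmt.cache.failover
        pool.setNagle(false);             // mmt.cache.nagleAlg
        pool.setAliveCheck(false);        // mmt.cache.aliveCheck
        pool.setHashingAlg(SockIOPool.NEW_COMPAT_HASH);
        pool.initialize();

        MemCachedClient client = new MemCachedClient();
        client.setCompressEnable(false);  // mmt.cache.enableCompression
    }
}

Note that with maintThreadSleep=5000 the pool's maintenance thread wakes every 5 seconds; that is the MaintThread named in the stack trace above.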
