TCP keep alives (set via setKeepAlive) are notoriously useless. The default
interval of 2 hours is generally far longer than any timeout in NAT
translation tables (generally ~5 min), and even if you decrease the keep
alive to a sane value, a lot of networks actually throw away TCP keep alive
packets. You see that a lot more in cell networks though. It's almost
always a good idea to have a software keep alive, although it seems to not
be implemented in this protocol. You can make a super simple CF with one
value and query it every minute a connection is idle or something, e.g.
"select * from DummyCF where id = 1"
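
A minimal sketch of that software keep alive with the 2.0 Java driver (the
contact point, keyspace name, and DummyCF schema here are hypothetical;
create a one-row table yourself):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SocketOptions;

    public class Heartbeat {
        public static void main(String[] args) {
            // TCP keep alive on the driver's sockets; the probe interval is
            // still whatever the OS uses (2 hours by default on Linux).
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")  // assumption: local node
                    .withSocketOptions(new SocketOptions().setKeepAlive(true))
                    .build();
            final Session session = cluster.connect("mykeyspace"); // hypothetical keyspace

            // Software keep alive: a trivial query once a minute so NAT
            // tables and firewalls keep seeing traffic on the connection.
            ScheduledExecutorService heartbeat =
                    Executors.newSingleThreadScheduledExecutor();
            heartbeat.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    try {
                        session.execute("select * from DummyCF where id = 1");
                    } catch (Exception e) {
                        // swallow; an uncaught exception would cancel the schedule
                    }
                }
            }, 1, 1, TimeUnit.MINUTES);
        }
    }

A smarter version would only fire when the connection has actually been
idle, but even this unconditional ping is cheap and keeps the translation
table entries fresh.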


On Fri, Apr 11, 2014 at 3:04 AM, Phil Luckhurst <phil.luckhu...@powerassure.com> wrote:

> We are also seeing this in our development environment. We have a 3-node
> Cassandra 2.0.5 cluster running on Ubuntu 12.04 and are connecting from a
> Tomcat-based application running on Windows using the 2.0.0 Cassandra Java
> Driver. We have setKeepAlive(true) when building the cluster in the
> application, and this does keep one connection open on the client side to
> each of the 3 Cassandra nodes, but we still see the build-up of 'old'
> ESTABLISHED connections on each of the Cassandra servers.
>
> We are also getting that same "Unexpected exception during request"
> exception appearing in the logs:
>
> ERROR [Native-Transport-Requests:358378] 2014-04-09 12:31:46,824
> ErrorMessage.java (line 222) Unexpected exception during request
> java.io.IOException: Connection reset by peer
>         at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>         at sun.nio.ch.SocketDispatcher.read(Unknown Source)
>         at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
>         at sun.nio.ch.IOUtil.read(Unknown Source)
>         at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
>         at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
>         at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
>         at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
>         at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>         at java.lang.Thread.run(Unknown Source)
>
> Initially we thought this was down to a firewall between our development
> machines and the Cassandra nodes, but that has now been configured not to
> 'kill' any connections on port 9042. We also have the Windows firewall on
> the client side turned off.
>
> We still think this is down to our environment, as the same application
> running in Tomcat hosted on an Ubuntu 12.04 server does not appear to be
> doing this, but up to now we can't track down the cause.
>
>
>
>
> --
> View this message in context:
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/binary-protocol-server-side-sockets-tp7593879p7593937.html
> Sent from the cassandra-u...@incubator.apache.org mailing list archive at
> Nabble.com.
>



-- 
*Chris Lohfink*
Engineer
415.663.6738  |  Skype: clohfink.blackbirdit

*Blackbird*

775.345.3485  |  www.blackbirdIT.com <http://www.blackbirdit.com/>

*"Formerly PalominoDB/DriveDev"*
