Re: Latest driver and netty issues...

2016-05-25 Thread Tony Anecito
Ok, I found the additional handler jar and added it, but at runtime I am still
getting the error about the Timer class not being found, and I looked in the
jar and did not see that class. I was using netty-3.9.0.Final with the Cassandra
driver cassandra-driver-core-3.0.2.jar and netty-handler-4.0.33.jar.
So what is the right combination that avoids this missing Timer class exception?
Thanks.
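
For reference: io.netty.util.Timer lives in the netty-common module of netty
4.x, and a netty 3.x jar can never provide it (netty 3 classes live under
org.jboss.netty). A minimal sketch of a setup that avoids the mismatch,
assuming Maven: depend only on the driver, let it pull in a consistent set of
netty 4.0.x modules transitively, and remove any hand-added netty 3 jars:

<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>3.0.2</version>
</dependency>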
 

On Wednesday, May 25, 2016 8:26 PM, Tony Anecito wrote:
 

 Hi All,
I downloaded the latest Cassandra driver, but when I use it I get a runtime
error about class io.netty.util.Timer (netty-3.9.0.Final) not being found.
If I instead use the latest netty-all-4.0.46.Final.jar, at runtime I get an
exception about a missing java.security.cert.X509Certificate class.
So what to do?
Thanks!


   

Re: Cassandra

2016-05-25 Thread Alain Rastoul

On 25/05/2016 17:56, bastien dine wrote:

> Hi,
>
> I'm running a 3-node Cassandra 2.1.x cluster. Each node has 8 vCPU and 30
> GB of RAM.
> Replication factor = 3 for my keyspace.
>
> ...
>
> Is there a problem with the Java Driver? The load balancing is not

Hi Bastien,

A replication factor of 3 on a 3-node cluster does not balance the
load: since you ask for 3 copies of the data (RF=3) on a 3-node cluster,
each node will hold a full copy of the data and you are loading all nodes.
Maybe you should try RF = 2, or add nodes to your cluster?
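
For example, assuming SimpleStrategy and a keyspace named my_ks (after
lowering RF, run nodetool cleanup on each node to drop the no-longer-owned
replicas):

ALTER KEYSPACE my_ks
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};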


"working" ? How can I list connections on a node ?

On 3.x (I think also 2.x) you can trace requests at the query level
with the enableTracing() method,

something like this (uncomment the line with .enableTracing()):

session.execute(boundedInsertEventStatement.bind(aggregateId,
        aggregateType, eventType, payload)
    .setConsistencyLevel(ConsistencyLevel.ONE)
    // .enableTracing()
);
See the docs for the other classes and the tracing/consistency options,
and have a look at nodetool settraceprobability if you cannot change the
code.
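
For example, to trace roughly one query in a thousand (the argument is a
probability between 0 and 1; keep it small under heavy load):

nodetool settraceprobability 0.001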


The queries and query plans appear in the system_traces.sessions and
system_traces.events tables. The output can be very verbose for query plans
(the events table), so maybe you should truncate the sessions and events
tables before running your load (on 3.x these tables are truncated on startup).
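
For example, from cqlsh (session_id ties the two tables together):

SELECT session_id, started_at, duration, request
FROM system_traces.sessions LIMIT 20;

TRUNCATE system_traces.sessions;
TRUNCATE system_traces.events;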





HTH,
--
best,
Alain


Latest driver and netty issues...

2016-05-25 Thread Tony Anecito
Hi All,
I downloaded the latest Cassandra driver, but when I use it I get a runtime
error about class io.netty.util.Timer (netty-3.9.0.Final) not being found.
If I instead use the latest netty-all-4.0.46.Final.jar, at runtime I get an
exception about a missing java.security.cert.X509Certificate class.
So what to do?
Thanks!


Re: Internal Handling of Map Updates

2016-05-25 Thread kurt Greaves
Literally just encountered this exact same thing. I couldn't find anything
in the official docs related to this but there is at least this blog that
explains it:
http://www.jsravn.com/2015/05/13/cassandra-tombstones-collections.html
and this entry in ScyllaDB's documentation:
http://www.scylladb.com/kb/sstable-interpretation/
I can confirm what Tyler mentioned: updating a single element does not cause
a tombstone.
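
A minimal illustration of the difference, on a hypothetical table (the
second UPDATE is the tombstone-free form):

CREATE TABLE ks.t (id int PRIMARY KEY, m map<text, text>);

-- replaces the whole map: a range tombstone is written before the new value
UPDATE ks.t SET m = {'k': 'v'} WHERE id = 1;

-- updates a single key in place: no tombstone
UPDATE ks.t SET m['k'] = 'v' WHERE id = 1;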

On 25 May 2016 at 15:37, Tyler Hobbs  wrote:

> If you replace an entire collection, whether it's a map, set, or list, a
> range tombstone will be inserted followed by the new collection.  If you
> only update a single element, no tombstones are generated.
>
> On Wed, May 25, 2016 at 9:48 AM, Matthias Niehoff <
> matthias.nieh...@codecentric.de> wrote:
>
>> Hi,
>>
>> we have a table with a Map Field. We do not delete anything in this
>> table, but to updates on the values including the Map Field (most of the
>> time a new value for an existing key, Rarely adding new keys). We now
>> encounter a huge amount of thumbstones for this Table.
>>
>> We used sstable2json to take a look into the sstables:
>>
>>
>> {"key": "Betty_StoreCatalogLines:7",
>>
>>  "cells": [["276-1-6MPQ0RI-276110031802001001:","",1463820040628001],
>>
>>["276-1-6MPQ0RI-276110031802001001:last_modified","2016-05-21 
>> 08:40Z",1463820040628001],
>>
>>
>> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463040069753999,"t",1463040069],
>>
>>
>> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463120708590002,"t",1463120708],
>>
>>
>> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463145700735007,"t",1463145700],
>>
>>
>> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463157430862000,"t",1463157430],
>>
>>
>> [„276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_“,“276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!“,1463164595291002,"t",1463164595],
>>
>> . . .
>>
>>   
>> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463820040628000,"t",1463820040],
>>
>>
>> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:62657474795f73746f72655f636174616c6f675f6c696e6573","0154d265c6b0",1463820040628001],
>>
>>
>> [„276-1-6MPQ0RI-276110031802001001:payload“,"{\"payload\":{\"Article 
>> Id\":\"276110031802001001\",\"Row Id\":\"1-6MPQ0RI\",\"Article 
>> #\":\"31802001001\",\"Quote Item Id\":\"1-6MPWPVC\",\"Country 
>> Code\":\"276\"}}",1463820040628001]
>>
>>
>>
>> Looking at the SStables it seem like every update of a value in a Map
>> breaks down to a delete and insert in the corresponding SSTable (see all
>> the thumbstone flags „t“ in the extract of sstable2json above).
>>
>> We are using Cassandra 2.2.5.
>>
>> Can you confirm this behavior?
>>
>> Thanks!
>> --
>> Matthias Niehoff | IT-Consultant | Agile Software Factory  | Consulting
>> codecentric AG | Zeppelinstr 2 | 76185 Karlsruhe | Deutschland
>> tel: +49 (0) 721.9595-681 | fax: +49 (0) 721.9595-666 | mobil: +49 (0)
>> 172.1702676
>> www.codecentric.de | blog.codecentric.de | www.meettheexperts.de |
>> www.more4fi.de
>>
>> Sitz der Gesellschaft: Solingen | HRB 25917| Amtsgericht Wuppertal
>> Vorstand: Michael Hochgürtel . Mirko Novakovic . Rainer Vehns
>> Aufsichtsrat: Patric Fedlmeier (Vorsitzender) . Klaus Jäger . Jürgen
>> Schütz
>>
>> Diese E-Mail einschließlich evtl. beigefügter Dateien enthält
>> vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht
>> der richtige Adressat sind oder diese E-Mail irrtümlich erhalten haben,
>> informieren Sie bitte sofort den Absender und löschen Sie diese E-Mail und
>> evtl. beigefügter Dateien umgehend. Das unerlaubte Kopieren, Nutzen oder
>> Öffnen evtl. beigefügter Dateien sowie die unbefugte Weitergabe dieser
>> E-Mail ist nicht gestattet
>>
>
>
>
> --
> Tyler Hobbs
> DataStax 
>



-- 
Kurt Greaves
k...@instaclustr.com
www.instaclustr.com


Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread Paulo Motta
If increasing or disabling streaming_socket_timeout_in_ms on the source
node does not fix it, you may want to have a look at your TCP keepalive
settings on the source and destination nodes, as intermediate
routers/firewalls may be killing the connections due to inactivity. See
this for more information:
https://docs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html

This will ultimately be fixed by CASSANDRA-11841, which adds keep-alive to
the streaming protocol.
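
For reference, the kernel settings that page discusses look like this on
Linux (values are illustrative; the idea is to probe well before the
firewall's idle timeout):

sudo sysctl -w net.ipv4.tcp_keepalive_time=60
sudo sysctl -w net.ipv4.tcp_keepalive_probes=3
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=10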

2016-05-25 18:09 GMT-03:00 George Sigletos :

> Thanks a lot for your help. I will try that tomorrow. The first time that
> I tried to rebuild, streaming_socket_timeout_in_ms was 0 and still failed.
> Below is the directly previous error on the source node:
>
> ERROR [STREAM-IN-/172.31.22.104] 2016-05-24 22:32:20,437
> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
> Streaming error occurred
> java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> ~[na:1.7.0_79]
> at sun.nio.ch.SocketDispatcher.read(Unknown Source) ~[na:1.7.0_79]
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
> ~[na:1.7.0_79]
> at sun.nio.ch.IOUtil.read(Unknown Source) ~[na:1.7.0_79]
> at sun.nio.ch.SocketChannelImpl.read(Unknown Source) ~[na:1.7.0_79]
> at
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:51)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:250)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
>
> On Wed, May 25, 2016 at 10:28 PM, Paulo Motta 
> wrote:
>
>> > Workaround is to set to a larger streaming_socket_timeout_in_ms **on
>> the source node**., the new default will be 8640ms (1 day).
>>
>> 2016-05-25 17:23 GMT-03:00 Paulo Motta :
>>
>>> Was there any other ERROR preceding this on this node (in particular the
>>> last few lines of [STREAM-IN-/172.31.22.104])? If it's a
>>> SocketTimeoutException, then what is happening is that the default
>>> streaming socket timeout of 1 hour is not sufficient to stream a single
>>> file and the stream session is failed. Workaround is to set to a larger
>>> streaming_socket_timeout_in_ms, the new default will be 8640ms (1
>>> day).
>>>
>>> We are addressing this on
>>> https://issues.apache.org/jira/browse/CASSANDRA-11839.
>>>
>>> 2016-05-25 16:42 GMT-03:00 George Sigletos :
>>>
 Hello again,

 Here is the error message from the source

 INFO  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,275
 StreamResultFuture.java:180 - [Stream
 #2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /172.31.22.104 is
 complete
 WARN  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,276
 StreamResultFuture.java:207 - [Stream
 #2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
 ERROR [STREAM-OUT-/172.31.22.104] 2016-05-25 00:44:57,353
 StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
 Streaming error occurred
 java.lang.AssertionError: Memory was freed
 at
 org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]

 On Wed, May 25, 2016 at 8:49 PM, Paulo Motta 

Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread George Sigletos
Thanks a lot for your help. I will try that tomorrow. The first time that I
tried to rebuild, streaming_socket_timeout_in_ms was 0 and still failed.
Below is the immediately preceding error on the source node:

ERROR [STREAM-IN-/172.31.22.104] 2016-05-24 22:32:20,437
StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
Streaming error occurred
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_79]
at sun.nio.ch.SocketDispatcher.read(Unknown Source) ~[na:1.7.0_79]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
~[na:1.7.0_79]
at sun.nio.ch.IOUtil.read(Unknown Source) ~[na:1.7.0_79]
at sun.nio.ch.SocketChannelImpl.read(Unknown Source) ~[na:1.7.0_79]
at
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:51)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:250)
~[apache-cassandra-2.1.13.jar:2.1.13]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]

On Wed, May 25, 2016 at 10:28 PM, Paulo Motta 
wrote:

> > Workaround is to set to a larger streaming_socket_timeout_in_ms **on
> the source node**., the new default will be 8640ms (1 day).
>
> 2016-05-25 17:23 GMT-03:00 Paulo Motta :
>
>> Was there any other ERROR preceding this on this node (in particular the
>> last few lines of [STREAM-IN-/172.31.22.104])? If it's a
>> SocketTimeoutException, then what is happening is that the default
>> streaming socket timeout of 1 hour is not sufficient to stream a single
>> file and the stream session is failed. Workaround is to set to a larger
>> streaming_socket_timeout_in_ms, the new default will be 8640ms (1
>> day).
>>
>> We are addressing this on
>> https://issues.apache.org/jira/browse/CASSANDRA-11839.
>>
>> 2016-05-25 16:42 GMT-03:00 George Sigletos :
>>
>>> Hello again,
>>>
>>> Here is the error message from the source
>>>
>>> INFO  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,275
>>> StreamResultFuture.java:180 - [Stream
>>> #2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /172.31.22.104 is
>>> complete
>>> WARN  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,276
>>> StreamResultFuture.java:207 - [Stream
>>> #2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
>>> ERROR [STREAM-OUT-/172.31.22.104] 2016-05-25 00:44:57,353
>>> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>>> Streaming error occurred
>>> java.lang.AssertionError: Memory was freed
>>> at
>>> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
>>>
>>> On Wed, May 25, 2016 at 8:49 PM, Paulo Motta 
>>> wrote:
>>>
 This is the log of the destination/rebuilding node; you need to check
 the error message on the stream source node (192.168.1.140).


 2016-05-25 15:22 GMT-03:00 George Sigletos :

> Hello,
>
> Here is additional stack trace from system.log:
>
> ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
> StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
> Remote peer 192.168.1.140 failed stream session.
> ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
> StreamSession.java:505 - [Stream 

Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread Paulo Motta
> Workaround is to set to a larger streaming_socket_timeout_in_ms **on the
source node**., the new default will be 8640ms (1 day).

2016-05-25 17:23 GMT-03:00 Paulo Motta :

> Was there any other ERROR preceding this on this node (in particular the
> last few lines of [STREAM-IN-/172.31.22.104])? If it's a
> SocketTimeoutException, then what is happening is that the default
> streaming socket timeout of 1 hour is not sufficient to stream a single
> file and the stream session is failed. Workaround is to set to a larger
> streaming_socket_timeout_in_ms, the new default will be 8640ms (1
> day).
>
> We are addressing this on
> https://issues.apache.org/jira/browse/CASSANDRA-11839.
>
> 2016-05-25 16:42 GMT-03:00 George Sigletos :
>
>> Hello again,
>>
>> Here is the error message from the source
>>
>> INFO  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,275
>> StreamResultFuture.java:180 - [Stream
>> #2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /172.31.22.104 is
>> complete
>> WARN  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,276
>> StreamResultFuture.java:207 - [Stream
>> #2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
>> ERROR [STREAM-OUT-/172.31.22.104] 2016-05-25 00:44:57,353
>> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>> Streaming error occurred
>> java.lang.AssertionError: Memory was freed
>> at
>> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
>>
>> On Wed, May 25, 2016 at 8:49 PM, Paulo Motta 
>> wrote:
>>
>>> This is the log of the destination/rebuilding node, you need to check
>>> what is the error message on the stream source node (192.168.1.140).
>>>
>>>
>>> 2016-05-25 15:22 GMT-03:00 George Sigletos :
>>>
 Hello,

 Here is additional stack trace from system.log:

 ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
 StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
 Remote peer 192.168.1.140 failed stream session.
 ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
 StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
 Streaming error occurred
 java.io.IOException: Connection timed out
 at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
 ~[na:1.7.0_79]
 at sun.nio.ch.SocketDispatcher.write(Unknown Source)
 ~[na:1.7.0_79]
 at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
 ~[na:1.7.0_79]
 at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
 at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
 ~[na:1.7.0_79]
 at
 org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
 ~[apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
 [apache-cassandra-2.1.13.jar:2.1.13]
 at
 org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:323)
 [apache-cassandra-2.1.13.jar:2.1.13]
 at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
 INFO  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,625
 

Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread Paulo Motta
Was there any other ERROR preceding this on this node (in particular the
last few lines of [STREAM-IN-/172.31.22.104])? If it's a
SocketTimeoutException, then what is happening is that the default
streaming socket timeout of 1 hour is not sufficient to stream a single
file and the stream session fails. A workaround is to set a larger
streaming_socket_timeout_in_ms; the new default will be 86400000 ms (1 day).

We are addressing this on
https://issues.apache.org/jira/browse/CASSANDRA-11839.
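
In cassandra.yaml terms, on the stream source node, that is a single line
(0 disables the timeout entirely):

# 24 hours instead of the old 1-hour default
streaming_socket_timeout_in_ms: 86400000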

2016-05-25 16:42 GMT-03:00 George Sigletos :

> Hello again,
>
> Here is the error message from the source
>
> INFO  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,275
> StreamResultFuture.java:180 - [Stream
> #2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /172.31.22.104 is
> complete
> WARN  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,276
> StreamResultFuture.java:207 - [Stream
> #2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
> ERROR [STREAM-OUT-/172.31.22.104] 2016-05-25 00:44:57,353
> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
> Streaming error occurred
> java.lang.AssertionError: Memory was freed
> at
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
>
> On Wed, May 25, 2016 at 8:49 PM, Paulo Motta 
> wrote:
>
>> This is the log of the destination/rebuilding node, you need to check
>> what is the error message on the stream source node (192.168.1.140).
>>
>>
>> 2016-05-25 15:22 GMT-03:00 George Sigletos :
>>
>>> Hello,
>>>
>>> Here is additional stack trace from system.log:
>>>
>>> ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
>>> StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>>> Remote peer 192.168.1.140 failed stream session.
>>> ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
>>> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>>> Streaming error occurred
>>> java.io.IOException: Connection timed out
>>> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>>> ~[na:1.7.0_79]
>>> at sun.nio.ch.SocketDispatcher.write(Unknown Source)
>>> ~[na:1.7.0_79]
>>> at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
>>> ~[na:1.7.0_79]
>>> at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
>>> at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
>>> ~[na:1.7.0_79]
>>> at
>>> org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>>> [apache-cassandra-2.1.13.jar:2.1.13]
>>> at
>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:323)
>>> [apache-cassandra-2.1.13.jar:2.1.13]
>>> at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
>>> INFO  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,625
>>> StreamResultFuture.java:180 - [Stream
>>> #2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /192.168.1.140 is
>>> complete
>>> WARN  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,627
>>> StreamResultFuture.java:207 - [Stream
>>> #2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
>>> ERROR [RMI TCP 

Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread George Sigletos
Hello again,

Here is the error message from the source

INFO  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,275
StreamResultFuture.java:180 - [Stream
#2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /172.31.22.104 is
complete
WARN  [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,276
StreamResultFuture.java:207 - [Stream
#2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
ERROR [STREAM-OUT-/172.31.22.104] 2016-05-25 00:44:57,353
StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
Streaming error occurred
java.lang.AssertionError: Memory was freed
at
org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97)
~[apache-cassandra-2.1.13.jar:2.1.13]
at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
~[apache-cassandra-2.1.13.jar:2.1.13]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]

On Wed, May 25, 2016 at 8:49 PM, Paulo Motta 
wrote:

> This is the log of the destination/rebuilding node, you need to check what
> is the error message on the stream source node (192.168.1.140).
>
>
> 2016-05-25 15:22 GMT-03:00 George Sigletos :
>
>> Hello,
>>
>> Here is additional stack trace from system.log:
>>
>> ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
>> StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>> Remote peer 192.168.1.140 failed stream session.
>> ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
>> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>> Streaming error occurred
>> java.io.IOException: Connection timed out
>> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>> ~[na:1.7.0_79]
>> at sun.nio.ch.SocketDispatcher.write(Unknown Source)
>> ~[na:1.7.0_79]
>> at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
>> ~[na:1.7.0_79]
>> at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
>> at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
>> ~[na:1.7.0_79]
>> at
>> org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>> [apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:323)
>> [apache-cassandra-2.1.13.jar:2.1.13]
>> at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
>> INFO  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,625
>> StreamResultFuture.java:180 - [Stream
>> #2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /192.168.1.140 is
>> complete
>> WARN  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,627
>> StreamResultFuture.java:207 - [Stream
>> #2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
>> ERROR [RMI TCP Connection(24)-127.0.0.1] 2016-05-24 22:44:58,628
>> StorageService.java:1075 - Error while rebuilding node
>> org.apache.cassandra.streaming.StreamException: Stream failed
>> at
>> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>> at
>> com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
>> ~[guava-16.0.jar:na]
>> at
>> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>> ~[guava-16.0.jar:na]
>> at
>> com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
>> ~[guava-16.0.jar:na]
>> 

Re: Increasing replication factor and repair doesn't seem to work

2016-05-25 Thread Luke Jolly
After thinking about it more, I have no idea how that worked at all. I must
have not cleared out the working directory or something.
Regardless, I did something weird with my initial joining of the cluster
and then wasn't using repair -full. Thank y'all very much for the info.

On Wed, May 25, 2016 at 3:11 PM Luke Jolly  wrote:

> So I figured out the main cause of the problem.  The seed node was
> itself.  That's what got it in a weird state.  The second part was that I
> didn't know the default repair is incremental as I was accidently looking
> at the wrong version documentation.  After running a repair -full, the 3
> other nodes are synced correctly it seems as they have identical loads.
> Strangely, now the problem 10.128.0.20 node has 10 GB of load (the others
> have 6 GB).  Since I now know I started it off in a very weird state, I'm
> going to just decommission it and add it back in from scratch.  When I
> added it, all working folders were cleared.
>
> I feel Cassandra should through an error if the seed node is set to itself
> and fail to bootstrap / join?
>
>
> On Wed, May 25, 2016 at 2:37 AM Mike Yeap  wrote:
>
>> Hi Luke, I've encountered similar problem before, could you please advise
>> on following?
>>
>> 1) when you add 10.128.0.20, what are the seeds defined in cassandra.yaml?
>>
>> 2) when you add 10.128.0.20, were the data and cache directories in
>> 10.128.0.20 empty?
>>
>>- /var/lib/cassandra/data
>>- /var/lib/cassandra/saved_caches
>>
>> 3) if you do a compact in 10.128.0.3, what is the size shown in "Load"
>> column in "nodetool status "?
>>
>> 4) when you do the full repair, did you use "nodetool repair" or
>> "nodetool repair -full"? I'm asking this because Incremental Repair is the
>> default for Cassandra 2.2 and later.
>>
>>
>> Regards,
>> Mike Yeap
>>
>> On Wed, May 25, 2016 at 8:01 AM, Bryan Cheng 
>> wrote:
>>
>>> Hi Luke,
>>>
>>> I've never found nodetool status' load to be useful beyond a general
>>> indicator.
>>>
>>> You should expect some small skew, as this will depend on your current
>>> compaction status, tombstones, etc. IIRC repair will not provide
>>> consistency of intermediate states nor will it remove tombstones, it only
>>> guarantees consistency in the final state. This means, in the case of
>>> dropped hints or mutations, you will see differences in intermediate
>>> states, and therefore storage footrpint, even in fully repaired nodes. This
>>> includes intermediate UPDATE operations as well.
>>>
>>> Your one node with sub 1GB sticks out like a sore thumb, though. Where
>>> did you originate the nodetool repair from? Remember that repair will only
>>> ensure consistency for ranges held by the node you're running it on. While
>>> I am not sure if missing ranges are included in this, if you ran nodetool
>>> repair only on a machine with partial ownership, you will need to complete
>>> repairs across the ring before data will return to full consistency.
>>>
>>> I would query some older data using consistency = ONE on the affected
>>> machine to determine if you are actually missing data.  There are a few
>>> outstanding bugs in the 2.1.x  and older release families that may result
>>> in tombstone creation even without deletes, for example CASSANDRA-10547,
>>> which impacts updates on collections in pre-2.1.13 Cassandra.
>>>
>>> You can also try examining the output of nodetool ring, which will give
>>> you a breakdown of tokens and their associations within your cluster.
>>>
>>> --Bryan
>>>
>>> On Tue, May 24, 2016 at 3:49 PM, kurt Greaves 
>>> wrote:
>>>
 Not necessarily, considering RF is 2, so both nodes should have all
 partitions. Luke, are you sure the repair is succeeding? You don't have
 other keyspaces/duplicate data/extra data in your cassandra data directory?
 Also, you could try querying on the node with less data to confirm if
 it has the same dataset.

 On 24 May 2016 at 22:03, Bhuvan Rawal  wrote:

> For the other DC, it can be acceptable because partition reside on one
> node, so say  if you have a large partition, it may skew things a bit.
> On May 25, 2016 2:41 AM, "Luke Jolly"  wrote:
>
>> So I guess the problem may have been with the initial addition of the
>> 10.128.0.20 node because when I added it in it never synced data I
>> guess?  It was at around 50 MB when it first came up and transitioned to
>> "UN". After it was in I did the 1->2 replication change and tried repair
>> but it didn't fix it.  From what I can tell all the data on it is stuff
>> that has been written since it came up.  We never delete data ever so we
>> should have zero tombstones.
>>
>> If I am not mistaken, only two of my nodes actually have all the
>> data, 10.128.0.3 and 10.142.0.14 since they agree on the data amount.
>> 10.142.0.13 is 

Re: Increasing replication factor and repair doesn't seem to work

2016-05-25 Thread Luke Jolly
So I figured out the main cause of the problem. The seed node was set to
itself. That's what got it in a weird state. The second part was that I
didn't know the default repair is incremental, as I was accidentally looking
at the documentation for the wrong version. After running a repair -full,
the 3 other
nodes are synced correctly it seems as they have identical loads.
Strangely, now the problem 10.128.0.20 node has 10 GB of load (the others
have 6 GB).  Since I now know I started it off in a very weird state, I'm
going to just decommission it and add it back in from scratch.  When I
added it, all working folders were cleared.

I feel Cassandra should throw an error and fail to bootstrap/join if the
seed node is set to itself?
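
For what it's worth, a node that lists itself in its own seed list will skip
bootstrap, which matches what you saw. The usual shape of the relevant
cassandra.yaml section on a joining node, with only the existing nodes as
seeds (addresses illustrative):

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.128.0.3,10.142.0.14"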

On Wed, May 25, 2016 at 2:37 AM Mike Yeap  wrote:

> Hi Luke, I've encountered similar problem before, could you please advise
> on following?
>
> 1) when you add 10.128.0.20, what are the seeds defined in cassandra.yaml?
>
> 2) when you add 10.128.0.20, were the data and cache directories in
> 10.128.0.20 empty?
>
>- /var/lib/cassandra/data
>- /var/lib/cassandra/saved_caches
>
> 3) if you do a compact in 10.128.0.3, what is the size shown in "Load"
> column in "nodetool status "?
>
> 4) when you do the full repair, did you use "nodetool repair" or "nodetool
> repair -full"? I'm asking this because Incremental Repair is the default
> for Cassandra 2.2 and later.
>
>
> Regards,
> Mike Yeap
>
> On Wed, May 25, 2016 at 8:01 AM, Bryan Cheng 
> wrote:
>
>> Hi Luke,
>>
>> I've never found nodetool status' load to be useful beyond a general
>> indicator.
>>
>> You should expect some small skew, as this will depend on your current
>> compaction status, tombstones, etc. IIRC repair will not provide
>> consistency of intermediate states nor will it remove tombstones, it only
>> guarantees consistency in the final state. This means, in the case of
>> dropped hints or mutations, you will see differences in intermediate
>> states, and therefore storage footrpint, even in fully repaired nodes. This
>> includes intermediate UPDATE operations as well.
>>
>> Your one node with sub 1GB sticks out like a sore thumb, though. Where
>> did you originate the nodetool repair from? Remember that repair will only
>> ensure consistency for ranges held by the node you're running it on. While
>> I am not sure if missing ranges are included in this, if you ran nodetool
>> repair only on a machine with partial ownership, you will need to complete
>> repairs across the ring before data will return to full consistency.
>>
>> I would query some older data using consistency = ONE on the affected
>> machine to determine if you are actually missing data.  There are a few
>> outstanding bugs in the 2.1.x  and older release families that may result
>> in tombstone creation even without deletes, for example CASSANDRA-10547,
>> which impacts updates on collections in pre-2.1.13 Cassandra.
>>
>> You can also try examining the output of nodetool ring, which will give
>> you a breakdown of tokens and their associations within your cluster.
>>
>> --Bryan
>>
>> On Tue, May 24, 2016 at 3:49 PM, kurt Greaves 
>> wrote:
>>
>>> Not necessarily considering RF is 2 so both nodes should have all
>>> partitions. Luke, are you sure the repair is succeeding? You don't have
>>> other keyspaces/duplicate data/extra data in your cassandra data directory?
>>> Also, you could try querying on the node with less data to confirm if it
>>> has the same dataset.
>>>
>>> On 24 May 2016 at 22:03, Bhuvan Rawal  wrote:
>>>
 For the other DC, it can be acceptable because partitions reside on one
 node, so, say, if you have a large partition, it may skew things a bit.
 On May 25, 2016 2:41 AM, "Luke Jolly"  wrote:

> So I guess the problem may have been with the initial addition of the
> 10.128.0.20 node because when I added it in it never synced data I
> guess?  It was at around 50 MB when it first came up and transitioned to
> "UN". After it was in I did the 1->2 replication change and tried repair
> but it didn't fix it.  From what I can tell all the data on it is stuff
> that has been written since it came up.  We never delete data ever so we
> should have zero tombstones.
>
> If I am not mistaken, only two of my nodes actually have all the data,
> 10.128.0.3 and 10.142.0.14 since they agree on the data amount. 
> 10.142.0.13
> is almost a GB lower and then of course 10.128.0.20 which is missing
> over 5 GB of data.  I tried running nodetool -local on both DCs and it
> didn't fix either one.
>
> Am I running into a bug of some kind?
>
> On Tue, May 24, 2016 at 4:06 PM Bhuvan Rawal 
> wrote:
>
>> Hi Luke,
>>
>> You mentioned that replication factor was increased from 1 to 2. In
>> that case was the node bearing ip 

Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread Paulo Motta
This is the log of the destination/rebuilding node; you need to check the
error message on the stream source node (192.168.1.140).

2016-05-25 15:22 GMT-03:00 George Sigletos :

> Hello,
>
> Here is additional stack trace from system.log:
>
> ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
> StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
> Remote peer 192.168.1.140 failed stream session.
> ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
> Streaming error occurred
> java.io.IOException: Connection timed out
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> ~[na:1.7.0_79]
> at sun.nio.ch.SocketDispatcher.write(Unknown Source) ~[na:1.7.0_79]
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
> ~[na:1.7.0_79]
> at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
> at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
> ~[na:1.7.0_79]
> at
> org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
> [apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:323)
> [apache-cassandra-2.1.13.jar:2.1.13]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
> INFO  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,625
> StreamResultFuture.java:180 - [Stream
> #2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /192.168.1.140 is
> complete
> WARN  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,627
> StreamResultFuture.java:207 - [Stream
> #2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
> ERROR [RMI TCP Connection(24)-127.0.0.1] 2016-05-24 22:44:58,628
> StorageService.java:1075 - Error while rebuilding node
> org.apache.cassandra.streaming.StreamException: Stream failed
> at
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
> ~[guava-16.0.jar:na]
> at
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
> ~[guava-16.0.jar:na]
> at
> com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
> ~[guava-16.0.jar:na]
> at
> com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
> ~[guava-16.0.jar:na]
> at
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
> ~[guava-16.0.jar:na]
> at
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:208)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:184)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:415)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.StreamSession.sessionFailed(StreamSession.java:621)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:475)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:256)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_79]
> ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:58,629
> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
> Streaming error occurred
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> ~[na:1.7.0_79]
> at sun.nio.ch.SocketDispatcher.write(Unknown Source) ~[na:1.7.0_79]
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
> ~[na:1.7.0_79]
> at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
> at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
> ~[na:1.7.0_79]
> at
> org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
> ~[apache-cassandra-2.1.13.jar:2.1.13]
> at
> 

Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread George Sigletos
Hello,

Here is additional stack trace from system.log:

ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
Remote peer 192.168.1.140 failed stream session.
ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
Streaming error occurred
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
~[na:1.7.0_79]
at sun.nio.ch.SocketDispatcher.write(Unknown Source) ~[na:1.7.0_79]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
~[na:1.7.0_79]
at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
at sun.nio.ch.SocketChannelImpl.write(Unknown Source) ~[na:1.7.0_79]
at
org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:323)
[apache-cassandra-2.1.13.jar:2.1.13]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
INFO  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,625
StreamResultFuture.java:180 - [Stream
#2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /192.168.1.140 is
complete
WARN  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,627
StreamResultFuture.java:207 - [Stream
#2c290460-20d4-11e6-930f-1b05ac77baf9] Stream failed
ERROR [RMI TCP Connection(24)-127.0.0.1] 2016-05-24 22:44:58,628
StorageService.java:1075 - Error while rebuilding node
org.apache.cassandra.streaming.StreamException: Stream failed
at
org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
~[guava-16.0.jar:na]
at
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
~[guava-16.0.jar:na]
at
com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
~[guava-16.0.jar:na]
at
com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
~[guava-16.0.jar:na]
at
com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
~[guava-16.0.jar:na]
at
org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:208)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:184)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:415)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.StreamSession.sessionFailed(StreamSession.java:621)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:475)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:256)
~[apache-cassandra-2.1.13.jar:2.1.13]
at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_79]
ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:58,629
StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
Streaming error occurred
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
~[na:1.7.0_79]
at sun.nio.ch.SocketDispatcher.write(Unknown Source) ~[na:1.7.0_79]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
~[na:1.7.0_79]
at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
at sun.nio.ch.SocketChannelImpl.write(Unknown Source) ~[na:1.7.0_79]
at
org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
~[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
[apache-cassandra-2.1.13.jar:2.1.13]
at
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
[apache-cassandra-2.1.13.jar:2.1.13]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]


On Wed, May 25, 2016 at 5:23 PM, Paulo Motta 
wrote:

> The stack trace from the rebuild command not show the root cause of 

Re: Cassandra event notification on INSERT/DELETE of records

2016-05-25 Thread Laing, Michael
You could also follow this related issue:
https://issues.apache.org/jira/browse/CASSANDRA-8844

On Wed, May 25, 2016 at 12:04 PM, Aaditya Vadnere  wrote:

> Thanks Eric and Mark, we were thinking along similar lines. But we already
> need Cassandra for regular database purpose, so instead of having both
> Kafka and Cassandra, the possibility of using Cassandra alone was explored.
>
> Another usecase where update notification can be useful is when we want to
> synchronize two or more instances of same component. Say two threads of
> component 'A' can share the same database. When a record is updated in
> database by thread 1, a notification is sent to thread 2. After that thread
> 2, performs a read.
>
> I think this also is an anti-pattern.
>
> Regards,
> Aaditya
>
> On Tue, May 24, 2016 at 12:45 PM, Mark Reddy 
> wrote:
>
>> +1 to what Eric said, a queue is a classic C* anti-pattern. Something
>> like Kafka or RabbitMQ might fit your use case better.
>>
>>
>> Mark
>>
>> On 24 May 2016 at 18:03, Eric Stevens  wrote:
>>
>>> It sounds like you're trying to build a queue in Cassandra, which is one
>>> of the classic anti-pattern use cases for Cassandra.
>>>
>>> You may be able to do something clever with triggers, but I highly
>>> recommend you look at purpose-built queuing software such as Kafka to solve
>>> this instead.
>>>
>>> On Tue, May 24, 2016 at 9:49 AM Aaditya Vadnere 
>>> wrote:
>>>
 Hi experts,

 We are evaluating Cassandra as messaging infrastructure for a project.

 In our workflow Cassandra database will be synchronized across two
 nodes, a component will INSERT/UPDATE records on one node and another
 component (who has registered for the specific table) on second node will
 get notified of record change.

 The second component will then try to read the database to find out the
 specific message.

 Is it possible for Cassandra to support such a workflow? Basically, is
 there a way for Cassandra to generate a notification anytime schema changes
 (so we can set processes to listen for schema changes). As I understand,
 polling the database periodically or database triggers might work but they
 are costly operations.


 --
 Aaditya Vadnere

>>>
>>
>
>
> --
> Aaditya Vadnere
>


Re: Cassandra event notification on INSERT/DELETE of records

2016-05-25 Thread Aaditya Vadnere
Thanks Eric and Mark, we were thinking along similar lines. But we already
need Cassandra for regular database purposes, so instead of having both
Kafka and Cassandra, the possibility of using Cassandra alone was explored.

Another use case where update notifications can be useful is when we want to
synchronize two or more instances of the same component. Say two threads of
component 'A' share the same database. When a record is updated in the
database by thread 1, a notification is sent to thread 2. After that,
thread 2 performs a read.

I think this also is an anti-pattern.

Regards,
Aaditya

On Tue, May 24, 2016 at 12:45 PM, Mark Reddy  wrote:

> +1 to what Eric said, a queue is a classic C* anti-pattern. Something like
> Kafka or RabbitMQ might fit your use case better.
>
>
> Mark
>
> On 24 May 2016 at 18:03, Eric Stevens  wrote:
>
>> It sounds like you're trying to build a queue in Cassandra, which is one
>> of the classic anti-pattern use cases for Cassandra.
>>
>> You may be able to do something clever with triggers, but I highly
>> recommend you look at purpose-built queuing software such as Kafka to solve
>> this instead.
>>
>> On Tue, May 24, 2016 at 9:49 AM Aaditya Vadnere  wrote:
>>
>>> Hi experts,
>>>
>>> We are evaluating Cassandra as messaging infrastructure for a project.
>>>
>>> In our workflow Cassandra database will be synchronized across two
>>> nodes, a component will INSERT/UPDATE records on one node and another
>>> component (who has registered for the specific table) on second node will
>>> get notified of record change.
>>>
>>> The second component will then try to read the database to find out the
>>> specific message.
>>>
>>> Is it possible for Cassandra to support such workflow? Basically, is
>>> there a way for Cassandra to generate a notification anytime schema changes
>>> (so we can set processes to listen for schema changes). As I understand,
>>> polling the database periodically or database triggers might work but they
>>> are costly operations.
>>>
>>>
>>> --
>>> Aaditya Vadnere
>>>
>>
>


-- 
Aaditya Vadnere


Cassandra

2016-05-25 Thread bastien dine
Hi,

I'm running a 3-node Cassandra 2.1.x cluster. Each node has 8 vCPU and 30
GB of RAM.
Replication factor = 3 for my keyspace.

Recently, I'm using the Java Driver (within Storm) to read/write data and
I've encountered a problem:

All of my cluster nodes are successfully discovered by the driver.

When putting a pretty heavy load on my cluster (1k reads & 3k writes per
second) it appears that one of my nodes is getting overwhelmed... a lot...
while the other nodes are OK:
Node 1: load 17
Node 2: load 3
Node 3: load 3

RAM usage is not a problem at all.

On node 1, in system.log, there is a lot of StatusLogger output:

INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.range_xfers                 0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.compactions_in_progress     0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.peers                       0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.schema_keyspaces            0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.schema_usertypes            0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.local                       0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.sstable_activity            632,27087
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.schema_columns              0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - system.batchlog                    0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - keyspace1.Counter3                 0,0
INFO  [Service Thread] 2016-05-25 15:35:04,530 StatusLogger.java:115 - keyspace1.standard1                0,0
INFO  [Service Thread] 2016-05-25 15:35:04,531 StatusLogger.java:115 - keyspace1.counter1                 0,0
INFO  [Service Thread] 2016-05-25 15:35:04,531 StatusLogger.java:115 - system_traces.sessions             0,0
INFO  [Service Thread] 2016-05-25 15:35:04,532 StatusLogger.java:115 - system_traces.events               0,0
INFO  [Service Thread] 2016-05-25 15:39:04,438 GCInspector.java:258 - ParNew GC in 432ms.  CMS Old Gen: 2035104888 -> 2040946040; Par Eden Space: 671088640 -> 0; Par Survivor Space: 83884256 -> 83872168
INFO  [Service Thread] 2016-05-25 15:39:04,438 StatusLogger.java:51 - Pool Name                    Active   Pending      Completed   Blocked  All Time Blocked
INFO  [Service Thread] 2016-05-25 15:39:04,439 StatusLogger.java:66 - MutationStage                     0         0       12598562         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,439 StatusLogger.java:66 - RequestResponseStage              0         0        9124551         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,440 StatusLogger.java:66 - ReadRepairStage                   0         0         286466         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,440 StatusLogger.java:66 - CounterMutationStage              0         0              0         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,440 StatusLogger.java:66 - ReadStage                         0         0        3090180         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,440 StatusLogger.java:66 - MiscStage                         0         0              0         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,440 StatusLogger.java:66 - HintedHandoff                     0         0             14         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,440 StatusLogger.java:66 - GossipStage                       0         0          99815         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,440 StatusLogger.java:66 - CacheCleanupExecutor              0         0              0         0                 0
INFO  [Service Thread] 2016-05-25 15:39:04,440 StatusLogger.java:66 - InternalResponseStage             0         0              0         0                 0

There are more GCInspector messages like this:
INFO  [Service Thread] 2016-05-25 15:35:04,524 GCInspector.java:258 -
ParNew GC in 266ms.  CMS Old Gen: 2029659880 -> 2035104888; Par Eden Space:
671088640 -> 0; Par Survivor Space: 83885104 -> 83884256

All of my nodes are configured in exactly the same way.

With the cassandra-stress tool, I was able to hit 40k to 75k operations per
second just fine.

Can someone help me debug this problem?

Is there a problem with the Java Driver? Is the load balancing not working?
How can I list the connections on a node?

Regards,
Bastien


Re: Internal Handling of Map Updates

2016-05-25 Thread Tyler Hobbs
If you replace an entire collection, whether it's a map, set, or list, a
range tombstone will be inserted followed by the new collection.  If you
only update a single element, no tombstones are generated.
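As a sketch with the Python driver (keyspace, table, and values are made up
for illustration):

from cassandra.cluster import Cluster

# Assumed schema: CREATE TABLE ks.t (id int PRIMARY KEY, m map<text, text>)
session = Cluster(['127.0.0.1']).connect('ks')

# Replacing the whole map: a range tombstone is written over the old
# collection before the new entries are inserted.
session.execute("UPDATE t SET m = {'a': '1', 'b': '2'} WHERE id = 1")

# Updating (or adding) a single element: just a new cell, no tombstone.
session.execute("UPDATE t SET m['a'] = '3' WHERE id = 1")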

On Wed, May 25, 2016 at 9:48 AM, Matthias Niehoff <
matthias.nieh...@codecentric.de> wrote:

> Hi,
>
> we have a table with a map field. We do not delete anything in this table,
> but we do update the values, including the map field (most of the time a
> new value for an existing key, rarely adding new keys). We now encounter a
> huge amount of tombstones for this table.
>
> We used sstable2json to take a look into the sstables:
>
>
> {"key": "Betty_StoreCatalogLines:7",
>
>  "cells": [["276-1-6MPQ0RI-276110031802001001:","",1463820040628001],
>
>["276-1-6MPQ0RI-276110031802001001:last_modified","2016-05-21 
> 08:40Z",1463820040628001],
>
>
> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463040069753999,"t",1463040069],
>
>
> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463120708590002,"t",1463120708],
>
>
> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463145700735007,"t",1463145700],
>
>
> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463157430862000,"t",1463157430],
>
>
> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463164595291002,"t",1463164595],
>
> . . .
>
>   
> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463820040628000,"t",1463820040],
>
>
> ["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:62657474795f73746f72655f636174616c6f675f6c696e6573","0154d265c6b0",1463820040628001],
>
>
> ["276-1-6MPQ0RI-276110031802001001:payload","{\"payload\":{\"Article 
> Id\":\"276110031802001001\",\"Row Id\":\"1-6MPQ0RI\",\"Article 
> #\":\"31802001001\",\"Quote Item Id\":\"1-6MPWPVC\",\"Country 
> Code\":\"276\"}}",1463820040628001]
>
>
>
> Looking at the SSTables, it seems like every update of a value in a map
> breaks down to a delete and an insert in the corresponding SSTable (see all
> the tombstone flags "t" in the extract of sstable2json above).
>
> We are using Cassandra 2.2.5.
>
> Can you confirm this behavior?
>
> Thanks!
> --
> Matthias Niehoff | IT-Consultant | Agile Software Factory  | Consulting
> codecentric AG | Zeppelinstr 2 | 76185 Karlsruhe | Deutschland
> tel: +49 (0) 721.9595-681 | fax: +49 (0) 721.9595-666 | mobil: +49 (0)
> 172.1702676
> www.codecentric.de | blog.codecentric.de | www.meettheexperts.de |
> www.more4fi.de
>
>



-- 
Tyler Hobbs
DataStax 


Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread Paulo Motta
The stack trace from the rebuild command does not show the root cause of the
rebuild stream error. Can you check the system.log for ERROR entries during
streaming and paste them here?
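If it helps, a quick way to pull those out (a Python sketch; the log path
below is the package-install default, adjust for your setup):

with open('/var/log/cassandra/system.log') as log:
    for line in log:
        # Streaming failures are logged at ERROR by the streaming classes.
        if 'ERROR' in line and 'tream' in line:
            print(line.rstrip())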


Internal Handling of Map Updates

2016-05-25 Thread Matthias Niehoff
Hi,

we have a table with a map field. We do not delete anything in this table,
but we do update the values, including the map field (most of the time a
new value for an existing key, rarely adding new keys). We now encounter a
huge amount of tombstones for this table.

We used sstable2json to take a look into the sstables:


{"key": "Betty_StoreCatalogLines:7",

 "cells": [["276-1-6MPQ0RI-276110031802001001:","",1463820040628001],

   ["276-1-6MPQ0RI-276110031802001001:last_modified","2016-05-21
08:40Z",1463820040628001],

   
["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463040069753999,"t",1463040069],

   
["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463120708590002,"t",1463120708],

   
["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463145700735007,"t",1463145700],

   
["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463157430862000,"t",1463157430],

   
["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463164595291002,"t",1463164595],

. . .

  
["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:_","276-1-6MPQ0RI-276110031802001001:last_modified_by_source:!",1463820040628000,"t",1463820040],

   
["276-1-6MPQ0RI-276110031802001001:last_modified_by_source:62657474795f73746f72655f636174616c6f675f6c696e6573","0154d265c6b0",1463820040628001],

   ["276-1-6MPQ0RI-276110031802001001:payload","{\"payload\":{\"Article
Id\":\"276110031802001001\",\"Row Id\":\"1-6MPQ0RI\",\"Article
#\":\"31802001001\",\"Quote Item Id\":\"1-6MPWPVC\",\"Country
Code\":\"276\"}}",1463820040628001]



Looking at the SSTables, it seems like every update of a value in a map
breaks down to a delete and an insert in the corresponding SSTable (see all
the tombstone flags "t" in the extract of sstable2json above).

We are using Cassandra 2.2.5.

Can you confirm this behavior?

Thanks!
-- 
Matthias Niehoff | IT-Consultant | Agile Software Factory  | Consulting
codecentric AG | Zeppelinstr 2 | 76185 Karlsruhe | Deutschland
tel: +49 (0) 721.9595-681 | fax: +49 (0) 721.9595-666 | mobil: +49 (0)
172.1702676
www.codecentric.de | blog.codecentric.de | www.meettheexperts.de |
www.more4fi.de



RE: UUID coming as int while using SPARK SQL

2016-05-25 Thread Rajesh Radhakrishnan

Found it!
i.e. how to convert or represent the C* uuid when reading it back via Spark SQL.

uuid.UUID(int=idval)

So, putting it into context:

...
import uuid
...
sparkSQL = "SELECT distinct id, dept, workflow FROM samd WHERE workflow='testWK'"
new_df = sqlContext.sql(sparkSQL)
results = new_df.collect()
for row in results:
    print "dept=", row.dept
    print "wk=", row.workflow
    print "id=", row.id.int
    print "uuid=", uuid.UUID(int=row.id.int)
...
The Python code above prints the following:
dept=blah
wk=testWK
id=293946894141093607334963674332192894528
uuid= 9547v26c-f528-12e5-da8b-001a4q3dac10
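The conversion works because a UUID is just a 128-bit integer; a minimal
standalone check:

import uuid

u = uuid.uuid4()
i = u.int                      # the UUID as a 128-bit integer
assert uuid.UUID(int=i) == u   # round-trips back to the same UUID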

From: Laing, Michael [michael.la...@nytimes.com]
Sent: 24 May 2016 12:23
To: user@cassandra.apache.org
Subject: Re: UUID coming as int while using SPARK SQL

Yes - a UUID is just a 128-bit value. You can view it using any base or
format.

If you are looking at the same row, you should see the same 128-bit value;
otherwise my theory is incorrect :)

Cheers,
ml

On Tue, May 24, 2016 at 6:57 AM, Rajesh Radhakrishnan wrote:
Hi Michael,

Thank you for the quick reply.
So you are suggesting converting this int value (the UUID comes back as an
int via Spark SQL) to hex?


And the selection is just an example to highlight the UUID conversion issue.
So in Cassandra it should be:
SELECT id, workflow FROM sam WHERE dept='blah';

And in Spark with Python:
SELECT distinct id, dept, workflow FROM samd WHERE dept='blah';


Best,
Rajesh R



From: Laing, Michael [michael.la...@nytimes.com]
Sent: 24 May 2016 11:40
To: user@cassandra.apache.org
Subject: Re: UUID coming as int while using SPARK SQL

Try converting that int from decimal to hex and inserting dashes in the 
appropriate spots - or go the other way.
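In Python that is a one-liner either way, e.g. with the int from your mail:

import uuid

i = 293946894141093607334963674332192894528
print('{:032x}'.format(i))  # the same value as 32 hex digits, no dashes
print(uuid.UUID(int=i))     # uuid.UUID inserts the dashes for you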

Also, you are looking at different rows, based upon your selection criteria...

ml

On Tue, May 24, 2016 at 6:23 AM, Rajesh Radhakrishnan wrote:
Hi,


I have a Cassandra keyspace, but reading the data (especially UUID columns)
via Spark SQL using Python does not return the correct value.

Cassandra:
--
My table 'SAM' is described below:

CREATE TABLE ks.sam (id uuid, dept text, workflow text, type double,
PRIMARY KEY (id, dept))

SELECT id, workflow FROM sam WHERE dept='blah';

The example CQL above gives me the following:
id   | workflow
--+
 9547v26c-f528-12e5-da8b-001a4q3dac10 |   testWK


Spark/Python:
--
from pyspark import SparkConf
from pyspark.sql import SQLContext
import pyspark_cassandra
from pyspark_cassandra import CassandraSparkContext


conf = SparkConf() \
    .set("spark.cassandra.connection.host", IP_ADDRESS) \
    .set("spark.cassandra.connection.native.port", PORT_NUMBER)
sparkContext = CassandraSparkContext(conf=conf)
sqlContext = SQLContext(sparkContext)

samTable = sparkContext.cassandraTable("ks", "sam").select('id', 'dept', 'workflow')
samTable.cache()

samdf.registerTempTable("samd")

sparkSQL = "SELECT distinct id, dept, workflow FROM samd WHERE workflow='testWK'"
new_df = sqlContext.sql(sparkSQL)
results = new_df.collect()
for row in results:
    print "dept=", row.dept
    print "wk=", row.workflow
    print "id=", row.id
...
The Python code above prints the following:
dept=Biology
wk=testWK
id=293946894141093607334963674332192894528


You can see here that the id (uuid), whose correct value in Cassandra is
'9547v26c-f528-12e5-da8b-001a4q3dac10', comes back via Spark as the int
'29394689414109360733496367433219289452'.
What am I doing wrong here? How do I get the correct UUID value from

Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread George Sigletos
Hi Mike,

Yes I am using NetworkTopologyStrategy. I checked
cassandra-rackdc.properties on the new node:
dc=DCamazon-1
rack=RACamazon-1

I also checked the JIRA link you sent me. My network topology seems
correct: I have 4 nodes in DC1 and 1 node in DCamazon-1, and I can verify
that when running "nodetool status".

Now I am running a full repair on the Amazon node. I have given up on
rebuilding.

Kind regards,
George



On Wed, May 25, 2016 at 8:50 AM, Mike Yeap  wrote:

> Hi George, are you using NetworkTopologyStrategy as the replication
> strategy for your keyspace? If yes, can you check the
> cassandra-rackdc.properties of this new node?
>
> https://issues.apache.org/jira/browse/CASSANDRA-8279
>
>
> Regards,
> Mike Yeap
>
On Wed, May 25, 2016 at 2:31 PM, George Sigletos wrote:
>
>> I am getting this error repeatedly while I am trying to add a new DC
>> consisting of one node in AWS to my existing cluster. I have tried 5 times
>> already. Running Cassandra 2.1.13
>>
>> I have also set:
>> streaming_socket_timeout_in_ms: 360
>> in all of my nodes
>>
>> Does anybody have any idea how this can be fixed? Thanks in advance
>>
>> Kind regards,
>> George
>>
>> P.S.
>> The complete stack trace:
>> -- StackTrace --
>> java.lang.RuntimeException: Error while rebuilding node: Stream failed
>> at
>> org.apache.cassandra.service.StorageService.rebuild(StorageService.java:1076)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>> at java.lang.reflect.Method.invoke(Unknown Source)
>> at sun.reflect.misc.Trampoline.invoke(Unknown Source)
>> at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>> at java.lang.reflect.Method.invoke(Unknown Source)
>> at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
>> at
>> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
>> at
>> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
>> at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown
>> Source)
>> at com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
>> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
>> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
>> at
>> javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown Source)
>> at
>> javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown Source)
>> at
>> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown
>> Source)
>> at
>> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown
>> Source)
>> at javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown
>> Source)
>> at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>> at java.lang.reflect.Method.invoke(Unknown Source)
>> at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
>> at sun.rmi.transport.Transport$2.run(Unknown Source)
>> at sun.rmi.transport.Transport$2.run(Unknown Source)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at sun.rmi.transport.Transport.serviceCall(Unknown Source)
>> at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown
>> Source)
>> at
>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
>> at
>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(Unknown
>> Source)
>> at
>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(Unknown Source)
>> at
>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(Unknown Source)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at
>> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
>> Source)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
>> Source)
>> at java.lang.Thread.run(Unknown Source)
>>
>
>


Re: Error while rebuilding a node: Stream failed

2016-05-25 Thread Mike Yeap
Hi George, are you using NetworkTopologyStrategy as the replication
strategy for your keyspace? If yes, can you check the
cassandra-rackdc.properties of this new node?

https://issues.apache.org/jira/browse/CASSANDRA-8279
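
You can also sanity-check the DC and rack each node is gossiping from the
client side; a sketch with the Python driver (the contact point is a
placeholder):

from cassandra.cluster import Cluster

cluster = Cluster(['10.0.0.1'])
cluster.connect()
# The metadata is populated from gossip via the driver's control connection.
for host in cluster.metadata.all_hosts():
    print("%s dc=%s rack=%s" % (host.address, host.datacenter, host.rack))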


Regards,
Mike Yeap

On Wed, May 25, 2016 at 2:31 PM, George Sigletos wrote:

> I am getting this error repeatedly while I am trying to add a new DC
> consisting of one node in AWS to my existing cluster. I have tried 5 times
> already. Running Cassandra 2.1.13
>
> I have also set:
> streaming_socket_timeout_in_ms: 360
> in all of my nodes
>
> Does anybody have any idea how this can be fixed? Thanks in advance
>
> Kind regards,
> George
>
> P.S.
> The complete stack trace:
> -- StackTrace --
> java.lang.RuntimeException: Error while rebuilding node: Stream failed
> at
> org.apache.cassandra.service.StorageService.rebuild(StorageService.java:1076)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.reflect.misc.Trampoline.invoke(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
> at
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
> at
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
> at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown
> Source)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
> at
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown Source)
> at
> javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown Source)
> at
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown
> Source)
> at
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown
> Source)
> at javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown
> Source)
> at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
> at sun.rmi.transport.Transport$2.run(Unknown Source)
> at sun.rmi.transport.Transport$2.run(Unknown Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Unknown Source)
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown
> Source)
> at
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
> at
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(Unknown
> Source)
> at
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(Unknown Source)
> at
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(Unknown Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
> Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
> Source)
> at java.lang.Thread.run(Unknown Source)
>


Re: Increasing replication factor and repair doesn't seem to work

2016-05-25 Thread Mike Yeap
Hi Luke, I've encountered similar problem before, could you please advise
on following?

1) When you added 10.128.0.20, what were the seeds defined in cassandra.yaml?

2) When you added 10.128.0.20, were the data and cache directories in
10.128.0.20 empty?

   - /var/lib/cassandra/data
   - /var/lib/cassandra/saved_caches

3) If you do a compaction in 10.128.0.3, what is the size shown in the "Load"
column in "nodetool status <keyspace>"?

4) When you did the full repair, did you use "nodetool repair" or "nodetool
repair -full"? I'm asking because incremental repair is the default for
Cassandra 2.2 and later.


Regards,
Mike Yeap

On Wed, May 25, 2016 at 8:01 AM, Bryan Cheng  wrote:

> Hi Luke,
>
> I've never found nodetool status' load to be useful beyond a general
> indicator.
>
> You should expect some small skew, as this will depend on your current
> compaction status, tombstones, etc. IIRC repair will not provide
> consistency of intermediate states, nor will it remove tombstones; it only
> guarantees consistency in the final state. This means, in the case of
> dropped hints or mutations, you will see differences in intermediate
> states, and therefore storage footprint, even in fully repaired nodes. This
> includes intermediate UPDATE operations as well.
>
> Your one node with sub 1GB sticks out like a sore thumb, though. Where did
> you originate the nodetool repair from? Remember that repair will only
> ensure consistency for ranges held by the node you're running it on. While
> I am not sure if missing ranges are included in this, if you ran nodetool
> repair only on a machine with partial ownership, you will need to complete
> repairs across the ring before data will return to full consistency.
>
> I would query some older data using consistency = ONE on the affected
> machine to determine if you are actually missing data.  There are a few
> outstanding bugs in the 2.1.x  and older release families that may result
> in tombstone creation even without deletes, for example CASSANDRA-10547,
> which impacts updates on collections in pre-2.1.13 Cassandra.
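>
> A sketch of that check with the Python driver (address, keyspace, table,
> and key are placeholders; the whitelist policy pins the coordinator to the
> suspect node):
>
> from cassandra import ConsistencyLevel
> from cassandra.cluster import Cluster
> from cassandra.policies import WhiteListRoundRobinPolicy
> from cassandra.query import SimpleStatement
>
> # Talk only to the node suspected of missing data.
> cluster = Cluster(['10.128.0.20'],
>     load_balancing_policy=WhiteListRoundRobinPolicy(['10.128.0.20']))
> session = cluster.connect('my_keyspace')
>
> # Read old data at CL ONE so another replica cannot mask a missing row.
> stmt = SimpleStatement("SELECT * FROM my_table WHERE id = %s",
>                        consistency_level=ConsistencyLevel.ONE)
> for row in session.execute(stmt, [42]):
>     print(row)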
>
> You can also try examining the output of nodetool ring, which will give
> you a breakdown of tokens and their associations within your cluster.
>
> --Bryan
>
> On Tue, May 24, 2016 at 3:49 PM, kurt Greaves wrote:
>
>> Not necessarily, considering RF is 2, so both nodes should have all
>> partitions. Luke, are you sure the repair is succeeding? You don't have
>> other keyspaces/duplicate data/extra data in your cassandra data directory?
>> Also, you could try querying the node with less data to confirm whether it
>> has the same dataset.
>>
>> On 24 May 2016 at 22:03, Bhuvan Rawal  wrote:
>>
>>> For the other DC, it can be acceptable because partitions reside on one
>>> node, so, say, if you have a large partition, it may skew things a bit.
>>> On May 25, 2016 2:41 AM, "Luke Jolly"  wrote:
>>>
So I guess the problem may have been with the initial addition of the
 10.128.0.20 node, because when I added it, it never synced data, I guess?
 It was at around 50 MB when it first came up and transitioned to "UN".
 After it was in, I did the 1->2 replication change and tried repair, but
 it didn't fix it.  From what I can tell, all the data on it is data that
 has been written since it came up.  We never delete data, ever, so we
 should have zero tombstones.

If I am not mistaken, only two of my nodes actually have all the data,
 10.128.0.3 and 10.142.0.14, since they agree on the data amount. 10.142.0.13
 is almost a GB lower, and then of course 10.128.0.20, which is missing over
 5 GB of data.  I tried running nodetool repair -local on both DCs and it
 didn't fix either one.

 Am I running into a bug of some kind?

On Tue, May 24, 2016 at 4:06 PM, Bhuvan Rawal wrote:

> Hi Luke,
>
> You mentioned that the replication factor was increased from 1 to 2. In
> that case, was the node bearing IP 10.128.0.20 carrying around 3 GB of data
> earlier?
>
> You can run nodetool repair with the -local option to initiate a repair of
> the local datacenter, gce-us-central1.
>
> Also, you may suspect that if a lot of data was deleted while the node was
> down, it may have a lot of tombstones which do not need to be replicated to
> the other node. In order to verify this, you can issue a select count(*)
> query on the column families (with the amount of data you have it should
> not be an issue) with tracing on and with consistency local_all, by
> connecting to either 10.128.0.3 or 10.128.0.20, and store the output in a
> file. It will give you a fair amount of idea about how many deleted cells
> the nodes have. I tried searching for a reference on whether tombstones are
> moved around during repair, but I didn't find evidence of it. However I see
> no reason to because if the

Error while rebuilding a node: Stream failed

2016-05-25 Thread George Sigletos
I am getting this error repeatedly while I am trying to add a new DC
consisting of one node in AWS to my existing cluster. I have tried 5 times
already. Running Cassandra 2.1.13

I have also set:
streaming_socket_timeout_in_ms: 360
in all of my nodes

Does anybody have any idea how this can be fixed? Thanks in advance

Kind regards,
George

P.S.
The complete stack trace:
-- StackTrace --
java.lang.RuntimeException: Error while rebuilding node: Stream failed
at
org.apache.cassandra.service.StorageService.rebuild(StorageService.java:1076)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sun.reflect.misc.Trampoline.invoke(Unknown Source)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
at
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
at
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
at com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
at
javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown Source)
at javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown
Source)
at
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown
Source)
at
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown
Source)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown
Source)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
at sun.rmi.transport.Transport$2.run(Unknown Source)
at sun.rmi.transport.Transport$2.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(Unknown
Source)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(Unknown Source)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown
Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
Source)
at java.lang.Thread.run(Unknown Source)