Re: Bootstrapping is failing

2020-05-07 Thread Surbhi Gupta
When we start the node, it begins bootstrapping automatically and
re-streams all of the data again. It is not resuming.

On Thu, May 7, 2020 at 4:47 PM Adam Scott  wrote:

> I think you want to run `nodetool bootstrap resume` (
> https://cassandra.apache.org/doc/latest/tools/nodetool/bootstrap.html)
> to pick up where it last left off. Sorry for the late reply.
>
>
> On Thu, May 7, 2020 at 2:22 PM Surbhi Gupta 
> wrote:
>
>> So after a failed bootstrap, if we start Cassandra again on the new
>> node, will it resume the bootstrap or start over?
>>
>> On Thu, 7 May 2020 at 13:32, Adam Scott  wrote:
>>
>>> I recommend it on all nodes.  This will eliminate that as a source of
>>> trouble further on down the road.
>>>
>>>
>>> On Thu, May 7, 2020 at 1:30 PM Surbhi Gupta 
>>> wrote:
>>>
streaming_socket_timeout_in_ms is 24 hours.
  So should the TCP settings be changed on the new bootstrapping node, or on all
nodes?


 On Thu, 7 May 2020 at 13:23, Adam Scott  wrote:

>
> Edit /etc/sysctl.conf:
>
> net.ipv4.tcp_keepalive_time=60
> net.ipv4.tcp_keepalive_probes=3
> net.ipv4.tcp_keepalive_intvl=10
>
> then run sysctl -p to cause the kernel to reload the settings.
>
> 5 minutes (300 seconds) is probably too long.
>
> On Thu, May 7, 2020 at 1:09 PM Surbhi Gupta 
> wrote:
>
>> [root@abc cassandra]# cat /proc/sys/net/ipv4/tcp_keepalive_time
>>
>> 300
>>
>> [root@abc cassandra]# cat /proc/sys/net/ipv4/tcp_keepalive_intvl
>>
>> 30
>>
>> [root@abc cassandra]# cat /proc/sys/net/ipv4/tcp_keepalive_probes
>>
>> 9
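
[Editorial aside, not part of the original thread: the worst case for the kernel
to declare an idle peer dead is tcp_keepalive_time + tcp_keepalive_probes *
tcp_keepalive_intvl; a small sketch of that arithmetic:]

```python
def detection_seconds(keepalive_time: int, intvl: int, probes: int) -> int:
    """Worst-case seconds of silence before the kernel drops an idle TCP peer."""
    return keepalive_time + probes * intvl

# Values reported above: 300 s idle, then 9 probes 30 s apart
print(detection_seconds(300, 30, 9))  # 570

# Values suggested later in the thread: 60 s idle, then 3 probes 10 s apart
print(detection_seconds(60, 10, 3))   # 90
```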
>>
>> On Thu, 7 May 2020 at 12:32, Adam Scott 
>> wrote:
>>
>>> Maybe a firewall killing a connection?
>>>
>>> What does the following show?
>>> cat /proc/sys/net/ipv4/tcp_keepalive_time
>>> cat /proc/sys/net/ipv4/tcp_keepalive_intvl
>>> cat /proc/sys/net/ipv4/tcp_keepalive_probes
>>>
>>> On Thu, May 7, 2020 at 10:31 AM Surbhi Gupta <
>>> surbhi.gupt...@gmail.com> wrote:
>>>
 Hi,

We are trying to expand a datacenter by adding nodes, but when a node
 is bootstrapping it gets about halfway through and then fails with the
 error below. We increased stream throughput from 200 to 400 for the
 second attempt, but it still failed. We are on 3.11.0, using G1GC
 with a 31 GB heap.

 ERROR [MessagingService-Incoming-/10.X.X.X] 2020-05-07 09:42:38,933
 CassandraDaemon.java:228 - Exception in thread
 Thread[MessagingService-Incoming-/10.X.X.X,main]

 java.io.IOError: java.io.EOFException: Stream ended prematurely

 at
 org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:227)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:215)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:839)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:814)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:425)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:434)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:371)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.net.MessageIn.read(MessageIn.java:123)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:192)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:180)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 at
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:94)
 ~[apache-cassandra-3.11.0.jar:3.11.0]

 Caused by: java.io.EOFException: Stream ended prematurely

 

Re: Bootstrapping is failing

2020-05-07 Thread Adam Scott
I think you want to run `nodetool bootstrap resume` (
https://cassandra.apache.org/doc/latest/tools/nodetool/bootstrap.html)  to
pick up where it last left off. Sorry for the late reply.
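
[Editorial illustration, not part of the original thread: a hedged sketch of
what that resume flow looks like from the shell on the joining node. The
commands are standard nodetool subcommands; flags and output are assumed.]

```shell
# On the node whose bootstrap failed (assumes default local JMX access):
nodetool netstats | head -n 1   # shows "Mode: JOINING" while bootstrap is incomplete
nodetool bootstrap resume       # retries only the streaming sessions that failed
```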



Re: Bootstrapping is failing

2020-05-07 Thread Surbhi Gupta
So after a failed bootstrap, if we start Cassandra again on the new node,
will it resume the bootstrap or start over?


Re: Bootstrapping is failing

2020-05-07 Thread Adam Scott
I recommend it on all nodes.  This will eliminate that as a source of
trouble further on down the road.



Re: Bootstrapping is failing

2020-05-07 Thread Surbhi Gupta
streaming_socket_timeout_in_ms is 24 hours.
  So should the TCP settings be changed on the new bootstrapping node, or on all
nodes?
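
[Editorial aside, not part of the original message: this yaml option takes
milliseconds, so "24 hours" corresponds to the value computed below.]

```python
# streaming_socket_timeout_in_ms expects milliseconds; 24 hours works out to:
HOURS = 24
timeout_ms = HOURS * 60 * 60 * 1000
print(timeout_ms)  # 86400000
```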



Re: Bootstrapping is failing

2020-05-07 Thread Adam Scott
Edit /etc/sysctl.conf:

net.ipv4.tcp_keepalive_time=60
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_intvl=10

then run sysctl -p to cause the kernel to reload the settings.

5 minutes (300 seconds) is probably too long.


RE: Bootstrapping is failing

2020-05-07 Thread ZAIDI, ASAD
Check whether the [streaming_socket_timeout_in_ms] setting in cassandra.yaml
is large enough that streaming is not interrupted before it completes.
~Asad




From: Surbhi Gupta [mailto:surbhi.gupt...@gmail.com]
Sent: Thursday, May 7, 2020 3:09 PM
To: user@cassandra.apache.org
Subject: Re: Bootstraping is failing



Re: Bootstrapping is failing

2020-05-07 Thread Surbhi Gupta
[root@abc cassandra]# cat /proc/sys/net/ipv4/tcp_keepalive_time

300

[root@abc cassandra]# cat /proc/sys/net/ipv4/tcp_keepalive_intvl

30

[root@abc cassandra]# cat /proc/sys/net/ipv4/tcp_keepalive_probes

9

On Thu, 7 May 2020 at 12:32, Adam Scott  wrote:

> Maybe a firewall killing a connection?
>
> What does the following show?
> cat /proc/sys/net/ipv4/tcp_keepalive_time
> cat /proc/sys/net/ipv4/tcp_keepalive_intvl
> cat /proc/sys/net/ipv4/tcp_keepalive_probes
>
> On Thu, May 7, 2020 at 10:31 AM Surbhi Gupta 
> wrote:
>
>> Hi,
>>
>> We are trying to expand a datacenter and add nodes, but when a node is
>> bootstrapping it goes halfway through and then fails with the error below.
>> We increased stream throughput from 200 to 400 on our second attempt, but
>> it still failed. We are on 3.11.0, using G1GC with a 31GB heap.
>>
>> ERROR [MessagingService-Incoming-/10.X.X.X] 2020-05-07 09:42:38,933
>> CassandraDaemon.java:228 - Exception in thread
>> Thread[MessagingService-Incoming-/10.X.X.X,main]
>>
>> java.io.IOError: java.io.EOFException: Stream ended prematurely
>>
>> at
>> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:227)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:215)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:839)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:814)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:425)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:434)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:371)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at org.apache.cassandra.net.MessageIn.read(MessageIn.java:123)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:192)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:180)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:94)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> Caused by: java.io.EOFException: Stream ended prematurely
>>
>> at
>> net.jpountz.lz4.LZ4BlockInputStream.readFully(LZ4BlockInputStream.java:218)
>> ~[lz4-1.3.0.jar:na]
>>
>> at
>> net.jpountz.lz4.LZ4BlockInputStream.refill(LZ4BlockInputStream.java:150)
>> ~[lz4-1.3.0.jar:na]
>>
>> at
>> net.jpountz.lz4.LZ4BlockInputStream.read(LZ4BlockInputStream.java:117)
>> ~[lz4-1.3.0.jar:na]
>>
>> at java.io.DataInputStream.readFully(DataInputStream.java:195)
>> ~[na:1.8.0_242]
>>
>> at java.io.DataInputStream.readFully(DataInputStream.java:169)
>> ~[na:1.8.0_242]
>>
>> at
>> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:437)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.rows.Cell$Serializer.deserialize(Cell.java:245)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.rows.UnfilteredSerializer.readComplexColumn(UnfilteredSerializer.java:665)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.rows.UnfilteredSerializer.lambda$deserializeRowBody$1(UnfilteredSerializer.java:606)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.utils.btree.BTree.applyForwards(BTree.java:1242)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at org.apache.cassandra.utils.btree.BTree.apply(BTree.java:1197)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at org.apache.cassandra.db.Columns.apply(Columns.java:377)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:600)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at
>> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeOne(UnfilteredSerializer.java:475)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>
>> at

Re: Bootstraping is failing

2020-05-07 Thread Adam Scott
Maybe a firewall killing a connection?

What does the following show?
cat /proc/sys/net/ipv4/tcp_keepalive_time
cat /proc/sys/net/ipv4/tcp_keepalive_intvl
cat /proc/sys/net/ipv4/tcp_keepalive_probes
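For what it's worth, the detection window these three sysctls imply is easy to compute: a silently dropped connection is only declared dead after tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl seconds of silence. A rough sketch (the concrete values are the ones reported and suggested in this thread):

```python
# Rough model of Linux TCP keepalive dead-peer detection:
# the socket idles for tcp_keepalive_time seconds, then sends
# tcp_keepalive_probes probes spaced tcp_keepalive_intvl seconds apart
# before the connection is declared dead.

def time_to_detect_dead_peer(keepalive_time, probes, intvl):
    return keepalive_time + probes * intvl

# Values reported on the node in this thread: 300 / 9 / 30.
print(time_to_detect_dead_peer(300, 9, 30))  # 570 seconds (~9.5 minutes)

# Values suggested for /etc/sysctl.conf: 60 / 3 / 10.
print(time_to_detect_dead_peer(60, 3, 10))   # 90 seconds
```

With the reported values, a firewall that silently drops an idle streaming connection can leave it hanging for nearly ten minutes before the kernel notices.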

On Thu, May 7, 2020 at 10:31 AM Surbhi Gupta 
wrote:

> Hi,
>
> We are trying to expand a datacenter and add nodes, but when a node is
> bootstrapping it goes halfway through and then fails with the error below.
> We increased stream throughput from 200 to 400 on our second attempt, but it
> still failed. We are on 3.11.0, using G1GC with a 31GB heap.
>
> ERROR [MessagingService-Incoming-/10.X.X.X] 2020-05-07 09:42:38,933
> CassandraDaemon.java:228 - Exception in thread
> Thread[MessagingService-Incoming-/10.X.X.X,main]
>
> java.io.IOError: java.io.EOFException: Stream ended prematurely
>
> at
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:227)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:215)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:839)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:814)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:425)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:434)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:371)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at org.apache.cassandra.net.MessageIn.read(MessageIn.java:123)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:192)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:180)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:94)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> Caused by: java.io.EOFException: Stream ended prematurely
>
> at
> net.jpountz.lz4.LZ4BlockInputStream.readFully(LZ4BlockInputStream.java:218)
> ~[lz4-1.3.0.jar:na]
>
> at
> net.jpountz.lz4.LZ4BlockInputStream.refill(LZ4BlockInputStream.java:150)
> ~[lz4-1.3.0.jar:na]
>
> at
> net.jpountz.lz4.LZ4BlockInputStream.read(LZ4BlockInputStream.java:117)
> ~[lz4-1.3.0.jar:na]
>
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> ~[na:1.8.0_242]
>
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> ~[na:1.8.0_242]
>
> at
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:437)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.rows.Cell$Serializer.deserialize(Cell.java:245)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.rows.UnfilteredSerializer.readComplexColumn(UnfilteredSerializer.java:665)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.rows.UnfilteredSerializer.lambda$deserializeRowBody$1(UnfilteredSerializer.java:606)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.utils.btree.BTree.applyForwards(BTree.java:1242)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at org.apache.cassandra.utils.btree.BTree.apply(BTree.java:1197)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at org.apache.cassandra.db.Columns.apply(Columns.java:377)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:600)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeOne(UnfilteredSerializer.java:475)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:431)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> at
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>
> ... 11 common frames omitted
>
> Thanks
> Surbhi
>


Re: hints files at joining node

2020-05-07 Thread Osman Yozgatlıoğlu
Thank you all,
I assume my new node is being used as a coordinator, since all the other
nodes are under heavy write load.
I need to set aside some space for the hints folder.

Regards,
Osman

On Thu, 7 May 2020 at 08:29, Jeff Jirsa  wrote:
>
> Incremental bootstrap patch changed the logic here. A node can act as a 
> coordinator before it's fully joined. It's ... decidedly non-ideal and 
> probably needs to be changed. 
> https://issues.apache.org/jira/browse/CASSANDRA-8942
>
>
>
>
>
> On Wed, May 6, 2020 at 9:57 PM Erick Ramirez  
> wrote:
>>
>> The fact that a new node is acting as a coordinator suggests that (1) you 
>> are adding a node to a DC that is taking traffic from the app, and (2) you 
>> are likely adding the node using nodetool rebuild instead of the standard 
>> bootstrap method.
>>
>> If you are adding a node using the rebuild option (with auto_bootstrap set 
>> to false in cassandra.yaml), the node joins the cluster as a normal node 
>> except it doesn't have any data to serve read requests but it will accept 
>> writes. This isn't a recommended way of adding nodes to a DC that is 
>> actively serving requests from the app.
>>
>>> As I understand it, only coordinator nodes generate hints files. Does the
>>> cluster use this node as a coordinator before the join completes?
>>> Or is this process normal for joining and seen as repair?
>>
>>
>> The fact that the new node is storing hints is a concern because it 
>> indicates that other nodes in your cluster are unresponsive or down. You 
>> need to investigate why that is the case.
>>
>>>
>>> By the way, files not deleted after 3 hour period.
>>
>>
>> The node will collect hints for other nodes for 3 hours (default). If the 
>> replica has not come back online after 3 hours, new hints will no longer be 
>> stored, but the existing hints are not deleted; they are handed off to the 
>> respective replica when it comes back online. Cheers!
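The 3-hour window can be sketched as a simple predicate — an illustrative model only, not Cassandra's actual implementation (the real knob is the max hint window setting, 3 hours by default):

```python
from datetime import datetime, timedelta

# Illustrative model of the default hinted-handoff window: a coordinator
# only *writes* new hints while the target has been down for less than the
# window; hints already on disk are kept and replayed when the target returns.

MAX_HINT_WINDOW = timedelta(hours=3)  # default window

def should_store_hint(down_since, now):
    """True while the target replica is still inside the 3-hour window."""
    return now - down_since < MAX_HINT_WINDOW

down_since = datetime(2020, 5, 7, 9, 0)
print(should_store_hint(down_since, datetime(2020, 5, 7, 10, 0)))  # True: 1h down
print(should_store_hint(down_since, datetime(2020, 5, 7, 13, 0)))  # False: 4h down
# Hints written during the window are not deleted at the 3-hour mark;
# they are handed off once the replica comes back online.
```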

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Bootstraping is failing

2020-05-07 Thread Surbhi Gupta
Hi,

We are trying to expand a datacenter and add nodes, but when a node is
bootstrapping it goes halfway through and then fails with the error below.
We increased stream throughput from 200 to 400 on our second attempt, but it
still failed. We are on 3.11.0, using G1GC with a 31GB heap.

ERROR [MessagingService-Incoming-/10.X.X.X] 2020-05-07 09:42:38,933
CassandraDaemon.java:228 - Exception in thread
Thread[MessagingService-Incoming-/10.X.X.X,main]

java.io.IOError: java.io.EOFException: Stream ended prematurely

at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:227) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:215) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:839) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:814) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:425) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:434) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:371) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:123) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:192) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:180) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:94) ~[apache-cassandra-3.11.0.jar:3.11.0]

Caused by: java.io.EOFException: Stream ended prematurely

at net.jpountz.lz4.LZ4BlockInputStream.readFully(LZ4BlockInputStream.java:218) ~[lz4-1.3.0.jar:na]
at net.jpountz.lz4.LZ4BlockInputStream.refill(LZ4BlockInputStream.java:150) ~[lz4-1.3.0.jar:na]
at net.jpountz.lz4.LZ4BlockInputStream.read(LZ4BlockInputStream.java:117) ~[lz4-1.3.0.jar:na]
at java.io.DataInputStream.readFully(DataInputStream.java:195) ~[na:1.8.0_242]
at java.io.DataInputStream.readFully(DataInputStream.java:169) ~[na:1.8.0_242]
at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:437) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.rows.Cell$Serializer.deserialize(Cell.java:245) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.rows.UnfilteredSerializer.readComplexColumn(UnfilteredSerializer.java:665) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.rows.UnfilteredSerializer.lambda$deserializeRowBody$1(UnfilteredSerializer.java:606) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.utils.btree.BTree.applyForwards(BTree.java:1242) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.utils.btree.BTree.apply(BTree.java:1197) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.Columns.apply(Columns.java:377) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:600) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeOne(UnfilteredSerializer.java:475) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:431) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222) ~[apache-cassandra-3.11.0.jar:3.11.0]

... 11 common frames omitted

Thanks
Surbhi


Re: [EXTERNAL] Re: Adding new DC results in clients failing to connect

2020-05-07 Thread João Reis
Hi,

I don't believe that the peers entry is responsible for that exception.
Looking at the driver code, I can't even think of a scenario where that
exception would be thrown... I will run some tests in the next couple of
days to try and figure something out.

One thing that is certain from those log messages is that the tokenmap
computation is very slow (20 seconds). With 100+ nodes and 256 vnodes per
node, we should expect the token map computation to be a bit slower but 20
seconds is definitely too much. I've opened CSHARP-901 to track this. [1]

João Reis

[1] https://datastax-oss.atlassian.net/browse/CSHARP-901
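For context while CSHARP-901 is investigated: the exception itself is the generic error thrown when a keyed collection is seeded from pairs with colliding keys. A minimal Python analogue (the endpoints below are made up) of what a token-map-style construction would hit if the same host appeared twice in its input:

```python
# Minimal analogue of the C# ConcurrentDictionary failure: building a keyed
# collection from pairs whose keys collide, as could happen if the same
# endpoint appears twice in the input to a token-map computation.

def build_host_map(pairs):
    seen = {}
    for key, value in pairs:
        if key in seen:
            # Mirrors System.ArgumentException:
            # "The source argument contains duplicate keys."
            raise ValueError(f"duplicate key: {key}")
        seen[key] = value
    return seen

hosts = [("192.168.104.110", "dc1"),
         ("192.168.104.111", "dc1"),
         ("192.168.104.111", "dc2")]  # same endpoint listed twice

try:
    build_host_map(hosts)
except ValueError as e:
    print(e)  # duplicate key: 192.168.104.111
```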

Gediminas Blazys  escreveu no dia
segunda, 4/05/2020 à(s) 11:13:

> Hello again,
>
>
>
> Looking into system.peers we found that some nodes contain entries about
> themselves with null values. Not sure if this could be an issue, maybe
> someone saw something similar? This state is there before including the
> funky DC into replication.
>
> peer | data_center | host_id | preferred_ip | rack | release_version | rpc_address | schema_version | tokens
> null | null | 192.168.104.111 | null | null | null | null | null
>
>
>
> Have a wonderful day 
>
>
>
> Gediminas
>
>
>
> *From:* Gediminas Blazys 
> *Sent:* Monday, May 4, 2020 10:09
> *To:* user@cassandra.apache.org
> *Subject:* RE: [EXTERNAL] Re: Adding new DC results in clients failing to
> connect
>
>
>
> Hello,
>
>
>
> Thanks for the reply.
>
>
>
> Following your advice we took a look at system.local for seed nodes and
> compared that data with nodetool ring. Both sources contain the same tokens
> for these specific hosts. Will continue looking into system.peers.
>
>
>
> We have enabled more verbosity on the C# driver and this is the message
> that we get now:
>
>
>
> ControlConnection: 05/03/2020 14:28:42.346 +03:00 : Updating keyspaces
> metadata
>
> ControlConnection: 05/03/2020 14:28:42.377 +03:00 : Rebuilding token map
>
> ControlConnection: 05/03/2020 14:29:03.837 +03:00 : Finished building
> TokenMap for 7 keyspaces and 210 hosts. It took 19403 milliseconds.
>
> ControlConnection: 05/03/2020 14:29:03.901 +03:00 ALARMA: ENDPOINT:
> <>:9042 EXCEPTION: System.ArgumentException: The source argument
> contains duplicate keys.
>
>at
> System.Collections.Concurrent.ConcurrentDictionary`2.InitializeFromCollection(IEnumerable`1
> collection)
>
>at
> System.Collections.Concurrent.ConcurrentDictionary`2..ctor(IEnumerable`1
> collection, IEqualityComparer`1 comparer)
>
>at
> System.Collections.Concurrent.ConcurrentDictionary`2..ctor(IEnumerable`1
> collection)
>
>at Cassandra.TokenMap..ctor(TokenFactory factory, IReadOnlyDictionary`2
> tokenToHostsByKeyspace, List`1 ring, IReadOnlyDictionary`2 primaryReplicas,
> IReadOnlyDictionary`2 keyspaceTokensCache, IReadOnlyDictionary`2
> datacenters, Int32 numberOfHostsWithTokens)
>
>at Cassandra.TokenMap.Build(String partitioner, ICollection`1 hosts,
> ICollection`1 keyspaces)
>
>at Cassandra.Metadata.d__59.MoveNext()
>
> --- End of stack trace from previous location where exception was thrown
> ---
>
>at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task
> task)
>
>at
> System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task
> task)
>
>at
> System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
>
>at Cassandra.Connections.ControlConnection.d__44.MoveNext()
>
>
>
> The error occurs on Cassandra.TokenMap. We are analyzing objects that the
> driver initializes during the token map creation but we are yet to find
> that dictionary with duplicated keys.
>
> Just to note: once this new DC is added to replication, the Python driver is
> unable to establish a connection either. cqlsh, though, seems to be OK. It
> is hard to say for sure, but for now at least, this issue seems to point
> to Cassandra.
>
>
>
> Gediminas
>
>
>
> *From:* Jorge Bay Gondra 
> *Sent:* Thursday, April 30, 2020 11:45
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Adding new DC results in clients failing to
> connect
>
>
>
> Hi,
>
> You can enable logging at driver to see what's happening under the hood:
> https://docs.datastax.com/en/developer/csharp-driver/3.14/faq/#how-can-i-enable-logging-in-the-driver
> 
>
> With logging information, it should be easy to track the issue down.
>
>
>
> Can you query system.local and system.peers on a seed node / contact point
> to see if all the node list / token info is expected. You can compare 

Re: Performance drop of current Java drivers

2020-05-07 Thread Matthias Pfau
Perfect, thanks for looking into this!

Best,
Matthias

5 May 2020, 20:01 from erik.mer...@datastax.com:

> Matthias,
>
> Thanks for sharing your findings and test code. We were able to track this to 
> a regression in the underlying Netty library and already have a similar issue 
> reported here:
> https://datastax-oss.atlassian.net/browse/JAVA-2676
>
> The regression seems to be with the upgrade to Netty version 4.1.45, which 
> affects driver versions 4.5.0, 4.5.1 and 4.6.0. You can work around this 
> issue by explicitly downgrading Netty to version 4.1.43. If you are using 
> Maven, you can do this by explicitly declaring the version of Netty in your 
> dependencies as follows:
>
>         <dependency>
>             <groupId>io.netty</groupId>
>             <artifactId>netty-handler</artifactId>
>             <version>4.1.43.Final</version>
>         </dependency>
>
> We will be addressing this issue and likely release a fixed version very soon.
>
> Many thanks again,
> Erik
>
> On Mon, May 4, 2020 at 6:58 AM Matthias Pfau  
> wrote:
>
>> Hi Chris and Adam,
>>  thanks for looking into this!
>>  
>>  You can find my tests for old/new client here:
>>  https://gist.github.com/mpfau/7905cea3b73d235033e4f3319e219d15
>>  https://gist.github.com/mpfau/a62cce01b83b56afde0dbb588470bc18
>>  
>>  
>>  May 1, 2020, 16:22 by adam.holmb...@datastax.com:
>>  
>>  > Also, if you can share your schema and benchmark code, that would be a 
>> good start.
>>  >
>>  >> On Fri, May 1, 2020 at 7:09 AM Chris Splinter 
>> chris.splinter...@gmail.com wrote:
>>  >
>>  >> Hi Matthias,
>>  >>
>>  >> I have forwarded this to the developers that work on the Java driver and 
>> they will be looking into this first thing next week.
>>  >>
>>  >> Will circle back here with findings,
>>  >>
>>  >> Chris
>>  >>
>>  >>> On Fri, May 1, 2020 at 12:28 AM Erick Ramirez 
>> erick.rami...@datastax.com wrote:
>>  >>
>>  >>> Matthias, I don't have an answer to your question but I just wanted to 
>> note that I don't believe the driver contributors actively watch this 
>> mailing list (I'm happy to be corrected  ) so I'd recommend you cross-post 
>> in the Java driver channels as well. Cheers!
>>  >>>
>>  >
>>  >
>>  > -- 
>>  > Adam Holmberg
>>  > e. adam.holmb...@datastax.com
>>  > w. www.datastax.com
>>  >
>>  >
>>  
>>  
>>  -
>>  To unsubscribe, e-mail: >> user-unsubscr...@cassandra.apache.org
>>  For additional commands, e-mail: >> user-h...@cassandra.apache.org
>>  
>>
>
>
> -- 
> Erik Merkle
> e. erik.mer...@datastax.com
> w. www.datastax.com
>
>


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



How to access list with nested maps with Cassandra Java Driver

2020-05-07 Thread Andreas R.
I am having trouble getting a list containing maps with the Cassandra 
Java Driver.


For this version:

List<Map<Integer, Integer>> myList = state.getList(5, TypeTokens.mapOf(Integer.class, Integer.class));


the error is:

InvalidRequest: Error from server: code=2200 [Invalid query] message="Java 
source compilation failed: Line 3: TypeTokens cannot be resolved"


And for a version like this:

List<Map> myList = state.getList(5, Map.class);

the error is:

InvalidRequest: Error from server: code=2200 [Invalid query] message="Java 
source compilation failed: Line 3: Type mismatch: cannot convert from 
List to List>"


state is a UDT defined as:

CREATE TYPE count_min_udt (n int, m int, p bigint, hash_a list 
, hash_b list , values list>>);


Am I using them wrong? I'd appreciate some help.

Thanks, Andreas



Re: What does "PER PARTITION LIMIT" means in cql query in cassandra?

2020-05-07 Thread Pekka Enberg
Hi Chuck,

On Thu, May 7, 2020 at 10:14 AM Check Peck  wrote:

> I have a scylla table as shown below:
>

(Please note that this is the Apache Cassandra users mailing list. Of
course, the feature is the same, so let me answer it here.)


>
> cqlsh:sampleks> describe table test;
>
>
> CREATE TABLE test (
>
> client_id int,
>
> when timestamp,
>
> process_ids list,
>
> md text,
>
> PRIMARY KEY (client_id, when) ) WITH CLUSTERING ORDER BY (when
> DESC)
>
> AND bloom_filter_fp_chance = 0.01
>
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
>
> AND comment = ''
>
> AND compaction = {'class': 'TimeWindowCompactionStrategy',
> 'compaction_window_size': '1', 'compaction_window_unit': 'DAYS'}
>
> AND compression = {'sstable_compression':
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>
> AND crc_check_chance = 1.0
>
> AND dclocal_read_repair_chance = 0.1
>
> AND default_time_to_live = 0
>
> AND gc_grace_seconds = 172800
>
> AND max_index_interval = 1024
>
> AND memtable_flush_period_in_ms = 0
>
> AND min_index_interval = 128
>
> AND read_repair_chance = 0.0
>
> AND speculative_retry = '99.0PERCENTILE';
>
>
> And I see this is how we are querying it. It's been a long time I worked
> on cassandra so this “PER PARTITION LIMIT” is new thing to me (looks like
> recently added). Can someone explain what does this do with some example in
> a layman language? I couldn't find any good doc on that which explains
> easily.
>
>
> SELECT * FROM test WHERE client_id IN ? PER PARTITION LIMIT 1;
>

The "PER PARTITION LIMIT" option is documented here, although I do agree
it's a rather terse explanation:

https://cassandra.apache.org/doc/latest/cql/dml.html#limiting-results

What it does is it limits the number of returned rows *per partition*. So,
for example, with your schema, if you have the following data:

cqlsh:ks1> SELECT client_id, when FROM test;

 client_id | when
---+-
 1 | 2020-01-01 22:00:00.00+
 1 | 2019-12-31 22:00:00.00+
 2 | 2020-02-12 22:00:00.00+
 2 | 2020-02-11 22:00:00.00+
 2 | 2020-02-10 22:00:00.00+

(5 rows)

You can ask the query to limit the number of rows returned for each
"client_id". For example, with limit of "1", you'd have:

cqlsh:ks1> SELECT client_id, when FROM test PER PARTITION LIMIT 1;

 client_id | when
---+-
 1 | 2020-01-01 22:00:00.00+
 2 | 2020-02-12 22:00:00.00+

(2 rows)

Increasing limit to "2", would yield:

cqlsh:ks1> SELECT client_id, when FROM test PER PARTITION LIMIT 2;

 client_id | when
---+-
 1 | 2020-01-01 22:00:00.00+
 1 | 2019-12-31 22:00:00.00+
 2 | 2020-02-12 22:00:00.00+
 2 | 2020-02-11 22:00:00.00+

(4 rows)

Hope this helps!

Regards,

- Pekka
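The same semantics can be mimicked client-side over already-fetched rows — a toy sketch of the behaviour, not how the server implements it:

```python
from itertools import groupby

# Toy client-side model of PER PARTITION LIMIT n: keep at most n rows from
# each partition. Rows are (partition_key, clustering_key) tuples already in
# partition/clustering order, matching the example data in this thread.

def per_partition_limit(rows, n):
    out = []
    for _, group in groupby(rows, key=lambda r: r[0]):
        out.extend(list(group)[:n])
    return out

rows = [(1, "2020-01-01"), (1, "2019-12-31"),
        (2, "2020-02-12"), (2, "2020-02-11"), (2, "2020-02-10")]

print(per_partition_limit(rows, 1))  # [(1, '2020-01-01'), (2, '2020-02-12')]
print(len(per_partition_limit(rows, 2)))  # 4
```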


Re: What does "PER PARTITION LIMIT" means in cql query in cassandra?

2020-05-07 Thread Dor Laor
In your schema's case, for each client_id you will get a single 'when'
row — just one, even when the partition contains multiple rows (clustering keys).

On Thu, May 7, 2020 at 12:14 AM Check Peck  wrote:
>
> I have a scylla table as shown below:
>
>
> cqlsh:sampleks> describe table test;
>
>
> CREATE TABLE test (
>
> client_id int,
>
> when timestamp,
>
> process_ids list,
>
> md text,
>
> PRIMARY KEY (client_id, when) ) WITH CLUSTERING ORDER BY (when DESC)
>
> AND bloom_filter_fp_chance = 0.01
>
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
>
> AND comment = ''
>
> AND compaction = {'class': 'TimeWindowCompactionStrategy', 
> 'compaction_window_size': '1', 'compaction_window_unit': 'DAYS'}
>
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>
> AND crc_check_chance = 1.0
>
> AND dclocal_read_repair_chance = 0.1
>
> AND default_time_to_live = 0
>
> AND gc_grace_seconds = 172800
>
> AND max_index_interval = 1024
>
> AND memtable_flush_period_in_ms = 0
>
> AND min_index_interval = 128
>
> AND read_repair_chance = 0.0
>
> AND speculative_retry = '99.0PERCENTILE';
>
>
> And I see this is how we are querying it. It's been a long time I worked on 
> cassandra so this “PER PARTITION LIMIT” is new thing to me (looks like 
> recently added). Can someone explain what does this do with some example in a 
> layman language? I couldn't find any good doc on that which explains easily.
>
>
> SELECT * FROM test WHERE client_id IN ? PER PARTITION LIMIT 1;

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



What does "PER PARTITION LIMIT" means in cql query in cassandra?

2020-05-07 Thread Check Peck
I have a scylla table as shown below:


cqlsh:sampleks> describe table test;


CREATE TABLE test (

client_id int,

when timestamp,

process_ids list,

md text,

PRIMARY KEY (client_id, when) ) WITH CLUSTERING ORDER BY (when DESC)

AND bloom_filter_fp_chance = 0.01

AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}

AND comment = ''

AND compaction = {'class': 'TimeWindowCompactionStrategy',
'compaction_window_size': '1', 'compaction_window_unit': 'DAYS'}

AND compression = {'sstable_compression':
'org.apache.cassandra.io.compress.LZ4Compressor'}

AND crc_check_chance = 1.0

AND dclocal_read_repair_chance = 0.1

AND default_time_to_live = 0

AND gc_grace_seconds = 172800

AND max_index_interval = 1024

AND memtable_flush_period_in_ms = 0

AND min_index_interval = 128

AND read_repair_chance = 0.0

AND speculative_retry = '99.0PERCENTILE';


And this is how we are querying it. It's been a long time since I worked with
Cassandra, so "PER PARTITION LIMIT" is new to me (it looks like it was
recently added). Can someone explain what it does, with an example, in
layman's terms? I couldn't find a good doc that explains it simply.


SELECT * FROM test WHERE client_id IN ? PER PARTITION LIMIT 1;