[ https://issues.apache.org/jira/browse/CASSANDRA-10291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14953376#comment-14953376 ]

Yuki Morishita commented on CASSANDRA-10291:
--------------------------------------------

Unfortunately not so much.

All I got were:

{code}
WARN  [STREAM-IN-/192.168.220.17] 2015-09-17 08:27:32,189 StreamSession.java:644 - [Stream #a5f6b030-5d07-11e5-b554-5f8e66db7dc7] Retrying for following error
java.lang.AssertionError: null
        at org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:96) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at java.io.InputStream.read(InputStream.java:179) ~[na:1.7.0_80]
        at java.io.InputStream.skip(InputStream.java:222) ~[na:1.7.0_80]
        at org.apache.cassandra.streaming.StreamReader.drain(StreamReader.java:137) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:106) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49) [apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38) [apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:56) [apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) [apache-cassandra-2.2.1.jar:2.2.1]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
{code}

and

{code}
WARN  [STREAM-IN-/192.168.220.16] 2015-09-17 09:40:42,378 StreamSession.java:644 - [Stream #a5f6b030-5d07-11e5-b554-5f8e66db7dc7] Retrying for following error
org.apache.cassandra.serializers.MarshalException: String didn't validate.
        at org.apache.cassandra.serializers.UTF8Serializer.validate(UTF8Serializer.java:35) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.db.marshal.AbstractType.getString(AbstractType.java:91) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.cql3.ColumnIdentifier.<init>(ColumnIdentifier.java:58) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.db.composites.SimpleSparseCellNameType.fromByteBuffer(SimpleSparseCellNameType.java:83) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.db.RangeTombstone$Serializer.deserializeBody(RangeTombstone.java:357) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:84) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) ~[guava-16.0.jar:na]
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) ~[guava-16.0.jar:na]
        at org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:162) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95) ~[apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49) [apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38) [apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:56) [apache-cassandra-2.2.1.jar:2.2.1]
        at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) [apache-cassandra-2.2.1.jar:2.2.1]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
{code}

so I wonder if there was SSTable corruption on those nodes.
Though "(/192.168.220.)16 was able to restream that particular stream," so only 17 remained problematic at that point.
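
If corruption is the suspicion, one quick way to check is to scrub the affected table on the sending nodes; this is a sketch assuming the stock 2.2 tooling, with keyspace/table names as placeholders:

{code}
# online: rewrites the table's SSTables and discards any rows it cannot read
nodetool scrub <keyspace> <table>

# or offline, with the node stopped
sstablescrub <keyspace> <table>
{code}

If scrub discards rows on .17 but not on .16, that would line up with .16 being able to restream while .17 keeps failing.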



> Bootstrap hangs on adding new node 
> -----------------------------------
>
>                 Key: CASSANDRA-10291
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10291
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Debian 7 64 bit
> HotSpot JDK 1.7.0_79
> Cassandra-2.2.1 via apt-get 
> 1x Intel Quad-Core Xeon E3-1230 / 16GB / 4x1TB SATA / 3x1TB RAID0 data drive 
>            Reporter: Ara Sadoyan
>         Attachments: logs_netstats.tar.gz, nodetool.txt, system.log
>
>
> Adding a new node in a heavily loaded environment freezes the bootstrap. No errors are 
> reported in the log files. Some of the other nodes throw a "String didn't 
> validate" error, but I'm not sure that this is related.
> After restarting the node, it starts bootstrapping again and hangs after some time.
> nodetool netstats shows:
> /data/XXX/XXXX/tmp-la-1184-big-Data.db 5126078789/18345924701   bytes(27%)  received  from idx:0/192.168.220.16
> /data/XXX/XXXX/tmp-la-1233-big-Data.db 7213706459/18600941671   bytes(38%)  received  from idx:0/192.168.220.22
> /data/XXX/XXXX/tmp-la-1599-big-Data.db 8492408759/17572043398   bytes(48%)  received  from idx:0/192.168.220.12
> /data/XXX/XXXX/tmp-la-2066-big-Data.db 15773981555/18508127610  bytes(85%)  received  from idx:0/192.168.220.18
> /data/XXX/XXXX/tmp-la-211-big-Data.db 8274231066/17172754085   bytes(48%)  received  from idx:0/192.168.220.20
> but listing those files on the local FS gives "No such file or directory".
> This happens only if there is a significant amount of data. I have 1.5 TB per 
> node on a 13-node cluster; we use the STCS compaction strategy and a flat network 
> topology.
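
As a side note on the hang itself: since 2.2 a failed or stuck bootstrap can usually be resumed without re-streaming the ranges that already completed, assuming the joining node is left up in the joining state rather than restarted:

{code}
# on the joining node, after the bootstrap streaming has failed
nodetool bootstrap resume
{code}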


