[ https://issues.apache.org/jira/browse/CASSANDRA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271636#comment-13271636 ]

bert Passek commented on CASSANDRA-4228:
----------------------------------------

I already noticed the RandomPartitioner in the stack trace. The data was 
written to Cassandra by a Hadoop job configured with the 
OrderPreservingPartitioner, and a different job reads from Cassandra with the 
partitioner in its job configuration also set to OrderPreservingPartitioner.
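
For reference, the partitioner is pinned in the job configuration roughly like 
this (a minimal sketch, assuming the Cassandra 1.1 ConfigHelper API; the 
keyspace name is a placeholder, not our real one):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
    import org.apache.cassandra.hadoop.ConfigHelper;

    public class JobSetupSketch {
        static void configure(Job job) {
            Configuration conf = job.getConfiguration();
            job.setInputFormatClass(ColumnFamilyInputFormat.class);
            // Where the input format connects and what it reads
            // ("TestKeyspace" is a placeholder keyspace name).
            ConfigHelper.setInputInitialAddress(conf, "127.0.0.1");
            ConfigHelper.setInputRpcPort(conf, "9160");
            ConfigHelper.setInputColumnFamily(conf, "TestKeyspace", "TestSuper");
            // Must match the partitioner the cluster actually runs: per the
            // stack trace, the server parses split tokens with its own
            // partitioner's fromString regardless of the job setting.
            ConfigHelper.setInputPartitioner(conf,
                "org.apache.cassandra.dht.OrderPreservingPartitioner");
        }
    }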

We haven't actually changed any Hadoop jobs; we just updated Cassandra from 
1.0.8 to 1.1.0, and then we ran into this exception. The test case was written 
to track down the problem. It's strange because the exception is thrown even 
when we try to read from empty column families.

I'm going to check the cluster and job configuration again; I might have set 
up something wrong.
                
> Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner
> -------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-4228
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4228
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>    Affects Versions: 1.1.0
>         Environment: Debian Squeeze
>            Reporter: bert Passek
>         Attachments: CassandraTest.java
>
>
> We recently updated Cassandra from version 1.0.8 to 1.1.0 on a Debian 
> Squeeze system. After that we cannot use ColumnFamilyInputFormat anymore due 
> to exceptions in Cassandra. A simple unit test is provided via attachment.
> Here are some details about our simple setup:
> Ring: 
> Address         DC          Rack        Status State   Load            Owns                Token
> 127.0.0.1       datacenter1 rack1       Up     Normal  859.36 KB       100,00%             55894951196891831822413178196787984716
> Schema Definition:
> create column family TestSuper
>   with column_type = 'Super'
>   and comparator = 'BytesType'
>   and subcomparator = 'BytesType'
>   and default_validation_class = 'BytesType'
>   and key_validation_class = 'BytesType'
>   and read_repair_chance = 0.1
>   and dclocal_read_repair_chance = 0.0
>   and gc_grace = 864000
>   and min_compaction_threshold = 4
>   and max_compaction_threshold = 32
>   and replicate_on_write = true
>   and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
>   and caching = 'KEYS_ONLY'
>   and compression_options = {'sstable_compression' : 'org.apache.cassandra.io.compress.SnappyCompressor'};
> While running the test we face the following exception on the client side:
> 12/05/09 10:18:22 INFO junit.TestRunner: testColumnFamilyInputFormat(de.unister.cpc.tests.CassandraTest): org.apache.thrift.transport.TTransportException
> 12/05/09 10:18:22 INFO junit.TestRunner: java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:391)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:397)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:323)
>       at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>       at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:188)
>       at de.unister.cpc.tests.CassandraTest.testColumnFamilyInputFormat(CassandraTest.java:98)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>       at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
>       at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
>       at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
>       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
>       at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
>       at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
>       at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
>       at org.junit.runners.Suite.runChild(Suite.java:115)
>       at org.junit.runners.Suite.runChild(Suite.java:23)
>       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
>       at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
>       at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
>       at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
>       at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>       at org.junit.runner.JUnitCore.run(JUnitCore.java:116)
>       at org.junit.runner.JUnitCore.run(JUnitCore.java:107)
>       at org.junit.runner.JUnitCore.runClasses(JUnitCore.java:66)
>       at de.unister.cpc.junit.TestRunner.run(TestRunner.java:55)
>       at de.unister.cpc.MainRunner.runInternal(MainRunner.java:129)
>       at de.unister.cpc.MainRunner.run(MainRunner.java:52)
>       at de.unister.cpc.MainRunner.main(MainRunner.java:143)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
> Caused by: org.apache.thrift.transport.TTransportException
>       at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>       at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>       at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
>       at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
>       at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>       at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
>       at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
>       at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
>       at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>       at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:683)
>       at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:667)
>       at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:356)
> and on the server side:
> ==> /var/log/cassandra/system.log <==
> ERROR [Thrift:5] 2012-05-09 10:18:22,603 CustomTThreadPoolServer.java (line 204) Error occurred during processing of message.
> java.lang.NumberFormatException: Zero length BigInteger
>       at java.math.BigInteger.<init>(BigInteger.java:276)
>       at java.math.BigInteger.<init>(BigInteger.java:451)
>       at org.apache.cassandra.dht.RandomPartitioner$1.fromString(RandomPartitioner.java:136)
>       at org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:685)
>       at org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:2944)
>       at org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:2932)
>       at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
>       at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
>       at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>       at java.lang.Thread.run(Thread.java:662)
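>
> For illustration: the stack trace shows RandomPartitioner.fromString turning 
> the token string into a decimal BigInteger, while OrderPreservingPartitioner 
> uses plain string tokens whose minimum is the empty string. The JDK call 
> alone reproduces the message (a sketch of the failing parse only, not the 
> actual server code path):
>
>     import java.math.BigInteger;
>
>     public class ZeroLengthTokenDemo {
>         public static void main(String[] args) {
>             // An empty (OPP-style minimum) token fed to a decimal parser
>             // throws java.lang.NumberFormatException: Zero length BigInteger,
>             // exactly as in the server-side trace above.
>             new BigInteger("");
>         }
>     }
>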
> Maybe we are doing something wrong, but after the update we cannot execute 
> any Hadoop jobs that read from Cassandra via ColumnFamilyInputFormat in 
> combination with the OrderPreservingPartitioner.
> Thanks in advance.
> Ciao Bert
