[jira] [Updated] (CASSANDRA-14235) ReadFailure Error -- Large Unbound Query

2019-02-16 Thread C. Scott Andreas (JIRA)


 [ https://issues.apache.org/jira/browse/CASSANDRA-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

C. Scott Andreas updated CASSANDRA-14235:
-
Reproduced In: 3.11.1

> ReadFailure Error -- Large Unbound Query 
> -
>
> Key: CASSANDRA-14235
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14235
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
> Environment: My Cassandra instance is a single local node, installed from a 
> tarball rather than as a service. All settings are default. 
> I'm running CentOS 7 (release 7.4.1708).
>Reporter: Fraizier
>Priority: Major
>  Labels: newbie
>
> I receive a ReadFailure error when executing a SELECT query with the Cassandra 
> Python driver.  
> I have a keyspace called "Documents" and a table with two columns, name and 
> object; name is of type text and object is of type blob. The blob values are 
> pickled Python class instances. The description of the keyspace/table is as 
> follows:
>  
> {code:sql}
> CREATE TABLE "Documents".table ( 
>  name text PRIMARY KEY, 
>  object blob 
> ) WITH bloom_filter_fp_chance = 0.01 
>  AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} AND comment 
> = '' 
>  AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'} 
>  AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'} 
>  AND crc_check_chance = 1.0 
>  AND dclocal_read_repair_chance = 0.1 
>  AND default_time_to_live = 0 AND gc_grace_seconds = 864000 
>  AND max_index_interval = 2048 
>  AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 
>  AND read_repair_chance = 0.0 
>  AND speculative_retry = '99PERCENTILE';{code}
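Editorial aside: the report says the blob values are pickled Python class instances. A minimal sketch of that round trip under those assumptions (the `Document` class and the commented-out insert are hypothetical illustrations, not taken from the reporter's code; the serialization itself runs without a cluster):

```python
import pickle

class Document:
    """Stand-in for the reporter's pickled class (hypothetical)."""
    def __init__(self, name, body):
        self.name = name
        self.body = body

def to_blob(doc):
    # Serialize the instance to bytes suitable for a blob column.
    return pickle.dumps(doc)

def from_blob(data):
    # Inverse: reconstruct the instance from the stored blob.
    return pickle.loads(data)

# The actual write would go through the DataStax Python driver, e.g.
# (cluster-dependent, not executed here):
#   prepared = session.prepare(
#       'INSERT INTO "Documents".table (name, object) VALUES (?, ?)')
#   session.execute(prepared, (doc.name, to_blob(doc)))

doc = Document("report.txt", "x" * 25 * 1024)  # ~25 KB payload, as in the report
blob = to_blob(doc)
restored = from_blob(blob)
```

Note that pickled blobs of this size are well within Cassandra's limits individually; the failure here arises from materializing all of them in one unbounded read.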
>  
> There are 3509 rows in this table, and each object is approximately 25 KB 
> of data (so I estimate ~90 MB of data in the object column). I'm 
> attempting to run a single line of Python driver code:
> {code:python}
> rows = session.execute("SELECT name, object FROM table")
> {code}
> and Cassandra's log file shows the following:
> {code:java}
> WARN  [ReadStage-4] 2018-02-13 14:53:12,319 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-4,10,main]: {}
> java.lang.RuntimeException: java.lang.RuntimeException
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2598)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_151]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
>  [apache-cassandra-3.11.1.jar:3.11.1]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.11.1.jar:3.11.1]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
> Caused by: java.lang.RuntimeException: null
> at 
> org.apache.cassandra.io.util.DataOutputBuffer.validateReallocation(DataOutputBuffer.java:134)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.io.util.DataOutputBuffer.calculateNewSize(DataOutputBuffer.java:152)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:159)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:119)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:413)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at org.apache.cassandra.db.rows.Cell$Serializer.serialize(Cell.java:210) 
> ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.lambda$serializeRowBody$0(UnfilteredSerializer.java:248)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> at 
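Editorial aside: the usual remedy for a ReadFailure on a large unbound SELECT is to bound the read. The DataStax Python driver pages results transparently when a fetch_size is set on the statement; the commented driver calls below follow its documented SimpleStatement API, while the session, handler, and table names are stand-ins, not taken from the report. Since driving a live cluster is out of scope here, the paging behaviour is modeled by a small pure chunking helper:

```python
from itertools import islice

def paged(rows, fetch_size):
    """Yield rows in pages of at most fetch_size, mimicking driver-side paging."""
    it = iter(rows)
    while True:
        page = list(islice(it, fetch_size))
        if not page:
            return
        yield page

# With the DataStax Python driver the equivalent is (cluster-dependent,
# not executed here):
#   from cassandra.query import SimpleStatement
#   stmt = SimpleStatement('SELECT name, object FROM "Documents".table',
#                          fetch_size=500)
#   for row in session.execute(stmt):  # driver fetches 500 rows per page
#       handle(row)                    # handle() is a hypothetical consumer

# The reporter's 3509 rows would arrive in 8 pages of at most 500 rows.
pages = list(paged(range(3509), 500))
```

With paging, the server serializes at most one page per response instead of buffering the whole ~90 MB result, which avoids the DataOutputBuffer reallocation seen in the stack trace above.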

[jira] [Updated] (CASSANDRA-14235) ReadFailure Error -- Large Unbound Query

2019-02-16 Thread C. Scott Andreas (JIRA)


 [ https://issues.apache.org/jira/browse/CASSANDRA-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

C. Scott Andreas updated CASSANDRA-14235:
-
Fix Version/s: (was: 3.11.1)

[jira] [Updated] (CASSANDRA-14235) ReadFailure Error -- Large Unbound Query

2019-02-16 Thread C. Scott Andreas (JIRA)


 [ https://issues.apache.org/jira/browse/CASSANDRA-14235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

C. Scott Andreas updated CASSANDRA-14235:
-
Component/s: (was: Legacy/CQL)
 Messaging/Client
