[jira] [Created] (CASSANDRA-14209) group by select queries: query results differ when using select * vs select fields

2018-02-01 Thread Digant Modha (JIRA)
Digant Modha created CASSANDRA-14209:


 Summary: group by select queries: query results differ when using 
select * vs select fields
 Key: CASSANDRA-14209
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14209
 Project: Cassandra
  Issue Type: Bug
Reporter: Digant Modha
 Attachments: Re group by select queries.txt

{{I get two different outputs with these two queries.  The only difference 
between the two queries is that one does ‘select *’ and the other does 
‘select specific fields’, without any aggregate functions.}}

{{I am using Apache Cassandra 3.10.}}


{{Consistency level set to LOCAL_QUORUM.}}
{{cassandra@cqlsh> select * from wp.position where account_id = 'user_1';}}

{{ account_id | security_id | counter | avg_exec_price | pending_quantity | quantity | transaction_id | update_time}}
{{------------+-------------+---------+----------------+------------------+----------+----------------+---------------------------------}}
{{ user_1 | AMZN | 2 | 1239.2 | 0 | 1011 | null | 2018-01-25 17:18:07.158000+0000}}
{{ user_1 | AMZN | 1 | 1239.2 | 0 | 1010 | null | 2018-01-25 17:18:07.158000+0000}}

{{(2 rows)}}
{{cassandra@cqlsh> select * from wp.position where account_id = 'user_1' group by security_id;}}

{{ account_id | security_id | counter | avg_exec_price | pending_quantity | quantity | transaction_id | update_time}}
{{------------+-------------+---------+----------------+------------------+----------+----------------+---------------------------------}}
{{ user_1 | AMZN | 1 | 1239.2 | 0 | 1010 | null | 2018-01-25 17:18:07.158000+0000}}

{{(1 rows)}}
{{cassandra@cqlsh> select account_id, security_id, counter, avg_exec_price, quantity, update_time from wp.position where account_id = 'user_1' group by security_id;}}

{{ account_id | security_id | counter | avg_exec_price | quantity | update_time}}
{{------------+-------------+---------+----------------+----------+---------------------------------}}
{{ user_1 | AMZN | 2 | 1239.2 | 1011 | 2018-01-25 17:18:07.158000+0000}}

{{(1 rows)}}


{{Table Description:}}
{{CREATE TABLE wp.position (}}
{{ account_id text,}}
{{ security_id text,}}
{{ counter bigint,}}
{{ avg_exec_price double,}}
{{ pending_quantity double,}}
{{ quantity double,}}
{{ transaction_id uuid,}}
{{ update_time timestamp,}}
{{ PRIMARY KEY (account_id, security_id, counter)}}
{{) WITH CLUSTERING ORDER BY (security_id ASC, counter DESC);}}
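The discrepancy can also be checked outside cqlsh. Below is a minimal comparison sketch using the DataStax Java driver (3.x; the 127.0.0.1 contact point is an assumption, not part of the report). Given the table's CLUSTERING ORDER BY (security_id ASC, counter DESC), GROUP BY security_id should return the first row of the group, i.e. counter = 2, for both projections; per this report, the select * form instead returns the counter = 1 row:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Reproduction sketch for CASSANDRA-14209; the contact point is an assumption.
public class GroupBySelectDiff
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("wp"))
        {
            // Same restriction and grouping; only the projected columns differ.
            Row star = session.execute(
                "SELECT * FROM position WHERE account_id = 'user_1' GROUP BY security_id").one();
            Row cols = session.execute(
                "SELECT account_id, security_id, counter, avg_exec_price, quantity, update_time "
                + "FROM position WHERE account_id = 'user_1' GROUP BY security_id").one();

            // Both should print counter=2 / quantity=1011.0 (the first row per the
            // clustering order); the report shows counter=1 / quantity=1010.0 for select *.
            System.out.println("select *      -> counter=" + star.getLong("counter")
                               + " quantity=" + star.getDouble("quantity"));
            System.out.println("select fields -> counter=" + cols.getLong("counter")
                               + " quantity=" + cols.getDouble("quantity"));
        }
    }
}
{code}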



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-10689) java.lang.OutOfMemoryError: Direct buffer memory

2016-03-04 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15180691#comment-15180691
 ] 

Digant Modha edited comment on CASSANDRA-10689 at 3/4/16 10:59 PM:
---

I also have a similar issue:

Top shows the process using 19g of memory.  The memory usage grows until the 
node dies.  It is started with -Xmx8g -Xms8g -Xmn2g.

java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.8.0_60]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
~[na:1.8.0_60]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
~[na:1.8.0_60]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:580) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:456) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:432) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
org.apache.cassandra.transport.Message$Dispatcher$Flusher.run(Message.java:389) 
~[apache-cassandra-2.1.12.jar:2.1.12]
at 
io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:123) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:268) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
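
Not part of the original comment, but one way to confirm where the growth that top reports is going: the JDK exposes the direct-buffer pool that java.nio.Bits.reserveMemory draws from through BufferPoolMXBean, and that pool is capped by -XX:MaxDirectMemorySize (roughly the heap size when unset). A small watcher sketch, all standard JDK APIs, with an arbitrary polling interval:

{code}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Diagnostic sketch: print direct-buffer pool usage so growth toward
// -XX:MaxDirectMemorySize is visible before the OutOfMemoryError hits.
public class DirectBufferWatch
{
    public static void main(String[] args) throws InterruptedException
    {
        List<BufferPoolMXBean> pools =
            ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        while (true)
        {
            for (BufferPoolMXBean pool : pools)  // typically "direct" and "mapped"
                System.out.printf("%s: count=%d used=%d capacity=%d%n",
                                  pool.getName(), pool.getCount(),
                                  pool.getMemoryUsed(), pool.getTotalCapacity());
            Thread.sleep(10_000);  // polling interval is arbitrary
        }
    }
}
{code}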



was (Author: dmodha):
I also have a similar issue:
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.8.0_60]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
~[na:1.8.0_60]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
~[na:1.8.0_60]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:580) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:456) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:432) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 

[jira] [Commented] (CASSANDRA-10689) java.lang.OutOfMemoryError: Direct buffer memory

2016-03-04 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15180691#comment-15180691
 ] 

Digant Modha commented on CASSANDRA-10689:
--

I also have a similar issue:
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.8.0_60]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
~[na:1.8.0_60]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
~[na:1.8.0_60]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:580) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:456) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:432) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:688)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:669)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
org.apache.cassandra.transport.Message$Dispatcher$Flusher.run(Message.java:389) 
~[apache-cassandra-2.1.12.jar:2.1.12]
at 
io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:123) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:268) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]


> java.lang.OutOfMemoryError: Direct buffer memory
> 
>
> Key: CASSANDRA-10689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10689
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mlowicki
>
> {code}
> ERROR [SharedPool-Worker-63] 2015-11-11 17:53:16,161 
> JVMStabilityInspector.java:117 - JVM state determined to be unstable.  
> Exiting forcefully due to:
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_80]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.7.0_80]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) 
> ~[na:1.7.0_80]
> at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174) 
> ~[na:1.7.0_80]
> at sun.nio.ch.IOUtil.read(IOUtil.java:195) ~[na:1.7.0_80]
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:149) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:104)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]  
> at 
> 

[jira] [Commented] (CASSANDRA-8743) NFS doesn't behave on Windows

2015-10-14 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14956955#comment-14956955
 ] 

Digant Modha commented on CASSANDRA-8743:
-

I can reproduce it using v2.1.10 on a 3-node Linux cluster, with data stored 
in reiserfs:

ERROR [ValidationExecutor:65] 2015-10-14 04:46:23,962 Validator.java:245 - 
Failed creating a merkle tree for [repair #d9bd8fd0-724f-11e5-a7ea-6d54405ef242 
on ks/cf, (5710859303295413578,5728820762645031943]], /xx.xx.xx.172 (see log 
for details)

ERROR [AntiEntropySessions:231] 2015-10-14 04:46:24,016 RepairSession.java:303 
- [repair #d9bd8fd0-724f-11e5-a7ea-6d54405ef242] session completed with the 
following error
org.apache.cassandra.exceptions.RepairException: [repair 
#d9bd8fd0-724f-11e5-a7ea-6d54405ef242 on ks/cf, 
(5710859303295413578,5728820762645031943]] Validation failed in /xx.xx.xx.172
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:166)
 ~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:406)
 ~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:134)
 ~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
~[apache-cassandra-2.1.10.jar:2.1.10]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
ERROR [ValidationExecutor:65] 2015-10-14 04:46:24,016 CassandraDaemon.java:227 
- Exception in thread Thread[ValidationExecutor:65,1,main]
org.apache.cassandra.io.FSWriteError: java.nio.file.DirectoryNotEmptyException: 
/local/valrs/cassandra/dmds/data/ks/cf/snapshots/d9bd8fd0-724f-11e5-a7ea-6d54405ef242
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:381) 
~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:647) 
~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:2451)
 ~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1018)
 ~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:94)
 ~[apache-cassandra-2.1.10.jar:2.1.10]
at 
org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:622)
 ~[apache-cassandra-2.1.10.jar:2.1.10]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.nio.file.DirectoryNotEmptyException: 
/local/valrs/cassandra/dmds/data/ks/cf/snapshots/d9bd8fd0-724f-11e5-a7ea-6d54405ef242
at 
sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242) 
~[na:1.8.0_60]
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 ~[na:1.8.0_60]
at java.nio.file.Files.delete(Files.java:1126) ~[na:1.8.0_60]
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131) 
~[apache-cassandra-2.1.10.jar:2.1.10]
... 10 common frames omitted
ERROR [AntiEntropySessions:231] 2015-10-14 04:46:24,018 
CassandraDaemon.java:227 - Exception in thread 
Thread[AntiEntropySessions:231,5,RMI Runtime]
java.lang.RuntimeException: org.apache.cassandra.exceptions.RepairException: 
[repair #d9bd8fd0-724f-11e5-a7ea-6d54405ef242 on ks/cf, 
(5710859303295413578,5728820762645031943]] Validation failed in /xx.xx.xx.172
at com.google.common.base.Throwables.propagate(Throwables.java:160) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
~[apache-cassandra-2.1.10.jar:2.1.10]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_60]
at 

[jira] [Issue Comment Deleted] (CASSANDRA-8743) Repair on NFS in version 2.1.2

2015-07-13 Thread Digant Modha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Digant Modha updated CASSANDRA-8743:

Comment: was deleted

(was: I see it in v2.0.10 and I am not using NFS:
ERROR [ValidationExecutor:1280] 2015-07-12 22:18:10,992 Validator.java (line 
242) Failed creating a merkle tree for [repair 
#d2178ba0-2902-11e5-bd95-f14c61d86b85 on dmds/curve_dates, 
(-1942303675502999131,-1890400428284965630]], /49.70.3.80 (see log for details)
ERROR [ValidationExecutor:1280] 2015-07-12 22:18:10,992 CassandraDaemon.java 
(line 199) Exception in thread Thread[ValidationExecutor:1280,1,main]
FSWriteError in 
/apps/data/cassandra/dmds/data/dmds/curve_dates/snapshots/d2178ba0-2902-11e5-bd95-f14c61d86b85
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:122)
at 
org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:384)
at 
org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:488)
at 
org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:1877)
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:811)
at 
org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:63)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.call(CompactionManager.java:398)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.DirectoryNotEmptyException: 
/apps/data/cassandra/dmds/data/dmds/curve_dates/snapshots/d2178ba0-2902-11e5-bd95-f14c61d86b85
at 
sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242)
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:118)
... 10 more
ERROR [ValidationExecutor:1280] 2015-07-12 22:18:10,993 StorageService.java 
(line 364) Stopping gossiper)
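
For context on the FSWriteError pattern in these logs (an illustrative sketch, not Cassandra's actual FileUtils code): clearing a snapshot is a recursive delete, and Files.delete on a directory raises DirectoryNotEmptyException if anything still exists, or reappears, inside it after its children were removed. On NFS, client-side silly-rename files (.nfsXXXX, created for deleted-but-still-open files) are a classic trigger. A minimal NIO version of the delete pattern:

{code}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Illustrative recursive delete: if a new file (e.g. an NFS .nfsXXXX entry)
// appears in a directory after its children are deleted, Files.delete(dir)
// throws DirectoryNotEmptyException, the exception wrapped as FSWriteError above.
public class RecursiveDelete
{
    public static void deleteRecursive(Path root) throws IOException
    {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>()
        {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException
            {
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException
            {
                if (exc != null)
                    throw exc;
                Files.delete(dir);  // fails here if the directory is no longer empty
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
{code}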

 Repair on NFS in version 2.1.2
 --

 Key: CASSANDRA-8743
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8743
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tamar Nirenberg
Assignee: Joshua McKenzie
Priority: Minor

 Running repair over NFS in Cassandra 2.1.2 encounters this error and crashes 
 the ring:
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,811 Validator.java:232 - 
 Failed creating a merkle tree for [repair 
 #c84c7c70-a21b-11e4-aeca-19e6d7fa2595 on ATTRIBUTES/LINKS, 
 (11621838520493020277529637175352775759,11853478749048239324667887059881170862]],
  /10.1.234.63 (see log for details)
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,827 CassandraDaemon.java:153 
 - Exception in thread Thread[ValidationExecutor:2,1,main]
 org.apache.cassandra.io.FSWriteError: 
 java.nio.file.DirectoryNotEmptyException: 
 /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:381) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:547) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:2223)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:939)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:97)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:557)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_71]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 Caused by: java.nio.file.DirectoryNotEmptyException: 
 /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
 at 
 

[jira] [Commented] (CASSANDRA-8743) Repair on NFS in version 2.1.2

2015-07-13 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625102#comment-14625102
 ] 

Digant Modha commented on CASSANDRA-8743:
-

I see it in v2.0.10 and I am not using NFS:
ERROR [ValidationExecutor:1280] 2015-07-12 22:18:10,992 Validator.java (line 
242) Failed creating a merkle tree for [repair 
#d2178ba0-2902-11e5-bd95-f14c61d86b85 on dmds/curve_dates, 
(-1942303675502999131,-1890400428284965630]], /49.70.3.80 (see log for details)
ERROR [ValidationExecutor:1280] 2015-07-12 22:18:10,992 CassandraDaemon.java 
(line 199) Exception in thread Thread[ValidationExecutor:1280,1,main]
FSWriteError in 
/apps/data/cassandra/dmds/data/dmds/curve_dates/snapshots/d2178ba0-2902-11e5-bd95-f14c61d86b85
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:122)
at 
org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:384)
at 
org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:488)
at 
org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:1877)
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:811)
at 
org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:63)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.call(CompactionManager.java:398)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.DirectoryNotEmptyException: 
/apps/data/cassandra/dmds/data/dmds/curve_dates/snapshots/d2178ba0-2902-11e5-bd95-f14c61d86b85
at 
sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242)
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:118)
... 10 more
ERROR [ValidationExecutor:1280] 2015-07-12 22:18:10,993 StorageService.java 
(line 364) Stopping gossiper

 Repair on NFS in version 2.1.2
 --

 Key: CASSANDRA-8743
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8743
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tamar Nirenberg
Assignee: Joshua McKenzie
Priority: Minor

 Running repair over NFS in Cassandra 2.1.2 encounters this error and crashes 
 the ring:
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,811 Validator.java:232 - 
 Failed creating a merkle tree for [repair 
 #c84c7c70-a21b-11e4-aeca-19e6d7fa2595 on ATTRIBUTES/LINKS, 
 (11621838520493020277529637175352775759,11853478749048239324667887059881170862]],
  /10.1.234.63 (see log for details)
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,827 CassandraDaemon.java:153 
 - Exception in thread Thread[ValidationExecutor:2,1,main]
 org.apache.cassandra.io.FSWriteError: 
 java.nio.file.DirectoryNotEmptyException: 
 /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:381) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:547) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:2223)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:939)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:97)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:557)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_71]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 Caused by: java.nio.file.DirectoryNotEmptyException: 
 /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
 at 
 

[jira] [Created] (CASSANDRA-7973) cqlsh connect error: 'member_descriptor' object is not callable

2014-09-18 Thread Digant Modha (JIRA)
Digant Modha created CASSANDRA-7973:
---

 Summary: cqlsh connect error: 'member_descriptor' object is not 
callable
 Key: CASSANDRA-7973
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7973
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.0
Reporter: Digant Modha
Priority: Minor


When using cqlsh (Cassandra 2.1.0) with SSL and Python 2.6.9, I get: Connection 
error: ('Unable to connect to any servers', {...: TypeError("'member_descriptor' 
object is not callable",)})
I am able to connect from another machine using Python 2.7.5.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to be counted toward TombstoneOverwhelmingException

2014-08-22 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106815#comment-14106815
 ] 

Digant Modha commented on CASSANDRA-7817:
-

Even if it's a row-level deletion, does the code still have to read all the 
cells/columns?  Does that mean that the row-level deletion optimization should 
not, or does not, play a role in this case?  Thanks.

 when entire row is deleted, the records in the row seem to be counted toward 
 TombstoneOverwhelmingException
 

 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha
Priority: Minor

 I saw this behavior in a development cluster, but was able to reproduce it in 
 a single-node setup.  In the development cluster I had more than 52,000 
 records and used default values for the tombstone thresholds.
 For testing purposes, I used lower numbers for the thresholds:
 tombstone_warn_threshold: 100
 tombstone_failure_threshold: 1000
 Here are the steps:
 table:
 CREATE TABLE cstestcf_conflate_data (
   key ascii,
   datehr int,
   validfrom timestamp,
   asof timestamp,
   copied boolean,
   datacenter ascii,
   storename ascii,
   value blob,
   version ascii,
   PRIMARY KEY ((key, datehr), validfrom, asof)
 ) WITH CLUSTERING ORDER BY (validfrom DESC, asof DESC) ;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
  count
 -------
    470
 (1 rows)
 cqlsh:cstestks> delete from cstestcf_conflate_data WHERE KEY='BK_2' and 
 datehr = 2014082119;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
 Request did not complete within rpc_timeout.
 Exception in system.log:
 java.lang.RuntimeException: 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1363)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to be counted toward TombstoneOverwhelmingException

2014-08-22 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106822#comment-14106822
 ] 

Digant Modha commented on CASSANDRA-7817:
-

I mean full row deletion - delete using partition key only.

 when entire row is deleted, the records in the row seem to be counted toward 
 TombstoneOverwhelmingException
 

 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha
Priority: Minor

 I saw this behavior in a development cluster, but was able to reproduce it in 
 a single-node setup.  In the development cluster I had more than 52,000 
 records and used default values for the tombstone thresholds.
 For testing purposes, I used lower numbers for the thresholds:
 tombstone_warn_threshold: 100
 tombstone_failure_threshold: 1000
 Here are the steps:
 table:
 CREATE TABLE cstestcf_conflate_data (
   key ascii,
   datehr int,
   validfrom timestamp,
   asof timestamp,
   copied boolean,
   datacenter ascii,
   storename ascii,
   value blob,
   version ascii,
   PRIMARY KEY ((key, datehr), validfrom, asof)
 ) WITH CLUSTERING ORDER BY (validfrom DESC, asof DESC) ;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
  count
 -------
    470
 (1 rows)
 cqlsh:cstestks> delete from cstestcf_conflate_data WHERE KEY='BK_2' and 
 datehr = 2014082119;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
 Request did not complete within rpc_timeout.
 Exception in system.log:
 java.lang.RuntimeException: 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1363)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to be counted toward TombstoneOverwhelmingException

2014-08-22 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106834#comment-14106834
 ] 

Digant Modha commented on CASSANDRA-7817:
-

Thanks,  that answers my question.

 when entire row is deleted, the records in the row seem to be counted toward 
 TombstoneOverwhelmingException
 

 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha
Priority: Minor

 I saw this behavior in a development cluster, but was able to reproduce it in 
 a single-node setup.  In the development cluster I had more than 52,000 
 records and used default values for the tombstone thresholds.
 For testing purposes, I used lower numbers for the thresholds:
 tombstone_warn_threshold: 100
 tombstone_failure_threshold: 1000
 Here are the steps:
 table:
 CREATE TABLE cstestcf_conflate_data (
   key ascii,
   datehr int,
   validfrom timestamp,
   asof timestamp,
   copied boolean,
   datacenter ascii,
   storename ascii,
   value blob,
   version ascii,
   PRIMARY KEY ((key, datehr), validfrom, asof)
 ) WITH CLUSTERING ORDER BY (validfrom DESC, asof DESC) ;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
  count
 -------
    470
 (1 rows)
 cqlsh:cstestks> delete from cstestcf_conflate_data WHERE KEY='BK_2' and 
 datehr = 2014082119;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
 Request did not complete within rpc_timeout.
 Exception in system.log:
 java.lang.RuntimeException: 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1363)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to be counted toward TombstoneOverwhelmingException

2014-08-21 Thread Digant Modha (JIRA)
Digant Modha created CASSANDRA-7817:
---

 Summary: when entire row is deleted, the records in the row seem 
to be counted toward TombstoneOverwhelmingException
 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha


I saw this behavior in a development cluster, but was able to reproduce it in a 
single-node setup.  In the development cluster I had more than 52,000 records 
and used default values for the tombstone thresholds.

For testing purposes, I used lower numbers for the thresholds:
tombstone_warn_threshold: 100
tombstone_failure_threshold: 1000

Here are the steps:
table:
CREATE TABLE cstestcf_conflate_data (
  key ascii,
  datehr int,
  validfrom timestamp,
  asof timestamp,
  copied boolean,
  datacenter ascii,
  storename ascii,
  value blob,
  version ascii,
  PRIMARY KEY ((key, datehr), validfrom, asof)
) WITH CLUSTERING ORDER BY (validfrom DESC, asof DESC) ;

cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
and datehr = 2014082119;
 count
-------
   470
(1 rows)

cqlsh:cstestks> delete from cstestcf_conflate_data WHERE KEY='BK_2' and datehr 
= 2014082119;

cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
and datehr = 2014082119;
Request did not complete within rpc_timeout.

Exception in system.log:
java.lang.RuntimeException: 
org.apache.cassandra.db.filter.TombstoneOverwhelmingException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
at 
org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1363)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)
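
The steps above can also be scripted end to end. A minimal sketch with the DataStax Java driver (the contact point and class name are illustrative; the node is assumed to run with the lowered thresholds from this description):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Illustrative reproduction of the steps above (assumes tombstone_warn_threshold: 100
// and tombstone_failure_threshold: 1000 on a local node, as in the description).
public class TombstoneRepro
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("cstestks"))
        {
            // 470 rows in one partition; timestamps are written as integer
            // epoch-millis literals so each (validfrom, asof) key is distinct.
            for (int i = 0; i < 470; i++)
                session.execute("INSERT INTO cstestcf_conflate_data (key, datehr, validfrom, asof)"
                                + " VALUES ('BK_2', 2014082119, " + i + ", " + i + ")");

            // Partition-level delete: conceptually a single partition tombstone.
            session.execute("DELETE FROM cstestcf_conflate_data"
                            + " WHERE key = 'BK_2' AND datehr = 2014082119");

            // Per this report, the read then aborts server-side with
            // TombstoneOverwhelmingException (seen as an rpc_timeout in cqlsh).
            session.execute("SELECT count(*) FROM cstestcf_conflate_data"
                            + " WHERE key = 'BK_2' AND datehr = 2014082119");
        }
    }
}
{code}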




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7051) UnsupportedOperationException

2014-04-17 Thread Digant Modha (JIRA)
Digant Modha created CASSANDRA-7051:
---

 Summary: UnsupportedOperationException
 Key: CASSANDRA-7051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7051
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
 Environment: Cassandra 2.0.6
Reporter: Digant Modha
Priority: Critical


An UnsupportedOperationException is thrown when using BatchStatement.  This is 
because org.apache.cassandra.cql3.statements.BatchStatement.unzipMutations 
returns a collection that does not support add() when the mutations map has 
size 1.

STACK:
throws UnsupportedOperationException.
Daemon Thread [Native-Transport-Requests:1043] (Suspended (entry into method 
<init> in UnsupportedOperationException))
UnsupportedOperationException.<init>() line: 42 [local variables 
unavailable]
HashMap$Values(AbstractCollection<E>).add(E) line: 260
HashMap$Values(AbstractCollection<E>).addAll(Collection<? extends E>) 
line: 342
StorageProxy.mutateWithTriggers(Collection<IMutation>, 
ConsistencyLevel, boolean) line: 519
BatchStatement.executeWithoutConditions(Collection<IMutation>, 
ConsistencyLevel) line: 210
BatchStatement.execute(BatchStatement$BatchVariables, boolean, 
ConsistencyLevel, long) line: 203
BatchStatement.executeWithPerStatementVariables(ConsistencyLevel, 
QueryState, List<List<ByteBuffer>>) line: 192
QueryProcessor.processBatch(BatchStatement, ConsistencyLevel, 
QueryState, List<List<ByteBuffer>>, List<Object>) line: 373
BatchMessage.execute(QueryState) line: 206
Message$Dispatcher.messageReceived(ChannelHandlerContext, MessageEvent) 
line: 304

Message$Dispatcher(SimpleChannelUpstreamHandler).handleUpstream(ChannelHandlerContext,
 ChannelEvent) line: 70

DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext,
 ChannelEvent) line: 564

DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(ChannelEvent) 
line: 791
ChannelUpstreamEventRunnable.doRun() line: 43
ChannelUpstreamEventRunnable(ChannelEventRunnable).run() line: 67

RequestThreadPoolExecutor(ThreadPoolExecutor).runWorker(ThreadPoolExecutor$Worker)
 line: 1145
ThreadPoolExecutor$Worker.run() line: 615
Thread.run() line: 744

org.apache.cassandra.cql3.statements.BatchStatement:
private Collection<? extends IMutation> unzipMutations(Map<String, Map<ByteBuffer, IMutation>> mutations)
{
    // The case where all statement where on the same keyspace is pretty common
    if (mutations.size() == 1)
        return mutations.values().iterator().next().values();

    List<IMutation> ms = new ArrayList<>();
    for (Map<ByteBuffer, IMutation> ksMap : mutations.values())
        ms.addAll(ksMap.values());
    return ms;
}
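
Not part of the report, but a standalone sketch of the mechanism (String stands in for IMutation; the class name is illustrative): HashMap.values() is a live view that does not support add(), so the size() == 1 fast path returns a collection whose addAll() fails, matching the HashMap$Values(AbstractCollection).add frame in the stack above. Copying into a real list is the obvious fix shape:

{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone illustration of the bug mechanism; String stands in for IMutation.
public class ValuesViewDemo
{
    public static void main(String[] args)
    {
        Map<String, Map<ByteBuffer, String>> mutations = new HashMap<>();
        Map<ByteBuffer, String> ksMap = new HashMap<>();
        ksMap.put(ByteBuffer.allocate(1), "mutation-1");
        mutations.put("ks1", ksMap);

        // What unzipMutations returns on its size() == 1 fast path:
        Collection<String> view = mutations.values().iterator().next().values();
        try
        {
            view.addAll(Arrays.asList("mutation-2"));  // HashMap$Values rejects add()
        }
        catch (UnsupportedOperationException e)
        {
            System.out.println("values() view rejects add: " + e);
        }

        // Fix shape: copy into a mutable list before handing it to callers that add to it.
        List<String> copy = new ArrayList<>(view);
        copy.addAll(Arrays.asList("mutation-2"));
        System.out.println("copied list accepts add: " + copy);
    }
}
{code}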




--
This message was sent by Atlassian JIRA
(v6.2#6252)