[jira] [Updated] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-11 Thread Andreas Schnitzerling (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andreas Schnitzerling updated CASSANDRA-8192:
-
Attachment: (was: system.log)

 AssertionError in Memory.java
 -

 Key: CASSANDRA-8192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
 Attachments: cassandra.yaml


 Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs during startup.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.&lt;init&gt;(CompressionMetadata.java:135)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {panel}
 In the attached log you can also see CASSANDRA-8069 and CASSANDRA-6283.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8122) Undeclare throwable exception while executing 'nodetool netstats localhost'

2014-11-11 Thread Marcus Olsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Olsson updated CASSANDRA-8122:
-
Attachment: CASSANDRA-8122-1.patch

Attached patch without whitespace modifications.

 Undeclare throwable exception while executing 'nodetool netstats localhost'
 ---

 Key: CASSANDRA-8122
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8122
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra: 2.0.9
Reporter: Vishal Mehta
Priority: Minor
 Attachments: CASSANDRA-8122-1.patch, CASSANDRA-8122.patch


 *Steps*
 # Stop cassandra service
 # Check netstats of nodetool using 'nodetool netstats localhost'
 # Start cassandra service
 # Again check netstats of nodetool using 'nodetool netstats localhost'
 *Expected output*
 Mode: STARTING
 Not sending any streams. (End of output - no further exceptions)
 *Observed output*
 {noformat}
  nodetool netstats localhost
 Mode: STARTING
 Not sending any streams.
 Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
   at com.sun.proxy.$Proxy6.getReadRepairAttempted(Unknown Source)
   at 
 org.apache.cassandra.tools.NodeProbe.getReadRepairAttempted(NodeProbe.java:897)
   at 
 org.apache.cassandra.tools.NodeCmd.printNetworkStats(NodeCmd.java:726)
   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1281)
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.db:type=StorageProxy
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
   at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:273)
   at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:251)
   at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:160)
   at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
   at 
 javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
 Source)
   at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
   at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
   ... 4 more
 {noformat}
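The failure above comes from reading a StorageProxy MBean attribute before the starting node has registered it. A minimal sketch of guarding such a JMX read (a hypothetical helper, not the actual NodeProbe code; the ObjectName and attribute name are taken from the trace):

```java
import java.lang.management.ManagementFactory;

import javax.management.InstanceNotFoundException;
import javax.management.MBeanServer;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class SafeJmxRead {
    // Read a JMX attribute, falling back to a default when the MBean is not
    // (yet) registered -- e.g. while the node is still in STARTING mode.
    static Object getAttributeOrDefault(MBeanServerConnection mbs, String name,
                                        String attribute, Object fallback) {
        try {
            return mbs.getAttribute(new ObjectName(name), attribute);
        } catch (InstanceNotFoundException e) {
            return fallback; // MBean not registered: node not fully started
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // StorageProxy is never registered in a plain JVM, so this prints -1.
        System.out.println(getAttributeOrDefault(mbs,
                "org.apache.cassandra.db:type=StorageProxy",
                "ReadRepairAttempted", -1L));
    }
}
```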





[jira] [Created] (CASSANDRA-8289) Allow users to debug/test UDF

2014-11-11 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-8289:
---

 Summary: Allow users to debug/test UDF
 Key: CASSANDRA-8289
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8289
 Project: Cassandra
  Issue Type: New Feature
Reporter: Robert Stupp
Assignee: Robert Stupp
 Fix For: 3.0


Currently it's not possible to execute unit tests against UDFs nor is it 
possible to debug them.

Idea is to provide some kind of minimalistic framework to execute at least 
scalar UDFs from a unit test.

Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
takes, compiles that UDF and allows the user to call it using plain java calls.

In case of the Java language it could also generate Java source files to enable 
users to set breakpoints.

For example:
{code}
import org.apache.cassandra.udfexec.*;

public class MyUnitTest {
  @Test
  public void testIt() {
    UDFExec sinExec = UDFExec.compile("sin", "java",
      Double.class, // return type
      Double.class  // argument type(s)
    );
    sinExec.call(2.0d);
    sinExec.call(null);
  }
}
{code}
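For illustration, the compile-then-call flow proposed above can be mimicked with plain reflection today (UDFExec is a proposed, not yet existing, API; java.lang.Math.sin stands in for a compiled UDF body):

```java
import java.lang.reflect.Method;

public class UdfCallSketch {
    // Looks up a public static method by name and invokes it with plain Java
    // arguments, mimicking the proposed UDFExec.compile(...).call(...) flow.
    static Object call(Class<?> holder, String function, double arg) throws Exception {
        Method m = holder.getMethod(function, double.class);
        return m.invoke(null, arg);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(call(Math.class, "sin", 0.0d)); // prints 0.0
    }
}
```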






[jira] [Updated] (CASSANDRA-8289) Allow users to debug/test UDF

2014-11-11 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8289:

Description: 
Currently it's not possible to execute unit tests against UDFs nor is it 
possible to debug them.

Idea is to provide some kind of minimalistic framework to execute at least 
scalar UDFs from a unit test.

Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
takes, compiles that UDF and allows the user to call it using plain java calls.

In case of the Java language it could also generate Java source files to enable 
users to set breakpoints.

It could also check for timeouts to identify e.g. endless loop scenarios or 
do some byte code analysis to check for evil package usage.

For example:
{code}
import org.apache.cassandra.udfexec.*;

public class MyUnitTest {
  @Test
  public void testIt() {
    UDFExec sinExec = UDFExec.compile("sin", "java",
      Double.class, // return type
      Double.class  // argument type(s)
    );
    sinExec.call(2.0d);
    sinExec.call(null);
  }
}
{code}


  was:
Currently it's not possible to execute unit tests against UDFs nor is it 
possible to debug them.

Idea is to provide some kind of minimalistic framework to execute at least 
scalar UDFs from a unit test.

Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
takes, compiles that UDF and allows the user to call it using plain java calls.

In case of the Java language it could also generate Java source files to enable 
users to set breakpoints.

For example:
{code}
import org.apache.cassandra.udfexec.*;

public class MyUnitTest {
  @Test
  public void testIt() {
    UDFExec sinExec = UDFExec.compile("sin", "java",
      Double.class, // return type
      Double.class  // argument type(s)
    );
    sinExec.call(2.0d);
    sinExec.call(null);
  }
}
{code}



 Allow users to debug/test UDF
 -

 Key: CASSANDRA-8289
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8289
 Project: Cassandra
  Issue Type: New Feature
Reporter: Robert Stupp
Assignee: Robert Stupp
  Labels: udf
 Fix For: 3.0


 Currently it's not possible to execute unit tests against UDFs nor is it 
 possible to debug them.
 Idea is to provide some kind of minimalistic framework to execute at least 
 scalar UDFs from a unit test.
 Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
 takes, compiles that UDF and allows the user to call it using plain java 
 calls.
 In case of the Java language it could also generate Java source files to 
 enable users to set breakpoints.
 It could also check for timeouts to identify e.g. endless loop scenarios or 
 do some byte code analysis to check for evil package usage.
 For example:
 {code}
 import org.apache.cassandra.udfexec.*;

 public class MyUnitTest {
   @Test
   public void testIt() {
     UDFExec sinExec = UDFExec.compile("sin", "java",
       Double.class, // return type
       Double.class  // argument type(s)
     );
     sinExec.call(2.0d);
     sinExec.call(null);
   }
 }
 {code}





[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2014-11-11 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206159#comment-14206159
 ] 

Andreas Schnitzerling commented on CASSANDRA-8067:
--

I still have the same issue, running for a few weeks now with 2.1.1, and I already 
upgraded the sstables on that node. There are a lot of these errors in my system.log. 
In 2.0.10 there was no such issue. I still have most nodes running on 2.0.10. I'm 
curious about Yuki's report, whether upgrading the whole cluster fixes the problem. 
You can see my system.log at CASSANDRA-8192.

 NullPointerException in KeyCacheSerializer
 --

 Key: CASSANDRA-8067
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Leleu
 Fix For: 2.1.1


 Hi,
 I have this stack trace in the logs of Cassandra server (v2.1)
 {code}
 ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 
 CassandraDaemon.java:166 - Exception in thread 
 Thread[CompactionExecutor:14,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
 Source) ~[na:1.7.0]
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
 ~[na:1.7.0]
 at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0]
 at java.lang.Thread.run(Unknown Source) [na:1.7.0]
 {code}
 It may not be critical because this error occurred in the AutoSavingCache. 
 However, line 475 is about the CFMetaData, so it may hide a bigger issue...
 {code}
  474         CFMetaData cfm = Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
  475         cfm.comparator.rowIndexEntrySerializer().serialize(entry, out);
 {code}
 Regards,
 Eric
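The excerpt above suggests getCFMetaData can return null (e.g. after a table is dropped), so line 475 dereferences null. A standalone model of the missing null guard (an assumption about the likely fix shape, not the committed patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SkipDroppedTables {
    // Models the null-metadata case: keep only cache keys whose table metadata
    // still exists (the schema lookup returns null after a table is dropped).
    static List<String> serializable(Map<String, Object> schema, List<String> keys) {
        List<String> out = new ArrayList<>();
        for (String k : keys)
            if (schema.get(k) != null)  // the guard the NPE site lacks
                out.add(k);
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> schema = new HashMap<>();
        schema.put("ks.users", new Object());
        // "ks.dropped" has no metadata, so it is skipped: prints [ks.users]
        System.out.println(serializable(schema, Arrays.asList("ks.users", "ks.dropped")));
    }
}
```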





[jira] [Commented] (CASSANDRA-8241) Use javac instead of javassist

2014-11-11 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206161#comment-14206161
 ] 

Robert Stupp commented on CASSANDRA-8241:
-

ecj.jar is just the Java compiler - there are no further dependencies required. I 
think the Eclipse compiler is fine, and it is used by other projects (e.g. Tomcat).

OTOH we could also support both javac + ecj, since ecj supports the Java 
Compiler API - but that would add complexity in dev + ops. So I'm not sold on 
that.

Not sure whether pulling in some source template makes the code generation easier 
to read/maintain, since we have several replacements that use variable argument 
lists. But I'll try that.

Sure - allowing user-provided source code is painful. We can add some support to 
help users avoid accidentally adding bad code. I just thought about some 
library/tool that users can use (just created CASSANDRA-8289 for that).

 Use javac instead of javassist
 --

 Key: CASSANDRA-8241
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8241
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Assignee: Robert Stupp
  Labels: udf
 Fix For: 3.0

 Attachments: 8241-ecj.txt, udf-java-javac.txt


 Using JDK's built-in Java-Compiler API has some advantages over javassist.
 Although compilation feels a bit slower, Java compiler API has some 
 advantages:
 * boxing + unboxing works
 * generics work
 * compiler error messages are better (or at least known) and have line/column 
 numbers
 The implementation does not use any temp files. Everything's in memory.
 Patch attached to this issue.
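As a sketch of the in-memory approach described above (illustrative names; this is not the attached patch), the JDK's Java Compiler API can consume source held in a String and capture the emitted class bytes without touching disk:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.URI;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import javax.tools.FileObject;
import javax.tools.ForwardingJavaFileManager;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileManager;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;

public class InMemoryCompile {
    // Emitted class bytes, keyed by binary class name.
    static final Map<String, ByteArrayOutputStream> CLASSES = new HashMap<>();

    public static Class<?> compile(final String className, final String source)
            throws Exception {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        // Source "file" backed by the String -- no temp file is created.
        JavaFileObject src = new SimpleJavaFileObject(
                URI.create("string:///" + className.replace('.', '/') + ".java"),
                JavaFileObject.Kind.SOURCE) {
            @Override
            public CharSequence getCharContent(boolean ignoreErrors) {
                return source;
            }
        };
        // File manager that redirects .class output into byte arrays.
        JavaFileManager fm = new ForwardingJavaFileManager<JavaFileManager>(
                javac.getStandardFileManager(null, null, null)) {
            @Override
            public JavaFileObject getJavaFileForOutput(Location location, String name,
                    JavaFileObject.Kind kind, FileObject sibling) {
                final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                CLASSES.put(name, bytes);
                return new SimpleJavaFileObject(
                        URI.create("mem:///" + name + ".class"), kind) {
                    @Override
                    public OutputStream openOutputStream() {
                        return bytes;
                    }
                };
            }
        };
        if (!javac.getTask(null, fm, null, null, null, Arrays.asList(src)).call())
            throw new IllegalStateException("compilation failed");
        final byte[] bytecode = CLASSES.get(className).toByteArray();
        // Define the class directly from the in-memory bytes.
        return new ClassLoader(InMemoryCompile.class.getClassLoader()) {
            Class<?> define() {
                return defineClass(className, bytecode, 0, bytecode.length);
            }
        }.define();
    }

    public static void main(String[] args) throws Exception {
        Class<?> hello = compile("Hello",
                "public class Hello { public static int add(int a, int b) { return a + b; } }");
        System.out.println(hello.getMethod("add", int.class, int.class).invoke(null, 2, 3));
    }
}
```

Note this requires a JDK (ToolProvider.getSystemJavaCompiler() returns null on a bare JRE).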





[jira] [Commented] (CASSANDRA-6434) Repair-aware gc grace period

2014-11-11 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206162#comment-14206162
 ] 

Marcus Eriksson commented on CASSANDRA-6434:


Hint TTL is set to the smallest gcgs among the column families included in the 
mutation, to make sure we don't resurrect any data by replaying an old hint.

After this ticket we would only drop tombstones if the sstable is repaired; 
repairing the node will also make sure that it has received the data it should 
have. If we then receive a hint for a range from before the last repair, we could 
safely ignore it. CL.ANY makes this less clean, since a node might not actually 
have all the data it should after a repair, but if we keep a safety window of 
max_hint_window_in_ms during which we always keep the tombstones, repaired or 
not, we would probably be safe (and it would solve [~jjordan]'s issue above as 
well).

I most likely missed/ignored some corner case (like that last comment on 
CASSANDRA-3620, and batches generating new hints, etc. ... rage)
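The rule sketched in the comment above can be expressed as a tiny predicate (the names and the exact safety-window comparison are assumptions for illustration, not Cassandra code):

```java
public class TombstoneDropSketch {
    // Per the proposal: only drop tombstones from repaired sstables, and always
    // keep tombstones younger than max_hint_window_in_ms as a safety window for
    // CL.ANY writes that may still arrive via hint replay.
    static boolean canDropTombstone(boolean sstableRepaired,
                                    long tombstoneAgeMs, long maxHintWindowMs) {
        return sstableRepaired && tombstoneAgeMs > maxHintWindowMs;
    }

    public static void main(String[] args) {
        long hour = 3600_000L;
        System.out.println(canDropTombstone(true, 4 * hour, 3 * hour));  // true
        System.out.println(canDropTombstone(false, 4 * hour, 3 * hour)); // false: unrepaired
        System.out.println(canDropTombstone(true, 1 * hour, 3 * hour));  // false: inside window
    }
}
```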

 Repair-aware gc grace period 
 -

 Key: CASSANDRA-6434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6434
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: sankalp kohli
Assignee: Marcus Eriksson
 Fix For: 3.0


 Since the reason for gcgs is to ensure that we don't purge tombstones until 
 every replica has been notified, it's redundant in a world where we're 
 tracking repair times per sstable (and repairing frequently), i.e., a world 
 where we default to incremental repair a la CASSANDRA-5351.





[jira] [Updated] (CASSANDRA-8289) Allow users to debug/test UDF

2014-11-11 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8289:

Description: 
Currently it's not possible to execute unit tests against UDFs nor is it 
possible to debug them.

Idea is to provide some kind of minimalistic framework to execute at least 
scalar UDFs from a unit test.

Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
takes, compiles that UDF and allows the user to call it using plain java calls.

In case of the Java language it could also generate Java source files to enable 
users to set breakpoints.

It could also check for timeouts to identify e.g. endless loop scenarios or 
do some byte code analysis to check for evil package usage.

For example:
{code}
import org.apache.cassandra.udfexec.*;

public class MyUnitTest {
  @Test
  public void testIt() {
    UDFExec sinExec = UDFExec.compile("sin", "java",
      Double.class, // return type
      Double.class  // argument type(s)
    );
    sinExec.call(2.0d);
    sinExec.call(null);
  }
}
{code}

Note: this one is not intended to do some magic to start a debugger on a C* 
node and debug it there.

  was:
Currently it's not possible to execute unit tests against UDFs nor is it 
possible to debug them.

Idea is to provide some kind of minimalistic framework to execute at least 
scalar UDFs from a unit test.

Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
takes, compiles that UDF and allows the user to call it using plain java calls.

In case of the Java language it could also generate Java source files to enable 
users to set breakpoints.

It could also check for timeouts to identify e.g. endless loop scenarios or 
do some byte code analysis to check for evil package usage.

For example:
{code}
import org.apache.cassandra.udfexec.*;

public class MyUnitTest {
  @Test
  public void testIt() {
    UDFExec sinExec = UDFExec.compile("sin", "java",
      Double.class, // return type
      Double.class  // argument type(s)
    );
    sinExec.call(2.0d);
    sinExec.call(null);
  }
}
{code}



 Allow users to debug/test UDF
 -

 Key: CASSANDRA-8289
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8289
 Project: Cassandra
  Issue Type: New Feature
Reporter: Robert Stupp
Assignee: Robert Stupp
  Labels: udf
 Fix For: 3.0


 Currently it's not possible to execute unit tests against UDFs nor is it 
 possible to debug them.
 Idea is to provide some kind of minimalistic framework to execute at least 
 scalar UDFs from a unit test.
 Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
 takes, compiles that UDF and allows the user to call it using plain java 
 calls.
 In case of the Java language it could also generate Java source files to 
 enable users to set breakpoints.
 It could also check for timeouts to identify e.g. endless loop scenarios or 
 do some byte code analysis to check for evil package usage.
 For example:
 {code}
 import org.apache.cassandra.udfexec.*;

 public class MyUnitTest {
   @Test
   public void testIt() {
     UDFExec sinExec = UDFExec.compile("sin", "java",
       Double.class, // return type
       Double.class  // argument type(s)
     );
     sinExec.call(2.0d);
     sinExec.call(null);
   }
 }
 {code}
 Note: this one is not intended to do some magic to start a debugger on a C* 
 node and debug it there.





[jira] [Updated] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-11 Thread Andreas Schnitzerling (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andreas Schnitzerling updated CASSANDRA-8192:
-
Attachment: system.log

Other issues in that system.log: CASSANDRA-6283, CASSANDRA-8067, CASSANDRA-8069

 AssertionError in Memory.java
 -

 Key: CASSANDRA-8192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
 Attachments: cassandra.yaml, system.log


 Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs during startup.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.&lt;init&gt;(CompressionMetadata.java:135)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {panel}
 In the attached log you can also see CASSANDRA-8069 and CASSANDRA-6283.





[jira] [Commented] (CASSANDRA-8241) Use javac instead of javassist

2014-11-11 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206166#comment-14206166
 ] 

Benjamin Lerer commented on CASSANDRA-8241:
---

I think it would be good to define the scope of the problem a bit more. My 
understanding is that so far we only use code generation for UDFs. Is there some 
other place where we use it?
If not, I think it would be nice to have something better than javassist, but 
it is only a nice-to-have and we can stick with javassist for now. From a 
benefits/problems point of view it looks like the best choice.

 Use javac instead of javassist
 --

 Key: CASSANDRA-8241
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8241
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Assignee: Robert Stupp
  Labels: udf
 Fix For: 3.0

 Attachments: 8241-ecj.txt, udf-java-javac.txt


 Using JDK's built-in Java-Compiler API has some advantages over javassist.
 Although compilation feels a bit slower, Java compiler API has some 
 advantages:
 * boxing + unboxing works
 * generics work
 * compiler error messages are better (or at least known) and have line/column 
 numbers
 The implementation does not use any temp files. Everything's in memory.
 Patch attached to this issue.





[jira] [Updated] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-11 Thread Andreas Schnitzerling (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andreas Schnitzerling updated CASSANDRA-8192:
-
Attachment: cassandra.bat

I don't have PowerShell permissions.

 AssertionError in Memory.java
 -

 Key: CASSANDRA-8192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
 Attachments: cassandra.bat, cassandra.yaml, system.log


 Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs during startup.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.&lt;init&gt;(CompressionMetadata.java:135)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {panel}
 In the attached log you can also see CASSANDRA-8069 and CASSANDRA-6283.





[jira] [Commented] (CASSANDRA-8070) OutOfMemoryError in OptionalTasks

2014-11-11 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206191#comment-14206191
 ] 

Andreas Schnitzerling commented on CASSANDRA-8070:
--

I dream of a better environment as well ;-) . In our laboratories we need to use 
32-bit because of old driver software, and that also limits us to 3 GB. Since we 
have a lot of those machines, we would like to use those resources to save and 
process data. For our application it's enough (calculating event counters, 
showing some graphs, sometimes running Spark jobs).

 OutOfMemoryError in OptionalTasks
 -

 Key: CASSANDRA-8070
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8070
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
 Attachments: system.log


 Since updating 2 of 12 nodes from 2.0.10 to the 2.1 release, an exception occurs during operation.
 {panel:title=system.log}
 ERROR [OptionalTasks:1] 2014-10-02 09:26:42,821 CassandraDaemon.java:166 - 
 Exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.OutOfMemoryError: Java heap space
   at com.yammer.metrics.stats.Snapshot.&lt;init&gt;(Snapshot.java:30) 
 ~[metrics-core-2.2.0.jar:na]
   at 
 com.yammer.metrics.stats.ExponentiallyDecayingSample.getSnapshot(ExponentiallyDecayingSample.java:131)
  ~[metrics-core-2.2.0.jar:na]
   at com.yammer.metrics.core.Histogram.getSnapshot(Histogram.java:180) 
 ~[metrics-core-2.2.0.jar:na]
   at com.yammer.metrics.core.Timer.getSnapshot(Timer.java:183) 
 ~[metrics-core-2.2.0.jar:na]
   at 
 org.apache.cassandra.db.ColumnFamilyStore$2.run(ColumnFamilyStore.java:340) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 [na:1.7.0_67]
   at java.util.concurrent.FutureTask.runAndReset(Unknown Source) 
 [na:1.7.0_67]
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown
  Source) [na:1.7.0_67]
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
  Source) [na:1.7.0_67]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_67]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_67]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_67]
 {panel}





[jira] [Updated] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2014-11-11 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8238:
---
Attachment: 0001-assert-that-readMeter-is-not-null.patch

Attaching a patch that outputs which sstable had a null readMeter, to help debug 
this.

 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-assert-that-readMeter-is-not-null.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}





[jira] [Created] (CASSANDRA-8290) archiving commitlogs after restart fails

2014-11-11 Thread Manuel Lausch (JIRA)
Manuel Lausch created CASSANDRA-8290:


 Summary: archiving commitlogs after restart fails 
 Key: CASSANDRA-8290
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8290
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.11 
Debian wheezy
Reporter: Manuel Lausch
Priority: Minor


After updating to Cassandra 2.0.11, Cassandra usually fails during startup while 
archiving commitlogs.

see logfile:
{noformat}
ERROR [main] 2014-11-03 13:08:59,388 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: java.io.IOException: Exception while executing the 
command: /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
/var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
1, command output: /bin/ln: failed to create hard link 
`/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists

at 
org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:158)
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:124)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:336)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
java.io.IOException: Exception while executing the command: /bin/ln 
/var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
/var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
1, command output: /bin/ln: failed to create hard link 
`/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists

at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at 
org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:145)
... 4 more
Caused by: java.lang.RuntimeException: java.io.IOException: Exception while 
executing the command: /bin/ln 
/var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
/var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
1, command output: /bin/ln: failed to create hard link 
`/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists

at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Exception while executing the command: /bin/ln 
/var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
/var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
1, command output: /bin/ln: failed to create hard link 
`/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists

at org.apache.cassandra.utils.FBUtilities.exec(FBUtilities.java:604)
at 
org.apache.cassandra.db.commitlog.CommitLogArchiver.exec(CommitLogArchiver.java:197)
at 
org.apache.cassandra.db.commitlog.CommitLogArchiver.access$100(CommitLogArchiver.java:44)
at 
org.apache.cassandra.db.commitlog.CommitLogArchiver$1.runMayThrow(CommitLogArchiver.java:132)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
... 5 more
ERROR [commitlog_archiver:1] 2014-11-03 13:08:59,388 CassandraDaemon.java (line 
199) Exception in thread Thread[commitlog_archiver:1,5,main]
java.lang.RuntimeException: java.io.IOException: Exception while executing the 
command: /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
/var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
1, command output: /bin/ln: failed to create hard link 
`/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists

at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}
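The failing step is a bare /bin/ln, which exits non-zero when the target already 
exists, so a segment archived by a previous (interrupted) startup makes recovery 
abort. A minimal sketch of a tolerant alternative using NIO hard links follows; 
this is one possible workaround, not the project's fix, and the class and method 
names are illustrative:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: archive a commitlog segment by hard link, like "/bin/ln src dst",
// but treat an already-existing target as "already archived" instead of failing.
public class ArchiveLink {
    static void archive(Path segment, Path archiveDir) throws IOException {
        Path target = archiveDir.resolve(segment.getFileName());
        try {
            Files.createLink(target, segment); // hard link, as /bin/ln would create
        } catch (FileAlreadyExistsException e) {
            // A previous (possibly interrupted) startup already archived this
            // segment; make the retry a no-op rather than aborting startup.
        }
    }
}
```

An equivalent shell-level workaround would be an archive_command that checks for 
the target before linking.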
[jira] [Commented] (CASSANDRA-8241) Use javac instead of javassist

2014-11-11 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206249#comment-14206249
 ] 

Robert Stupp commented on CASSANDRA-8241:
-

bq. scope of the problem

IMO people are used to relying on the Java language's boxing and implicit 
casts, and it would cause (just) usability issues if these features were not 
there.

For example, the stupid {{sin}} function is easier to read with boxing:

{code:title=javac/ecj}
return input != null ? Math.sin(input) : null;
{code}
{code:title=javassist}
return input != null ? Double.valueOf(Math.sin(input.doubleValue())) : null;
{code}

Complexity increases with the number of arguments and when 
UDTs/tuples/collections are possible.

In addition, we could easily allow new Java 8 language features (e.g. lambdas 
and streams) within UDFs (currently restricted to Java 7) - that could, for 
example, simplify source code working on collections passed to UDFs.
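The in-memory compilation being discussed can be sketched with the standard 
javax.tools API: the source lives in a String, and generated class bytes are 
captured by a custom file manager instead of temp files. This is an illustrative 
sketch, not the attached patch; it requires running on a JDK 
(ToolProvider.getSystemJavaCompiler() returns null on a bare JRE), and all class 
and method names here are made up:

```java
import javax.tools.*;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.URI;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class InMemoryCompiler {

    // Wraps UDF source code held in a String.
    static class MemorySource extends SimpleJavaFileObject {
        final String code;
        MemorySource(String className, String code) {
            super(URI.create("mem:///" + className.replace('.', '/') + ".java"), Kind.SOURCE);
            this.code = code;
        }
        @Override public CharSequence getCharContent(boolean ignoreEncodingErrors) { return code; }
    }

    // Collects generated class bytes instead of writing .class files to disk.
    static class MemoryClassFile extends SimpleJavaFileObject {
        final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        MemoryClassFile(String className) {
            super(URI.create("mem:///" + className.replace('.', '/') + ".class"), Kind.CLASS);
        }
        @Override public OutputStream openOutputStream() { return bytes; }
    }

    public static Class<?> compile(String className, String source) throws Exception {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler(); // null on a JRE
        final Map<String, MemoryClassFile> classes = new HashMap<>();
        JavaFileManager fm = new ForwardingJavaFileManager<StandardJavaFileManager>(
                javac.getStandardFileManager(null, null, null)) {
            @Override public JavaFileObject getJavaFileForOutput(
                    Location location, String name, JavaFileObject.Kind kind, FileObject sibling) {
                MemoryClassFile out = new MemoryClassFile(name);
                classes.put(name, out);
                return out;
            }
        };
        Boolean ok = javac.getTask(null, fm, null, null, null,
                Arrays.asList(new MemorySource(className, source))).call();
        if (!ok) throw new IllegalStateException("compilation failed");
        // Define the compiled class from the captured bytes.
        ClassLoader loader = new ClassLoader(InMemoryCompiler.class.getClassLoader()) {
            @Override protected Class<?> findClass(String name) throws ClassNotFoundException {
                MemoryClassFile f = classes.get(name);
                if (f == null) throw new ClassNotFoundException(name);
                byte[] b = f.bytes.toByteArray();
                return defineClass(name, b, 0, b.length);
            }
        };
        return loader.loadClass(className);
    }
}
```

With this, the boxed {{sin}} body from the javac/ecj example compiles as-is.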

bq. some other place where we use it?

I could imagine using user-provided code for triggers. But the current trigger 
implementation works at a lower level - so (I guess) that would be a 
re-implementation of triggers at the CQL3 level.
Maybe custom 2i providers could also use user-provided source code - but since 
these usually require custom libs, it does not feel suitable.

 Use javac instead of javassist
 --

 Key: CASSANDRA-8241
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8241
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Assignee: Robert Stupp
  Labels: udf
 Fix For: 3.0

 Attachments: 8241-ecj.txt, udf-java-javac.txt


 Using the JDK's built-in Java compiler API has some advantages over javassist.
 Although compilation feels a bit slower, the Java compiler API offers:
 * boxing + unboxing works
 * generics work
 * compiler error messages are better (or at least known) and have line/column 
 numbers
 The implementation does not use any temp files. Everything's in memory.
 Patch attached to this issue.





[jira] [Commented] (CASSANDRA-8241) Use javac instead of javassist

2014-11-11 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206311#comment-14206311
 ] 

Benjamin Lerer commented on CASSANDRA-8241:
---

It is a nice-to-have, but in my opinion it would be a big change. So I would 
stick with javassist for now and look at it again in the future if some users 
really push for it.

 Use javac instead of javassist
 --

 Key: CASSANDRA-8241
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8241
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Assignee: Robert Stupp
  Labels: udf
 Fix For: 3.0

 Attachments: 8241-ecj.txt, udf-java-javac.txt


 Using the JDK's built-in Java compiler API has some advantages over javassist.
 Although compilation feels a bit slower, the Java compiler API offers:
 * boxing + unboxing works
 * generics work
 * compiler error messages are better (or at least known) and have line/column 
 numbers
 The implementation does not use any temp files. Everything's in memory.
 Patch attached to this issue.





[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-11 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8280:
---
Reproduced In: 2.1.2, 2.1.1, 2.0.11  (was: 2.1.1, 2.1.2)
 Assignee: Sam Tunnicliffe  (was: Marcus Eriksson)

assigning [~beobal] as it seems to be a general 2i problem (and it exists in 
at least 2.0)

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3


 An attempt to insert 65536 bytes into a field that is a primary index throws 
 (correctly?) the cassandra.InvalidRequest exception. However, inserting the 
 same data *into an indexed field that is not a primary index* works just fine. 
 Cassandra will then crash on the next commit and never recover, so I rated 
 this Critical as it can be used for DoS attacks.
 Reproduce: see the snippet below:
 {code}
 import uuid
 from cassandra import ConsistencyLevel
 from cassandra import InvalidRequest
 from cassandra.cluster import Cluster
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.policies import ConstantReconnectionPolicy
 from cassandra.cqltypes import UUID
  
 # DROP KEYSPACE IF EXISTS cs;
 # CREATE KEYSPACE cs WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 # USE cs;
 # CREATE TABLE test3 (name text, value uuid, sentinel text, PRIMARY KEY 
 (name));
 # CREATE INDEX test3_sentinels ON test3(sentinel); 
  
 class CassandraDemo(object):
  
 def __init__(self):
 ips = ["127.0.0.1"]
 ap = PlainTextAuthProvider(username="cs", password="cs")
 reconnection_policy = ConstantReconnectionPolicy(20.0, 
 max_attempts=100)
 cluster = Cluster(ips, auth_provider=ap, protocol_version=3, 
 reconnection_policy=reconnection_policy)
 self.session = cluster.connect("cs")
  
 def exec_query(self, query, args):
 prepared_statement = self.session.prepare(query)
 prepared_statement.consistency_level = ConsistencyLevel.LOCAL_QUORUM
 self.session.execute(prepared_statement, args)
  
 def bug(self):
 k1 = UUID( str(uuid.uuid4()) )   
 long_string = "X" * 65536
 query = "INSERT INTO test3 (name, value, sentinel) VALUES (?, ?, ?)"
 args = ("foo", k1, long_string)
  
 self.exec_query(query, args)
 self.session.execute("DROP KEYSPACE IF EXISTS cs_test", timeout=30)
 self.session.execute("CREATE KEYSPACE cs_test WITH replication = 
 {'class': 'SimpleStrategy', 'replication_factor': 1}")
  
 c = CassandraDemo()
 #first run
 c.bug()
 #second run, Cassandra crashes with java.lang.AssertionError
 c.bug()
 {code}
 And here is the cassandra log:
 {code}
 ERROR [MemtableFlushWriter:3] 2014-11-06 16:44:49,263 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[MemtableFlushWriter:3,5,main]
 java.lang.AssertionError: 65536
 at 
 org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:214)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:201) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:142) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:354)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:312) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
  ~[guava-16.0.jar:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1053)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 

[jira] [Assigned] (CASSANDRA-8081) AssertionError with 2ndary indexes

2014-11-11 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-8081:
--

Assignee: Sam Tunnicliffe  (was: Marcus Eriksson)

 AssertionError with 2ndary indexes 
 ---

 Key: CASSANDRA-8081
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8081
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: kais
Assignee: Sam Tunnicliffe

 If you create a table with a column of type text or blob, add a secondary 
 index on it, insert a value that is longer than 
 FBUtilities.MAX_UNSIGNED_SHORT, and then flush, you get an assertion error:
 {code}
 CREATE TABLE test_text (key text PRIMARY KEY, col text);
 CREATE INDEX test_text_col_idx ON test_text (col);
 {code}
 {code}
  INFO [FlushWriter:3] 2014-10-08 10:53:38,471 Memtable.java (line 331) 
 Writing Memtable-test_text.test_text_col_idx@849649959(15/150 serialized/live 
 bytes, 1 ops)
  INFO [FlushWriter:4] 2014-10-08 10:53:38,554 Memtable.java (line 331) 
 Writing Memtable-test_text@1448092010(100025/1000250 serialized/live bytes, 2 
 ops)
 ERROR [FlushWriter:3] 2014-10-08 10:53:38,554 CassandraDaemon.java (line 196) 
 Exception in thread Thread[FlushWriter:3,5,RMI Runtime]
 java.lang.AssertionError: 10
   at 
 org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:342)
 {code}
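The assertion in both of these tickets comes from the same place: indexed cell 
names are serialized with an unsigned-short length prefix, whose maximum value 
is 65535, so larger values trip the assert at flush time. A hedged sketch of 
the shape of that check, plus a client-side guard; the method names and 
structure here are illustrative, not copied from Cassandra:

```java
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;

public class ShortLengthCheck {
    // Same value as FBUtilities.MAX_UNSIGNED_SHORT.
    static final int MAX_UNSIGNED_SHORT = 0xFFFF; // 65535

    // Mirrors the shape of ByteBufferUtil.writeWithShortLength's precondition:
    // a 65536-byte value fails with AssertionError: 65536.
    static void writeWithShortLength(ByteBuffer out, ByteBuffer value) {
        int length = value.remaining();
        assert length <= MAX_UNSIGNED_SHORT : length;
        out.putShort((short) length);        // unsigned-short length prefix
        out.put(value.duplicate());
    }

    // Client-side guard: reject oversized values before they reach an
    // indexed column, instead of crashing the node at memtable flush.
    static boolean fitsIndexedColumn(String s) throws UnsupportedEncodingException {
        return s.getBytes("UTF-8").length <= MAX_UNSIGNED_SHORT;
    }
}
```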





[jira] [Commented] (CASSANDRA-7409) Allow multiple overlapping sstables in L1

2014-11-11 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206385#comment-14206385
 ] 

Marcus Eriksson commented on CASSANDRA-7409:


Did you keep the logs? I guess we could estimate how much write amplification 
we do by summing up the amounts in the "Compacted ... X bytes to Y" lines.

Why do we pick candidates from the bottom now? Could we perhaps first check 
higher levels down to the MOLO-level and then bottom-up in the levels that 
allow overlap?
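The estimate suggested above can be scripted by summing the input/output byte 
counts from compaction log lines. A rough sketch; the exact log wording varies 
between versions, so the pattern here is an assumption:

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WriteAmplification {
    // Matches lines of the assumed form "Compacted ... 1,234 bytes to 567 ...".
    private static final Pattern COMPACTED =
            Pattern.compile("Compacted.*?([\\d,]+) bytes to ([\\d,]+)");

    // Returns { total bytes read, total bytes written } over the given log lines.
    static long[] sum(List<String> logLines) {
        long in = 0, out = 0;
        for (String line : logLines) {
            Matcher m = COMPACTED.matcher(line);
            if (m.find()) {
                in  += Long.parseLong(m.group(1).replace(",", ""));
                out += Long.parseLong(m.group(2).replace(",", ""));
            }
        }
        return new long[] { in, out };
    }
}
```

The ratio of total input to the amount of data originally flushed gives a crude 
write-amplification figure.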

 Allow multiple overlapping sstables in L1
 -

 Key: CASSANDRA-7409
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7409
 Project: Cassandra
  Issue Type: Improvement
Reporter: Carl Yeksigian
Assignee: Carl Yeksigian
  Labels: compaction
 Fix For: 3.0


 Currently, when a normal L0 compaction takes place (not STCS), we take up to 
 MAX_COMPACTING_L0 L0 sstables and all of the overlapping L1 sstables and 
 compact them together. If we didn't have to deal with the overlapping L1 
 tables, we could compact a higher number of L0 sstables together into a set 
 of non-overlapping L1 sstables.
 This could be done by delaying enforcement of the invariant that L1 has no 
 overlapping sstables. Going from L1 to L2, we would then be compacting 
 together fewer sstables that overlap.
 When reading, we will not have the same one sstable per level (except L0) 
 guarantee, but this can be bounded (once we have too many sets of sstables, 
 either compact them back into the same level, or compact them up to the next 
 level).
 This could be generalized to allow any level to be the maximum for this 
 overlapping strategy.





[jira] [Commented] (CASSANDRA-8221) Specify keyspace in error message when streaming fails due to missing replicas

2014-11-11 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206452#comment-14206452
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-8221:
-

I saw that the fix version changed. I hope I don't have to change anything, as 
I have submitted patches for both the 2.0 and 2.1 branches. Thanks

 Specify keyspace in error message when streaming fails due to missing replicas
 --

 Key: CASSANDRA-8221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8221
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Trivial
  Labels: lhf
 Fix For: 2.0.12, 2.1.3

 Attachments: cassandra-2.0.11-8221.txt, cassandra-2.1.1-8221.txt


 When there aren't sufficient live replicas for streaming (during bootstrap, 
 etc.), you'll get an error message like "unable to find sufficient sources 
 for streaming range". It would be helpful to include the keyspace that this 
 failed for, since each keyspace can have different replication settings.
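The requested change amounts to threading the keyspace into the message; a 
sketch of the improved wording (illustrative only, not the attached patch):

```java
public class StreamErrors {
    // Include the keyspace, since replication settings differ per keyspace.
    static String insufficientSources(String range, String keyspace) {
        return "unable to find sufficient sources for streaming range "
                + range + " in keyspace " + keyspace;
    }
}
```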





[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-11-11 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206543#comment-14206543
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124:
-

[~thobbs] I have done basic testing of nodetool cleanup; it did not throw any 
errors, and it looks like the changes have not regressed anything. Should I 
upload a preliminary patch with the cleanup changes for your perusal? Please 
let me know. Based on that, I can go ahead and make the changes for the 
remaining operations such as compact, decommission, move, and relocate. Thanks

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.
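The JMX-notification pattern that CASSANDRA-4767 introduced for repair can be 
sketched with a plain NotificationBroadcasterSupport; the class name and 
notification types here are illustrative, not Cassandra's:

```java
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;
import java.util.concurrent.atomic.AtomicLong;

// Emit a JMX notification when a long-running operation finishes, so a client
// like nodetool can tell success from failure instead of hitting a timeout.
public class CleanupNotifier extends NotificationBroadcasterSupport {
    private final AtomicLong seq = new AtomicLong();

    public void operationFinished(String op, boolean success) {
        Notification n = new Notification(
                "operation." + (success ? "success" : "failure"), // notification type
                this,                                             // source
                seq.incrementAndGet(),                            // sequence number
                op + (success ? " completed" : " failed"));       // message
        sendNotification(n);
    }
}
```

A nodetool-style client would register a listener on the MBean and block until 
it sees a terminal notification for its operation.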





[jira] [Created] (CASSANDRA-8291) Parent repair session is not removed in remote node

2014-11-11 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-8291:
-

 Summary: Parent repair session is not removed in remote node
 Key: CASSANDRA-8291
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8291
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1.3
 Attachments: 
0001-remove-parent-session-after-anticompaction-in-remote.patch

After anti-compaction is run on remote node, parent repair session is not 
removed.





[jira] [Updated] (CASSANDRA-8288) cqlsh describe needs to show 'sstable_compression': ''

2014-11-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8288:
---
Assignee: Tyler Hobbs
  Labels: cqlsh  (was: )

 cqlsh describe needs to show 'sstable_compression': ''
 --

 Key: CASSANDRA-8288
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8288
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
  Labels: cqlsh

 For uncompressed tables, cqlsh describe schema should show AND compression = 
 {'sstable_compression': ''}; otherwise, when you replay the schema, you get 
 the default of LZ4.
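The requested behavior, rendered in Java purely for illustration (cqlsh itself 
is Python, and these names are made up): always emit the compression map, with 
an empty sstable_compression for uncompressed tables, so a replayed schema does 
not silently fall back to the LZ4 default.

```java
import java.util.Map;
import java.util.TreeMap;

public class CompressionClause {
    static String render(Map<String, String> options) {
        Map<String, String> opts = new TreeMap<>(options);
        if (opts.isEmpty()) {
            // Explicitly mark the table as uncompressed instead of omitting
            // the clause and inheriting the LZ4 default on replay.
            opts.put("sstable_compression", "");
        }
        StringBuilder sb = new StringBuilder("AND compression = {");
        boolean first = true;
        for (Map.Entry<String, String> e : opts.entrySet()) {
            if (!first) sb.append(", ");
            sb.append('\'').append(e.getKey()).append("': '")
              .append(e.getValue()).append('\'');
            first = false;
        }
        return sb.append('}').toString();
    }
}
```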





[jira] [Commented] (CASSANDRA-8279) Geo-Red : Streaming is working fine on two nodes but failing on one node repeatedly.

2014-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206627#comment-14206627
 ] 

Michael Shuler commented on CASSANDRA-8279:
---

Great troubleshooting! Yes, I do think this could be a cause for the error. I'm 
not certain whether those properties are persisted to a system table, so I will 
check on that to see if simply updating the configuration is all that is 
needed, or if any further steps are necessary.

 Geo-Red  : Streaming is working fine on two nodes but failing on one node 
 repeatedly.
 -

 Key: CASSANDRA-8279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8279
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: LINUX
Reporter: Akhtar Hussain

 Exception in thread "main" java.lang.RuntimeException: Error while rebuilding 
 node: Stream failed
 at 
 org.apache.cassandra.service.StorageService.rebuild(StorageService.java:896)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at java.security.AccessController.doPrivileged(Native Method)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)





[jira] [Commented] (CASSANDRA-8122) Undeclare throwable exception while executing 'nodetool netstats localhost'

2014-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206636#comment-14206636
 ] 

Michael Shuler commented on CASSANDRA-8122:
---

Thanks! Now that I see the change, I think your first patch is right - my 
mistake.  :)  This should also get merged forward and it looks like nodetool 
was refactored for 2.1 - could you also create a patch for 2.1? 
(o/a/c/tools/NodeTool.java)

 Undeclare throwable exception while executing 'nodetool netstats localhost'
 ---

 Key: CASSANDRA-8122
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8122
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra: 2.0.9
Reporter: Vishal Mehta
Priority: Minor
 Attachments: CASSANDRA-8122-1.patch, CASSANDRA-8122.patch


 *Steps*
 # Stop cassandra service
 # Check netstats of nodetool using 'nodetool netstats localhost'
 # Start cassandra service
 # Again check netstats of nodetool using 'nodetool netstats localhost'
 *Expected output*
 Mode: STARTING
 Not sending any streams. (End of output - no further exceptions)
 *Observed output*
 {noformat}
  nodetool netstats localhost
 Mode: STARTING
 Not sending any streams.
 Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
   at com.sun.proxy.$Proxy6.getReadRepairAttempted(Unknown Source)
   at 
 org.apache.cassandra.tools.NodeProbe.getReadRepairAttempted(NodeProbe.java:897)
   at 
 org.apache.cassandra.tools.NodeCmd.printNetworkStats(NodeCmd.java:726)
   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1281)
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.db:type=StorageProxy
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
   at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:273)
   at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:251)
   at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:160)
   at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
   at 
 javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
 Source)
   at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
   at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
   ... 4 more
 {noformat}





[jira] [Assigned] (CASSANDRA-8285) OOME in Cassandra 2.0.11

2014-11-11 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-8285:
---

Assignee: Russ Hatch  (was: Ryan McGuire)

 OOME in Cassandra 2.0.11
 

 Key: CASSANDRA-8285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8285
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.11 + java-driver 2.0.8-SNAPSHOT
 Cassandra 2.0.11 + ruby-driver 1.0-beta
Reporter: Pierre Laporte
Assignee: Russ Hatch
 Attachments: OOME_node_system.log, gc.log.gz, 
 heap-usage-after-gc-zoom.png, heap-usage-after-gc.png


 We ran the drivers' 3-day endurance tests against Cassandra 2.0.11, and C* 
 crashed with an OOME. This happened both with ruby-driver 1.0-beta and 
 java-driver 2.0.8-snapshot.
 Attached are :
 | OOME_node_system.log | The system.log of one Cassandra node that crashed |
 | gc.log.gz | The GC log on the same node |
 | heap-usage-after-gc.png | The heap occupancy evolution after every GC cycle 
 |
 | heap-usage-after-gc-zoom.png | A focus on when things start to go wrong |
 Workload :
 Our test executes 5 CQL statements (select, insert, select, delete, select) 
 for a given unique id, over 3 days, using multiple threads. There is no 
 change in the workload during the test.
 Symptoms :
 In the attached log, it seems something starts in Cassandra between 
 2014-11-06 10:29:22 and 2014-11-06 10:45:32.  This causes an allocation that 
 fills the heap.  We eventually get stuck in a Full GC storm and get an OOME 
 in the logs.
 I have run the java-driver tests against Cassandra 1.2.19 and 2.1.1.  The 
 error does not occur.  It seems specific to 2.0.11.
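For reference, the workload described above (5 statements per unique id, many threads) can be sketched with plain JDK concurrency primitives; the actual driver call is mocked out here, since the real tests used ruby-driver and java-driver:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class EnduranceSketch {
    // The five statements issued per unique id, per the workload description.
    static final String[] STATEMENTS = { "select", "insert", "select", "delete", "select" };

    // Stand-in for session.execute(...); the real tests run against a live cluster.
    static void execute(String statement, int id) { }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicLong executed = new AtomicLong();
        for (int id = 0; id < 1000; id++) {
            final int uniqueId = id;
            pool.submit(() -> {
                for (String stmt : STATEMENTS)
                    execute(stmt, uniqueId);
                executed.addAndGet(STATEMENTS.length);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(executed.get());  // 1000 ids x 5 statements = 5000
    }
}
```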



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Include ks name in failed streaming error message

2014-11-11 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 c3a809584 -> a6802aa47


Include ks name in failed streaming error message

Patch by Rajanarayanan Thottuvaikkatumana; reviewed by Tyler Hobbs for
CASSANDRA-8221


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a6802aa4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a6802aa4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a6802aa4

Branch: refs/heads/cassandra-2.0
Commit: a6802aa479a46b6f3fb1855786f72f6e3b08e0b9
Parents: c3a8095
Author: Rajanarayanan Thottuvaikkatumana rnambood...@gmail.com
Authored: Tue Nov 11 11:38:06 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 11:38:06 2014 -0600

--
 CHANGES.txt   |  2 ++
 src/java/org/apache/cassandra/dht/RangeStreamer.java  | 10 +-
 src/java/org/apache/cassandra/service/StorageService.java |  2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2b3bd3c..842643c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Include keyspace name in error message when there are insufficient
+   live nodes to stream from (CASSANDRA-8221)
  * Avoid overlap in L1 when L0 contains many nonoverlapping
sstables (CASSANDRA-8211)
  * Improve PropertyFileSnitch logging (CASSANDRA-8183)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index 4e925d3..8846e1d 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -123,7 +123,7 @@ public class RangeStreamer
  logger.debug(String.format("%s: range %s exists on %s", 
description, entry.getKey(), entry.getValue()));
 }
 
-for (Map.Entry<InetAddress, Collection<Range<Token>>> entry : 
getRangeFetchMap(rangesForKeyspace, sourceFilters).asMap().entrySet())
+for (Map.Entry<InetAddress, Collection<Range<Token>>> entry : 
getRangeFetchMap(rangesForKeyspace, sourceFilters, 
keyspaceName).asMap().entrySet())
 {
 if (logger.isDebugEnabled())
 {
@@ -170,7 +170,7 @@ public class RangeStreamer
  * @return
  */
 private static Multimap<InetAddress, Range<Token>> 
getRangeFetchMap(Multimap<Range<Token>, InetAddress> rangesWithSources,
-
Collection<ISourceFilter> sourceFilters)
+
Collection<ISourceFilter> sourceFilters, String keyspace)
 {
 Multimap<InetAddress, Range<Token>> rangeFetchMapMap = 
HashMultimap.create();
 for (Range<Token> range : rangesWithSources.keySet())
@@ -199,15 +199,15 @@ public class RangeStreamer
 }
 
 if (!foundSource)
-throw new IllegalStateException("unable to find sufficient 
sources for streaming range " + range);
+throw new IllegalStateException("unable to find sufficient 
sources for streaming range " + range + " in keyspace " + keyspace);
 }
 
 return rangeFetchMapMap;
 }
 
-public static Multimap<InetAddress, Range<Token>> 
getWorkMap(Multimap<Range<Token>, InetAddress> rangesWithSourceTarget)
+public static Multimap<InetAddress, Range<Token>> 
getWorkMap(Multimap<Range<Token>, InetAddress> rangesWithSourceTarget, String 
keyspace)
 {
-return getRangeFetchMap(rangesWithSourceTarget, 
Collections.<ISourceFilter>singleton(new 
FailureDetectorSourceFilter(FailureDetector.instance)));
+return getRangeFetchMap(rangesWithSourceTarget, 
Collections.<ISourceFilter>singleton(new 
FailureDetectorSourceFilter(FailureDetector.instance)), keyspace);
 }
 
 // For testing purposes

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 066544a..4bc1eee 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3200,7 +3200,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 
 // stream 

[1/2] cassandra git commit: Include ks name in failed streaming error message

2014-11-11 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 155eccd66 -> 033762099


Include ks name in failed streaming error message

Patch by Rajanarayanan Thottuvaikkatumana; reviewed by Tyler Hobbs for
CASSANDRA-8221


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a6802aa4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a6802aa4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a6802aa4

Branch: refs/heads/cassandra-2.1
Commit: a6802aa479a46b6f3fb1855786f72f6e3b08e0b9
Parents: c3a8095
Author: Rajanarayanan Thottuvaikkatumana rnambood...@gmail.com
Authored: Tue Nov 11 11:38:06 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 11:38:06 2014 -0600

--
 CHANGES.txt   |  2 ++
 src/java/org/apache/cassandra/dht/RangeStreamer.java  | 10 +-
 src/java/org/apache/cassandra/service/StorageService.java |  2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2b3bd3c..842643c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Include keyspace name in error message when there are insufficient
+   live nodes to stream from (CASSANDRA-8221)
  * Avoid overlap in L1 when L0 contains many nonoverlapping
sstables (CASSANDRA-8211)
  * Improve PropertyFileSnitch logging (CASSANDRA-8183)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index 4e925d3..8846e1d 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -123,7 +123,7 @@ public class RangeStreamer
  logger.debug(String.format("%s: range %s exists on %s", 
description, entry.getKey(), entry.getValue()));
 }
 
-for (Map.Entry<InetAddress, Collection<Range<Token>>> entry : 
getRangeFetchMap(rangesForKeyspace, sourceFilters).asMap().entrySet())
+for (Map.Entry<InetAddress, Collection<Range<Token>>> entry : 
getRangeFetchMap(rangesForKeyspace, sourceFilters, 
keyspaceName).asMap().entrySet())
 {
 if (logger.isDebugEnabled())
 {
@@ -170,7 +170,7 @@ public class RangeStreamer
  * @return
  */
 private static Multimap<InetAddress, Range<Token>> 
getRangeFetchMap(Multimap<Range<Token>, InetAddress> rangesWithSources,
-
Collection<ISourceFilter> sourceFilters)
+
Collection<ISourceFilter> sourceFilters, String keyspace)
 {
 Multimap<InetAddress, Range<Token>> rangeFetchMapMap = 
HashMultimap.create();
 for (Range<Token> range : rangesWithSources.keySet())
@@ -199,15 +199,15 @@ public class RangeStreamer
 }
 
 if (!foundSource)
-throw new IllegalStateException("unable to find sufficient 
sources for streaming range " + range);
+throw new IllegalStateException("unable to find sufficient 
sources for streaming range " + range + " in keyspace " + keyspace);
 }
 
 return rangeFetchMapMap;
 }
 
-public static Multimap<InetAddress, Range<Token>> 
getWorkMap(Multimap<Range<Token>, InetAddress> rangesWithSourceTarget)
+public static Multimap<InetAddress, Range<Token>> 
getWorkMap(Multimap<Range<Token>, InetAddress> rangesWithSourceTarget, String 
keyspace)
 {
-return getRangeFetchMap(rangesWithSourceTarget, 
Collections.<ISourceFilter>singleton(new 
FailureDetectorSourceFilter(FailureDetector.instance)));
+return getRangeFetchMap(rangesWithSourceTarget, 
Collections.<ISourceFilter>singleton(new 
FailureDetectorSourceFilter(FailureDetector.instance)), keyspace);
 }
 
 // For testing purposes

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 066544a..4bc1eee 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3200,7 +3200,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 
 // stream 

[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-11 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/03376209
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/03376209
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/03376209

Branch: refs/heads/trunk
Commit: 0337620994555930e4ff25ca201a3d59a04a1b17
Parents: 155eccd a6802aa
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Nov 11 11:40:47 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 11:40:47 2014 -0600

--
 CHANGES.txt   |  2 ++
 src/java/org/apache/cassandra/dht/RangeStreamer.java  | 10 +-
 src/java/org/apache/cassandra/service/StorageService.java |  2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/03376209/CHANGES.txt
--
diff --cc CHANGES.txt
index d868a2c,842643c..b1b5df8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,7 -1,6 +1,9 @@@
 -2.0.12:
 +2.1.3
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 +Merged from 2.0:
+  * Include keyspace name in error message when there are insufficient
+live nodes to stream from (CASSANDRA-8221)
   * Avoid overlap in L1 when L0 contains many nonoverlapping
 sstables (CASSANDRA-8211)
   * Improve PropertyFileSnitch logging (CASSANDRA-8183)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/03376209/src/java/org/apache/cassandra/dht/RangeStreamer.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/03376209/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index a49dfc3,4bc1eee..29054f4
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -3393,10 -3200,9 +3393,10 @@@ public class StorageService extends Not
  }
  
  // stream requests
- Multimap<InetAddress, Range<Token>> workMap = 
RangeStreamer.getWorkMap(rangesToFetchWithPreferredEndpoints);
+ Multimap<InetAddress, Range<Token>> workMap = 
RangeStreamer.getWorkMap(rangesToFetchWithPreferredEndpoints, keyspace);
  for (InetAddress address : workMap.keySet())
  {
 +logger.debug("Will request range {} of keyspace {} 
from endpoint {}", workMap.get(address), keyspace, address);
  InetAddress preferred = 
SystemKeyspace.getPreferredIP(address);
  streamPlan.requestRanges(address, preferred, 
keyspace, workMap.get(address));
  }



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-11 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/03376209
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/03376209
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/03376209

Branch: refs/heads/cassandra-2.1
Commit: 0337620994555930e4ff25ca201a3d59a04a1b17
Parents: 155eccd a6802aa
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Nov 11 11:40:47 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 11:40:47 2014 -0600

--
 CHANGES.txt   |  2 ++
 src/java/org/apache/cassandra/dht/RangeStreamer.java  | 10 +-
 src/java/org/apache/cassandra/service/StorageService.java |  2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/03376209/CHANGES.txt
--
diff --cc CHANGES.txt
index d868a2c,842643c..b1b5df8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,7 -1,6 +1,9 @@@
 -2.0.12:
 +2.1.3
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 +Merged from 2.0:
+  * Include keyspace name in error message when there are insufficient
+live nodes to stream from (CASSANDRA-8221)
   * Avoid overlap in L1 when L0 contains many nonoverlapping
 sstables (CASSANDRA-8211)
   * Improve PropertyFileSnitch logging (CASSANDRA-8183)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/03376209/src/java/org/apache/cassandra/dht/RangeStreamer.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/03376209/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index a49dfc3,4bc1eee..29054f4
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -3393,10 -3200,9 +3393,10 @@@ public class StorageService extends Not
  }
  
  // stream requests
- Multimap<InetAddress, Range<Token>> workMap = 
RangeStreamer.getWorkMap(rangesToFetchWithPreferredEndpoints);
+ Multimap<InetAddress, Range<Token>> workMap = 
RangeStreamer.getWorkMap(rangesToFetchWithPreferredEndpoints, keyspace);
  for (InetAddress address : workMap.keySet())
  {
 +logger.debug("Will request range {} of keyspace {} 
from endpoint {}", workMap.get(address), keyspace, address);
  InetAddress preferred = 
SystemKeyspace.getPreferredIP(address);
  streamPlan.requestRanges(address, preferred, 
keyspace, workMap.get(address));
  }



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-11 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f2e2862f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f2e2862f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f2e2862f

Branch: refs/heads/trunk
Commit: f2e2862f5c47680871ce6329654504140683c241
Parents: d286ac7 0337620
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Nov 11 11:41:08 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 11:41:08 2014 -0600

--
 CHANGES.txt   |  2 ++
 src/java/org/apache/cassandra/dht/RangeStreamer.java  | 10 +-
 src/java/org/apache/cassandra/service/StorageService.java |  2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2e2862f/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2e2862f/src/java/org/apache/cassandra/dht/RangeStreamer.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2e2862f/src/java/org/apache/cassandra/service/StorageService.java
--



[1/3] cassandra git commit: Include ks name in failed streaming error message

2014-11-11 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk d286ac7d0 -> f2e2862f5


Include ks name in failed streaming error message

Patch by Rajanarayanan Thottuvaikkatumana; reviewed by Tyler Hobbs for
CASSANDRA-8221


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a6802aa4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a6802aa4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a6802aa4

Branch: refs/heads/trunk
Commit: a6802aa479a46b6f3fb1855786f72f6e3b08e0b9
Parents: c3a8095
Author: Rajanarayanan Thottuvaikkatumana rnambood...@gmail.com
Authored: Tue Nov 11 11:38:06 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 11:38:06 2014 -0600

--
 CHANGES.txt   |  2 ++
 src/java/org/apache/cassandra/dht/RangeStreamer.java  | 10 +-
 src/java/org/apache/cassandra/service/StorageService.java |  2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2b3bd3c..842643c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Include keyspace name in error message when there are insufficient
+   live nodes to stream from (CASSANDRA-8221)
  * Avoid overlap in L1 when L0 contains many nonoverlapping
sstables (CASSANDRA-8211)
  * Improve PropertyFileSnitch logging (CASSANDRA-8183)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index 4e925d3..8846e1d 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -123,7 +123,7 @@ public class RangeStreamer
  logger.debug(String.format("%s: range %s exists on %s", 
description, entry.getKey(), entry.getValue()));
 }
 
-for (Map.Entry<InetAddress, Collection<Range<Token>>> entry : 
getRangeFetchMap(rangesForKeyspace, sourceFilters).asMap().entrySet())
+for (Map.Entry<InetAddress, Collection<Range<Token>>> entry : 
getRangeFetchMap(rangesForKeyspace, sourceFilters, 
keyspaceName).asMap().entrySet())
 {
 if (logger.isDebugEnabled())
 {
@@ -170,7 +170,7 @@ public class RangeStreamer
  * @return
  */
 private static Multimap<InetAddress, Range<Token>> 
getRangeFetchMap(Multimap<Range<Token>, InetAddress> rangesWithSources,
-
Collection<ISourceFilter> sourceFilters)
+
Collection<ISourceFilter> sourceFilters, String keyspace)
 {
 Multimap<InetAddress, Range<Token>> rangeFetchMapMap = 
HashMultimap.create();
 for (Range<Token> range : rangesWithSources.keySet())
@@ -199,15 +199,15 @@ public class RangeStreamer
 }
 
 if (!foundSource)
-throw new IllegalStateException("unable to find sufficient 
sources for streaming range " + range);
+throw new IllegalStateException("unable to find sufficient 
sources for streaming range " + range + " in keyspace " + keyspace);
 }
 
 return rangeFetchMapMap;
 }
 
-public static Multimap<InetAddress, Range<Token>> 
getWorkMap(Multimap<Range<Token>, InetAddress> rangesWithSourceTarget)
+public static Multimap<InetAddress, Range<Token>> 
getWorkMap(Multimap<Range<Token>, InetAddress> rangesWithSourceTarget, String 
keyspace)
 {
-return getRangeFetchMap(rangesWithSourceTarget, 
Collections.<ISourceFilter>singleton(new 
FailureDetectorSourceFilter(FailureDetector.instance)));
+return getRangeFetchMap(rangesWithSourceTarget, 
Collections.<ISourceFilter>singleton(new 
FailureDetectorSourceFilter(FailureDetector.instance)), keyspace);
 }
 
 // For testing purposes

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6802aa4/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 066544a..4bc1eee 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3200,7 +3200,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 
 // stream requests
-  

[jira] [Resolved] (CASSANDRA-8221) Specify keyspace in error message when streaming fails due to missing replicas

2014-11-11 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-8221.

Resolution: Fixed
  Reviewer: Tyler Hobbs

bq. I saw the fix version changed. Hope I don't have to change anything as I 
have submitted the patch for both the 2.0 and 2.1 branches. Thanks

No, as long as the patches still apply without conflicts, it's fine.  The 
reviewer or committer will let you know if the patch doesn't apply for some 
reason.

By the way, you can normally just attach a patch for the lowest fix version.  A 
patch is only needed for higher versions if there is a merge conflict or 
related functionality changes.

With that said, +1 on the patches.  Committed as a6802aa479.  Thanks!

 Specify keyspace in error message when streaming fails due to missing replicas
 --

 Key: CASSANDRA-8221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8221
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Trivial
  Labels: lhf
 Fix For: 2.0.12, 2.1.3

 Attachments: cassandra-2.0.11-8221.txt, cassandra-2.1.1-8221.txt


 When there aren't sufficient live replicas for streaming (during bootstrap, 
 etc), you'll get an error message like unable to find sufficient sources for 
 streaming range.  It would be helpful to include the keyspace that this 
 failed for, since each keyspace can have different replication settings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2014-11-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206691#comment-14206691
 ] 

Tyler Hobbs commented on CASSANDRA-8238:


[~krummas] is the plan to commit this patch and wait for more info? If so, I 
would make the assertion message include something like "If you're seeing this 
exception, please attach your logs to CASSANDRA-8238 to help us debug."
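A minimal sketch of that suggestion (field and method names here are illustrative, not the actual patch): fail with a message that tells the operator exactly what to report.

```java
public class ReadMeterCheck {
    // Illustrative stand-in for the sstable's readMeter field that was null.
    static Object readMeter = null;

    static void filterColdSSTables() {
        if (readMeter == null)
            throw new AssertionError(
                "readMeter is null; if you're seeing this exception, "
                + "please attach your logs to CASSANDRA-8238 to help us debug");
    }

    public static void main(String[] args) {
        try {
            filterColdSSTables();
        } catch (AssertionError e) {
            // The operator-facing message carries the ticket reference.
            System.out.println(e.getMessage());
        }
    }
}
```

An explicit check is used rather than an `assert` statement so the message survives even when the JVM runs without `-ea`.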

 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-assert-that-readMeter-is-not-null.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-11-11 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7124:
---
Reviewer: Yuki Morishita

[~rnamboodiri] I'm making [~yukim] the reviewer for this ticket because he's 
more familiar with this code than I am.  Thanks!

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.
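The JDK's javax.management package already provides the plumbing for this; a minimal sketch (class name, notification type string, and message format are made up for illustration) of broadcasting a completion notification that a nodetool-style client could listen for:

```java
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;
import java.util.concurrent.atomic.AtomicLong;

public class CleanupNotifier extends NotificationBroadcasterSupport {
    private final AtomicLong seq = new AtomicLong();

    /** Broadcast success/failure of a long-running operation to JMX listeners. */
    public void operationComplete(String op, boolean success) {
        Notification n = new Notification(
                "org.example.operation",          // hypothetical notification type
                this,
                seq.incrementAndGet(),
                op + (success ? " completed successfully" : " failed"));
        sendNotification(n);
    }

    public static void main(String[] args) {
        CleanupNotifier notifier = new CleanupNotifier();
        // The listener plays the role of nodetool waiting for the outcome.
        notifier.addNotificationListener(
                (notification, handback) -> System.out.println(notification.getMessage()),
                null, null);
        notifier.operationComplete("cleanup", true);
    }
}
```

With no Executor supplied, NotificationBroadcasterSupport delivers notifications synchronously in the sending thread, which keeps the sketch deterministic.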



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-11-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206710#comment-14206710
 ] 

Yuki Morishita commented on CASSANDRA-7124:
---

[~rnamboodiri] You can upload a preliminary patch for review. I'll take a look.

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8292) From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local files

2014-11-11 Thread Brandon Kearby (JIRA)
Brandon Kearby created CASSANDRA-8292:
-

 Summary: From Pig: 
org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
variable: [cassandra.config].  Please prefix the file with file:/// for local 
files or file://server/ for remote files.
 Key: CASSANDRA-8292
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8292
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Kearby


Getting this error from Pig:
{code}
ERROR org.apache.cassandra.config.DatabaseDescriptor - Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
variable: [cassandra.config].  Please prefix the file with file:/// for local 
files or file://server/ for remote files.  Aborting.
at 
org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73)
at 
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
at 
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
at 
org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:133)
at 
org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
at 
org.apache.cassandra.hadoop.HadoopCompat.<clinit>(HadoopCompat.java:135)
at 
org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:120)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:273)
at 
org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1014)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1031)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
at 
org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:318)
at 
org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.startReadyJobs(JobControl.java:238)
at 
org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:269)
at java.lang.Thread.run(Thread.java:745)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:260)
Expecting URI in variable: [cassandra.config].  Please prefix the file with 
file:/// for local files or file://server/ for remote files.  Aborting.
Fatal configuration error; unable to start. See log for stacktrace.
{code}

Sample Pig Script:
{code}
grunt> sigs = load 'cql://socialdata/signal' using 
org.apache.cassandra.hadoop.pig.CqlNativeStorage();
grunt> a = limit sigs 5;
grunt> dump a;
{code}
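The error message itself states the fix: cassandra.config must be a URI, not a bare path. One way to pass it when launching Pig (the yaml path is hypothetical; adjust to your installation):

```shell
# cassandra.config must be a URI: prefix local paths with file:///
export PIG_OPTS="$PIG_OPTS -Dcassandra.config=file:///etc/cassandra/cassandra.yaml"
echo "$PIG_OPTS"
```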



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8069) IllegalArgumentException in SSTableBatchOpen

2014-11-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-8069.

Resolution: Not a Problem

{noformat}
ERROR [OptionalTasks:1] 2014-10-02 09:26:42,821 CassandraDaemon.java:166 - 
Exception in thread Thread[OptionalTasks:1,5,main]
java.lang.OutOfMemoryError: Java heap space
at com.yammer.metrics.stats.Snapshot.<init>(Snapshot.java:30) 
~[metrics-core-2.2.0.jar:na]
at 
com.yammer.metrics.stats.ExponentiallyDecayingSample.getSnapshot(ExponentiallyDecayingSample.java:131)
 ~[metrics-core-2.2.0.jar:na]
at com.yammer.metrics.core.Histogram.getSnapshot(Histogram.java:180) 
~[metrics-core-2.2.0.jar:na]
at com.yammer.metrics.core.Timer.getSnapshot(Timer.java:183) 
~[metrics-core-2.2.0.jar:na]
at 
org.apache.cassandra.db.ColumnFamilyStore$2.run(ColumnFamilyStore.java:340) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
 ~[apache-cassandra-
{noformat}

OOM - you need more heap space allocated to the process.

As per CASSANDRA-8192, you really need > 512m allocated as heap max.
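For a node this constrained, one way to raise the heap is via the environment overrides that cassandra-env honours (values here are illustrative, not a recommendation; on Windows the equivalent settings live in conf\cassandra-env.ps1):

```shell
# cassandra-env.sh uses these when both are set;
# keep MAX_HEAP_SIZE comfortably above 512m.
export MAX_HEAP_SIZE="1G"
export HEAP_NEWSIZE="256M"
echo "$MAX_HEAP_SIZE $HEAP_NEWSIZE"
```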

 IllegalArgumentException in SSTableBatchOpen
 

 Key: CASSANDRA-8069
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8069
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
 Attachments: system.log


 Since updating 2 of 12 nodes from 2.0.10 to 2.1-release, exceptions occur on 
 start and during operation.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-01 10:43:27,088 CassandraDaemon.java:166 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.IllegalArgumentException: null
   at org.apache.cassandra.io.util.Memory.allocate(Memory.java:61) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:172)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:125)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:757) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:716) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:394) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:294) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:430) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_67]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_67]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_67]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_67]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_67]
 {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8292) From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local fil

2014-11-11 Thread Brandon Kearby (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206766#comment-14206766
 ] 

Brandon Kearby commented on CASSANDRA-8292:
---

Might be a regression. Probably just need to remove the line:

f25da979 (Joshua McKenzie  2014-10-27 13:49:07 -0500 135) 
JVMStabilityInspector.inspectThrowable(e);
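
The trace shows the client-side Hadoop path tripping over a missing cassandra.yaml. Until that call is removed, a common client-side workaround is to point the `cassandra.config` system property at a local yaml via a `file://` URI before launching Pig; a sketch (the path and use of `PIG_OPTS` are illustrative assumptions, not from the ticket):

```shell
# Hypothetical workaround: give the Hadoop client a cassandra.yaml location
# as a file:// URI so YamlConfigurationLoader can resolve it.
# The path below is an example; substitute your own client-side yaml.
export PIG_OPTS="$PIG_OPTS -Dcassandra.config=file:///etc/cassandra/cassandra.yaml"
```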

 From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
 URI in variable: [cassandra.config].  Please prefix the file with file:/// 
 for local files or file://server/ for remote files.
 

 Key: CASSANDRA-8292
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8292
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Kearby

 Getting this error from Pig:
 {code}
 ERROR org.apache.cassandra.config.DatabaseDescriptor - Fatal configuration 
 error
 org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
 variable: [cassandra.config].  Please prefix the file with file:/// for local 
 files or file://server/ for remote files.  Aborting.
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73)
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
   at 
 org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
   at 
 org.apache.cassandra.hadoop.HadoopCompat.clinit(HadoopCompat.java:135)
   at 
 org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:120)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:273)
   at 
 org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1014)
   at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1031)
   at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:318)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.startReadyJobs(JobControl.java:238)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:269)
   at java.lang.Thread.run(Thread.java:745)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:260)
 Expecting URI in variable: [cassandra.config].  Please prefix the file with 
 file:/// for local files or file://server/ for remote files.  Aborting.
 Fatal configuration error; unable to start. See log for stacktrace.
 {code}
 Sample Pig Script:
 {code}
 grunt> sigs = load 'cql://socialdata/signal' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> a = limit sigs 5;
 grunt> dump a;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206771#comment-14206771
 ] 

Joshua McKenzie commented on CASSANDRA-8192:


A trivial attempt to reproduce with heap memory bounded to 512M on a 32-bit JVM 
on 32-bit Windows doesn't reproduce any of the above errors with stress.  In a 
more production-like environment, however, you're going to have almost no 
breathing room to create CFs or really do anything, due to the fixed memory 
overhead per CF from the slab allocator combined with a very small heap.

You really need to be running with more than 512M allocated to cassandra - a 
32-bit JVM, 32-bit OS, and 512m heap is far outside the bounds of recommended 
or required specs.  You should be able to go up to at least 1.4G 
([reference|http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_heap_32bit]),
 which may help alleviate some of these problems, though it's still not 
something one would consider for heavy production use.
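
As a concrete illustration of that advice, the heap can be raised by overriding the sizing variables in conf/cassandra-env.sh; a sketch only, assuming the variable names used by the stock 2.1 packaging (exact values depend on your hardware and the ~1.4G 32-bit ceiling referenced above):

```shell
# Hypothetical overrides for conf/cassandra-env.sh.
# Stay above 512M; a 32-bit JVM tops out around 1.4G of heap.
MAX_HEAP_SIZE="1024M"
# New-gen size, commonly set to roughly 1/4 of MAX_HEAP_SIZE.
HEAP_NEWSIZE="256M"
```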

Re: CASSANDRA-6283: you'll still see those error messages due to a race between 
CompactionScanner closing and sstables being deleted, but you should not see 
permanent accumulation of files requiring a node restart.  This is being 
addressed in CASSANDRA-8019.

Re: the other errors: I would be hesitant to take those errors at face value, as 
you've been OOM'ing on these nodes per CASSANDRA-8070 and the other tickets 
you've opened regarding 2.1.1.

 AssertionError in Memory.java
 -

 Key: CASSANDRA-8192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
 Attachments: cassandra.bat, cassandra.yaml, system.log


 Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs 
 during startup.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:135)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {panel}
 In the attached log you can also still see CASSANDRA-8069 and 
 CASSANDRA-6283.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8070) OutOfMemoryError in OptionalTasks

2014-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206773#comment-14206773
 ] 

Michael Shuler commented on CASSANDRA-8070:
---

[~Andie78] you might glean some tuning hints from 
http://opensourceconnections.com/blog/2013/08/31/building-the-perfect-cassandra-test-environment/
 - he set this up for 64MB test machines.

 OutOfMemoryError in OptionalTasks
 -

 Key: CASSANDRA-8070
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8070
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
 Attachments: system.log


 Since updating 2 of 12 nodes from 2.0.10 to the 2.1 release, an exception 
 occurs during operation.
 {panel:title=system.log}
 ERROR [OptionalTasks:1] 2014-10-02 09:26:42,821 CassandraDaemon.java:166 - 
 Exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.OutOfMemoryError: Java heap space
   at com.yammer.metrics.stats.Snapshot.init(Snapshot.java:30) 
 ~[metrics-core-2.2.0.jar:na]
   at 
 com.yammer.metrics.stats.ExponentiallyDecayingSample.getSnapshot(ExponentiallyDecayingSample.java:131)
  ~[metrics-core-2.2.0.jar:na]
   at com.yammer.metrics.core.Histogram.getSnapshot(Histogram.java:180) 
 ~[metrics-core-2.2.0.jar:na]
   at com.yammer.metrics.core.Timer.getSnapshot(Timer.java:183) 
 ~[metrics-core-2.2.0.jar:na]
   at 
 org.apache.cassandra.db.ColumnFamilyStore$2.run(ColumnFamilyStore.java:340) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 [na:1.7.0_67]
   at java.util.concurrent.FutureTask.runAndReset(Unknown Source) 
 [na:1.7.0_67]
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown
  Source) [na:1.7.0_67]
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
  Source) [na:1.7.0_67]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_67]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_67]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_67]
 {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-7386) JBOD threshold to prevent unbalanced disk utilization

2014-11-11 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-7386:
---

Assignee: Alan Boudreault  (was: Ryan McGuire)

 JBOD threshold to prevent unbalanced disk utilization
 -

 Key: CASSANDRA-7386
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7386
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Lohfink
Assignee: Alan Boudreault
Priority: Minor
 Fix For: 2.1.3

 Attachments: 7386-v1.patch, 7386v2.diff, Mappe1.ods, 
 mean-writevalue-7disks.png, patch_2_1_branch_proto.diff, 
 sstable-count-second-run.png


 Currently the disks are picked first by number of current tasks, 
 then by free space.  This helps with performance but can lead to large 
 differences in utilization in some (unlikely but possible) scenarios.  I've 
 seen 55% vs. 10% and heard reports of 90% vs. 10% on IRC, with both LCS and 
 STCS (although my suspicion is that STCS makes it worse, since it's harder to 
 keep balanced).
 I propose the algorithm be changed a little to have some maximum range of 
 utilization within which it will pick by free space over load (acknowledging 
 this can be slower).  So if disk A is 30% full and disk B is 5% full, it will 
 never pick A over B until they balance out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Support for frozen collections

2014-11-11 Thread tylerhobbs
Support for frozen collections

Patch by Tyler Hobbs; reviewed by Sylvain Lebresne for CASSANDRA-7859


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee55f361
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee55f361
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee55f361

Branch: refs/heads/cassandra-2.1
Commit: ee55f361b76f9ce7dd2a21a0ff4e80da931c77d2
Parents: 0337620
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Nov 11 12:40:48 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 12:40:48 2014 -0600

--
 CHANGES.txt |   1 +
 bin/cqlsh   |  18 +
 pylib/cqlshlib/cql3handling.py  |  19 +-
 .../apache/cassandra/cql3/AbstractMarker.java   |   2 +-
 src/java/org/apache/cassandra/cql3/CQL3Row.java |   2 +-
 .../org/apache/cassandra/cql3/CQL3Type.java | 158 ++--
 .../apache/cassandra/cql3/ColumnCondition.java  | 275 ---
 .../org/apache/cassandra/cql3/Constants.java|   3 +-
 src/java/org/apache/cassandra/cql3/Cql.g|  14 +-
 src/java/org/apache/cassandra/cql3/Lists.java   |  66 +-
 src/java/org/apache/cassandra/cql3/Maps.java|  62 +-
 .../org/apache/cassandra/cql3/Operation.java|  16 +-
 src/java/org/apache/cassandra/cql3/Sets.java|  80 +-
 src/java/org/apache/cassandra/cql3/Term.java|   6 +
 src/java/org/apache/cassandra/cql3/Tuples.java  |  13 +-
 .../apache/cassandra/cql3/UntypedResultSet.java |   6 +-
 .../apache/cassandra/cql3/UpdateParameters.java |   2 +-
 .../org/apache/cassandra/cql3/UserTypes.java|   3 +-
 .../cql3/statements/AlterTableStatement.java|  39 +-
 .../cql3/statements/AlterTypeStatement.java |  18 +-
 .../cql3/statements/CreateIndexStatement.java   |  24 +-
 .../cql3/statements/CreateTableStatement.java   |  28 +-
 .../cql3/statements/DeleteStatement.java|   2 +-
 .../cql3/statements/DropTypeStatement.java  |   6 +-
 .../cassandra/cql3/statements/IndexTarget.java  |  23 +-
 .../cassandra/cql3/statements/Restriction.java  |   6 +
 .../cql3/statements/SelectStatement.java| 107 ++-
 .../statements/SingleColumnRestriction.java |  24 +
 .../org/apache/cassandra/db/CFRowAdder.java |   4 +-
 .../db/composites/AbstractCellNameType.java |   4 +-
 .../cassandra/db/composites/CellNameType.java   |   2 +-
 .../composites/CompoundSparseCellNameType.java  |   5 +-
 .../cassandra/db/filter/ExtendedFilter.java |  47 +-
 .../cassandra/db/index/SecondaryIndex.java  |   6 +-
 .../db/index/SecondaryIndexManager.java |  32 +
 .../db/index/SecondaryIndexSearcher.java|   2 +
 .../db/index/composites/CompositesIndex.java|   4 +-
 .../CompositesIndexOnCollectionValue.java   |   2 +-
 .../cassandra/db/marshal/AbstractType.java  |  18 +
 .../cassandra/db/marshal/CollectionType.java| 113 ++-
 .../db/marshal/ColumnToCollectionType.java  |   2 +-
 .../apache/cassandra/db/marshal/FrozenType.java |  62 ++
 .../apache/cassandra/db/marshal/ListType.java   |  77 +-
 .../apache/cassandra/db/marshal/MapType.java| 105 ++-
 .../apache/cassandra/db/marshal/SetType.java|  69 +-
 .../apache/cassandra/db/marshal/TupleType.java  |   9 +-
 .../apache/cassandra/db/marshal/TypeParser.java |  34 +-
 .../apache/cassandra/db/marshal/UserType.java   |   2 +-
 .../apache/cassandra/hadoop/pig/CqlStorage.java |   8 +-
 .../serializers/CollectionSerializer.java   |  24 +-
 .../cassandra/serializers/ListSerializer.java   |  36 +-
 .../cassandra/serializers/MapSerializer.java|  38 +-
 .../apache/cassandra/transport/DataType.java|  16 +-
 .../org/apache/cassandra/cql3/CQLTester.java|  84 +-
 .../cassandra/cql3/ColumnConditionTest.java |  28 +-
 .../cassandra/cql3/FrozenCollectionsTest.java   | 791 +++
 .../apache/cassandra/cql3/TupleTypeTest.java|  44 +-
 .../db/marshal/CollectionTypeTest.java  |  22 +-
 .../cassandra/transport/SerDeserTest.java   |  13 +-
 .../cassandra/stress/generate/values/Lists.java |   2 +-
 .../cassandra/stress/generate/values/Sets.java  |   2 +-
 61 files changed, 2139 insertions(+), 591 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b1b5df8..5b63f48 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Support for frozen collections (CASSANDRA-7859)
  * Fix overflow on histogram computation (CASSANDRA-8028)
  * Have paxos reuse the timestamp generation of normal queries (CASSANDRA-7801)
 Merged from 2.0:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/bin/cqlsh

[1/3] cassandra git commit: Support for frozen collections

2014-11-11 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 033762099 -> ee55f361b


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/src/java/org/apache/cassandra/db/marshal/TypeParser.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TypeParser.java 
b/src/java/org/apache/cassandra/db/marshal/TypeParser.java
index 1b83180..cdb5679 100644
--- a/src/java/org/apache/cassandra/db/marshal/TypeParser.java
+++ b/src/java/org/apache/cassandra/db/marshal/TypeParser.java
@@ -241,7 +241,7 @@ public class TypeParser
 
     public Map<ByteBuffer, CollectionType> getCollectionsParameters() throws SyntaxException, ConfigurationException
     {
-        Map<ByteBuffer, CollectionType> map = new HashMap<ByteBuffer, CollectionType>();
+        Map<ByteBuffer, CollectionType> map = new HashMap<>();
 
         if (isEOS())
             return map;
@@ -539,25 +539,37 @@ public class TypeParser
      */
     public static String stringifyTypeParameters(List<AbstractType<?>> types)
     {
-        StringBuilder sb = new StringBuilder();
-        sb.append('(').append(StringUtils.join(types, ",")).append(')');
-        return sb.toString();
+        return stringifyTypeParameters(types, false);
+    }
+
+    /**
+     * Helper function to ease the writing of AbstractType.toString() methods.
+     */
+    public static String stringifyTypeParameters(List<AbstractType<?>> types, boolean ignoreFreezing)
+    {
+        StringBuilder sb = new StringBuilder("(");
+        for (int i = 0; i < types.size(); i++)
+        {
+            if (i > 0)
+                sb.append(",");
+            sb.append(types.get(i).toString(ignoreFreezing));
+        }
+        return sb.append(')').toString();
     }
 
-    public static String stringifyCollectionsParameters(Map<ByteBuffer, CollectionType> collections)
+    public static String stringifyCollectionsParameters(Map<ByteBuffer, ? extends CollectionType> collections)
     {
         StringBuilder sb = new StringBuilder();
         sb.append('(');
         boolean first = true;
-        for (Map.Entry<ByteBuffer, CollectionType> entry : collections.entrySet())
+        for (Map.Entry<ByteBuffer, ? extends CollectionType> entry : collections.entrySet())
         {
             if (!first)
-            {
                 sb.append(',');
-            }
+
             first = false;
             sb.append(ByteBufferUtil.bytesToHex(entry.getKey())).append(":");
-            entry.getValue().appendToStringBuilder(sb);
+            sb.append(entry.getValue());
         }
         sb.append(')');
         return sb.toString();
@@ -572,7 +584,9 @@
         {
             sb.append(',');
             sb.append(ByteBufferUtil.bytesToHex(columnNames.get(i))).append(":");
-            sb.append(columnTypes.get(i).toString());
+
+            // omit FrozenType(...) from fields because it is currently implicit
+            sb.append(columnTypes.get(i).toString(true));
         }
         sb.append(')');
         return sb.toString();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/src/java/org/apache/cassandra/db/marshal/UserType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/UserType.java 
b/src/java/org/apache/cassandra/db/marshal/UserType.java
index 44c208f..180d713 100644
--- a/src/java/org/apache/cassandra/db/marshal/UserType.java
+++ b/src/java/org/apache/cassandra/db/marshal/UserType.java
@@ -61,7 +61,7 @@ public class UserType extends TupleType
         for (Pair<ByteBuffer, AbstractType> p : params.right)
         {
             columnNames.add(p.left);
-            columnTypes.add(p.right);
+            columnTypes.add(p.right.freeze());
         }
         return new UserType(keyspace, name, columnNames, columnTypes);
     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java
index 7a6be71..2ba4dbf 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java
@@ -148,9 +148,9 @@ public class CqlStorage extends AbstractCassandraStorage
         }
         AbstractType elementValidator;
         if (validator instanceof SetType)
-            elementValidator = ((SetType<?>) validator).elements;
+            elementValidator = ((SetType<?>) validator).getElementsType();
         else if (validator instanceof ListType)
-            elementValidator = ((ListType<?>) validator).elements;
+            elementValidator = ((ListType<?>) validator).getElementsType();
         else 
             return;
 
@@ -167,8 +167,8 @@ public class CqlStorage extends AbstractCassandraStorage
 

[2/3] cassandra git commit: Support for frozen collections

2014-11-11 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index 7be635f..a17ee92 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@ -120,13 +120,12 @@ public class AlterTableStatement extends SchemaAlteringStatement
                     throw new InvalidRequestException(String.format("Cannot re-add previously dropped counter column %s", columnName));
 
                 AbstractType<?> type = validator.getType();
-                if (type instanceof CollectionType)
+                if (type.isCollection() && type.isMultiCell())
                 {
                     if (!cfm.comparator.supportCollections())
-                        throw new InvalidRequestException("Cannot use collection types with non-composite PRIMARY KEY");
+                        throw new InvalidRequestException("Cannot use non-frozen collections with a non-composite PRIMARY KEY");
                     if (cfm.isSuper())
-                        throw new InvalidRequestException("Cannot use collection types with Super column family");
-
+                        throw new InvalidRequestException("Cannot use non-frozen collections with super column families");
 
                     // If there used to be a collection column with the same name (that has been dropped), it will
                     // still be appear in the ColumnToCollectionType because or reasons explained on #6276. The same
@@ -151,35 +150,35 @@ public class AlterTableStatement extends SchemaAlteringStatement
             case ALTER:
                 assert columnName != null;
                 if (def == null)
-                    throw new InvalidRequestException(String.format("Cell %s was not found in table %s", columnName, columnFamily()));
+                    throw new InvalidRequestException(String.format("Column %s was not found in table %s", columnName, columnFamily()));
 
+                AbstractType<?> validatorType = validator.getType();
                 switch (def.kind)
                 {
                     case PARTITION_KEY:
-                        AbstractType<?> newType = validator.getType();
-                        if (newType instanceof CounterColumnType)
+                        if (validatorType instanceof CounterColumnType)
                             throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", columnName));
                         if (cfm.getKeyValidator() instanceof CompositeType)
                         {
                             List<AbstractType<?>> oldTypes = ((CompositeType) cfm.getKeyValidator()).types;
-                            if (!newType.isValueCompatibleWith(oldTypes.get(def.position())))
+                            if (!validatorType.isValueCompatibleWith(oldTypes.get(def.position())))
                                 throw new ConfigurationException(String.format("Cannot change %s from type %s to type %s: types are incompatible.",
                                                                                columnName,
                                                                                oldTypes.get(def.position()).asCQL3Type(),
                                                                                validator));
 
                             List<AbstractType<?>> newTypes = new ArrayList<AbstractType<?>>(oldTypes);
-                            newTypes.set(def.position(), newType);
+                            newTypes.set(def.position(), validatorType);
                             cfm.keyValidator(CompositeType.getInstance(newTypes));
                         }
                         else
                         {
-                            if (!newType.isValueCompatibleWith(cfm.getKeyValidator()))
+                            if (!validatorType.isValueCompatibleWith(cfm.getKeyValidator()))
                                 throw new ConfigurationException(String.format("Cannot change %s from type %s to type %s: types are incompatible.",
                                                                                columnName,
                                                                                cfm.getKeyValidator().asCQL3Type(),
                                                                                validator));
-                            cfm.keyValidator(newType);
+                            cfm.keyValidator(validatorType);
                         }
                         break;
                     case CLUSTERING_COLUMN:
@@ -187,22 +186,22 @@ public class AlterTableStatement extends 

[jira] [Updated] (CASSANDRA-8292) From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local files

2014-11-11 Thread Brandon Kearby (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Kearby updated CASSANDRA-8292:
--
Description: 
Getting this error from Pig:
Looks like the client-side Hadoop code is trying to locate cassandra.yaml.

{code}
ERROR org.apache.cassandra.config.DatabaseDescriptor - Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
variable: [cassandra.config].  Please prefix the file with file:/// for local 
files or file://server/ for remote files.  Aborting.
at 
org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73)
at 
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
at 
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
at 
org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
at 
org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
at 
org.apache.cassandra.hadoop.HadoopCompat.clinit(HadoopCompat.java:135)
at 
org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:120)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:273)
at 
org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1014)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1031)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
at 
org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:318)
at 
org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.startReadyJobs(JobControl.java:238)
at 
org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:269)
at java.lang.Thread.run(Thread.java:745)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:260)
Expecting URI in variable: [cassandra.config].  Please prefix the file with 
file:/// for local files or file://server/ for remote files.  Aborting.
Fatal configuration error; unable to start. See log for stacktrace.
{code}

Sample Pig Script:
{code}
grunt> sigs = load 'cql://socialdata/signal' using 
org.apache.cassandra.hadoop.pig.CqlNativeStorage();
grunt> a = limit sigs 5;
grunt> dump a;
{code}

  was:
Getting this error from Pig:
{code}
ERROR org.apache.cassandra.config.DatabaseDescriptor - Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
variable: [cassandra.config].  Please prefix the file with file:/// for local 
files or file://server/ for remote files.  Aborting.
at 
org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73)
at 
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
at 
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
at 
org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
at 
org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
at 
org.apache.cassandra.hadoop.HadoopCompat.clinit(HadoopCompat.java:135)
at 
org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:120)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:273)
at 
org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1014)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1031)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   

[jira] [Updated] (CASSANDRA-8046) Set base C* version in debs and strip -N, ~textN, +textN

2014-11-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8046:
--
Labels: qa-resolved  (was: )

 Set base C* version in debs and strip -N, ~textN, +textN
 

 Key: CASSANDRA-8046
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8046
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Trivial
  Labels: qa-resolved
 Fix For: 2.0.11, 2.1.1, 3.0

 Attachments: dpkg-parsechangelog_fix_2.1.txt


 Patch is for 2.1; please backport the line to the 2.0 branch.
 In the 2.1/trunk branches, debian/rules has a line to strip -N from the 
 changelog version.
 I'm working on Ubuntu PPA rebuilds using x.x.x~textN so that the Apache 
 Cassandra repository packages are higher versions (x.x~something is always a 
 lower version than x.x).
 Using this works for all the deb version possibilities (including the use of 
 +snapN as a higher version tool) to set just the base version when building 
 the Cassandra jars:
 {noformat}
 mshuler@hana:~$ echo Version: 2.1.1 | sed -ne 's/^Version: 
 \([^-|~|+]*\).*/\1/p'
 2.1.1
 mshuler@hana:~$ echo Version: 2.1.1-9 | sed -ne 's/^Version: 
 \([^-|~|+]*\).*/\1/p'
 2.1.1
 mshuler@hana:~$ echo Version: 2.1.1~ppa9 | sed -ne 's/^Version: 
 \([^-|~|+]*\).*/\1/p'
 2.1.1
 mshuler@hana:~$ echo Version: 2.1.1+snap9 | sed -ne 's/^Version: 
 \([^-|~|+]*\).*/\1/p'
 2.1.1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8040) Add bash-completion to debian/control Build-Depends

2014-11-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8040:
--
Labels: qa-resolved  (was: )

 Add bash-completion to debian/control Build-Depends
 ---

 Key: CASSANDRA-8040
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8040
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Trivial
  Labels: qa-resolved
 Fix For: 2.1.1

 Attachments: 8040.txt


 That's what I get for building CASSANDRA-6421 outside a clean cowbuilder ;)
 {noformat}
 dh_bash-completion
 make: dh_bash-completion: Command not found
 debian/rules:63: recipe for target 'binary-indep' failed
 make: *** [binary-indep] Error 127
 dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 
 2
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7956) nodetool compactionhistory crashes because of low heap size (GC overhead limit exceeded)

2014-11-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-7956:
--
Labels: qa-resolved  (was: )

 nodetool compactionhistory crashes because of low heap size (GC overhead 
 limit exceeded)
 --

 Key: CASSANDRA-7956
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7956
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.8
Reporter: Nikolai Grigoriev
Assignee: Michael Shuler
Priority: Trivial
  Labels: qa-resolved
 Fix For: 2.0.11, 2.1.1

 Attachments: 7956.txt, 
 nodetool_compactionhistory_128m_heap_output.txt.gz


 {code}
 ]# nodetool compactionhistory
 Compaction History:
 Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
 at java.io.ObjectStreamClass.newInstance(ObjectStreamClass.java:967)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1782)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
 at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
 at java.util.HashMap.readObject(HashMap.java:1180)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
 at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
 at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:500)
 at javax.management.openmbean.TabularDataSupport.readObject(TabularDataSupport.java:912)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
 at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
 at sun.rmi.server.UnicastRef.unmarshalValue(UnicastRef.java:325)
 at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:174)
 at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
 at javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown Source)
 at javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:906)
 at javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
 at com.sun.proxy.$Proxy3.getCompactionHistory(Unknown Source)
 {code}
 nodetool starts with -Xmx32m. That does not seem to be enough, at least in my 
 case, to show the history. I am not sure what the appropriate amount would be, 
 but increasing it to 128m definitely solves the problem. Output from the 
 modified nodetool is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7941) Fix bin/cassandra cassandra.logdir option in debian package

2014-11-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-7941:
--
Labels: qa-resolved  (was: )

 Fix bin/cassandra cassandra.logdir option in debian package
 ---

 Key: CASSANDRA-7941
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7941
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Michael Shuler
Assignee: Michael Shuler
  Labels: qa-resolved
 Fix For: 2.1.1

 Attachments: 
 0001-Fix-bin-cassandra-cassandra.logdir-option-in-debian-.patch


 Cassandra writes logs to $CASSANDRA_HOME/logs by default, and the debian 
 package needs to write to /var/log/cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7851) C* PID file should be readable by mere users

2014-11-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-7851:
--
Labels: qa-resolved  (was: )

 C* PID file should be readable by mere users
 

 Key: CASSANDRA-7851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7851
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.0.11, 2.1.1

 Attachments: 7851.txt


 {noformat}
 automaton@i-175d594e9:~$ service cassandra status
  * Cassandra is not running
 automaton@i-175d594e9:~$ sudo service cassandra status
  * Cassandra is running
 automaton@i-175d594e9:~$ ls -la /var/run/cassandra/
 ls: cannot open directory /var/run/cassandra/: Permission denied
 automaton@i-175d594e9:~$ sudo ls -la /var/run/cassandra/
 total 4
 drwxr-x---  2 cassandra cassandra  60 Aug 30 01:21 .
 drwxr-xr-x 15 root  root  700 Aug 30 01:21 ..
 -rw-r--r--  1 cassandra cassandra   4 Aug 30 01:21 cassandra.pid
 {noformat}
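The listing above shows /var/run/cassandra created with mode 0750, so an unprivileged user cannot traverse the directory to read the PID file, and the init script's status check fails without sudo. A small illustrative sketch of the permission check (the variable names are mine, not from the patch):

```python
import os
import stat
import tempfile

# Reproduce the directory-permission situation from the listing above:
# a 0750 directory is unreadable/untraversable by "other" users.
d = tempfile.mkdtemp()
os.chmod(d, 0o750)
mode = stat.S_IMODE(os.stat(d).st_mode)
others_can_read = bool(mode & stat.S_IROTH) and bool(mode & stat.S_IXOTH)
print(others_can_read)  # False: mirrors the "Permission denied" above

# The fix direction: make the run directory world-readable/traversable (0755)
# so mere users can stat the PID file for `service cassandra status`.
os.chmod(d, 0o755)
mode = stat.S_IMODE(os.stat(d).st_mode)
print(bool(mode & stat.S_IROTH) and bool(mode & stat.S_IXOTH))  # True
```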



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7468) Add time-based execution to cassandra-stress

2014-11-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-7468:
--
Labels: qa-resolved  (was: )

 Add time-based execution to cassandra-stress
 

 Key: CASSANDRA-7468
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7468
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Matt Kennedy
Assignee: Matt Kennedy
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.1.1

 Attachments: 7468v2.txt, trunk-7468-rebase.patch, trunk-7468.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8083) OpenJDK 6 Dependency in dsc20 RPM

2014-11-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8083:
--
Labels: qa-resolved  (was: )

 OpenJDK 6 Dependency in dsc20 RPM
 -

 Key: CASSANDRA-8083
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8083
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
 Environment: Packages pulled from the community repo at 
 http://rpm.datastax.com/community as per the installation guidelines.
Reporter: Timo Beckers
Assignee: Michael Shuler
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.0.11

   Original Estimate: 1h
  Remaining Estimate: 1h

 According to https://issues.apache.org/jira/browse/CASSANDRA-6925 and 
 https://issues.apache.org/jira/browse/CASSANDRA-7243, the Cassandra project 
 only produces platform-agnostic .tar.gz. The person in the second ticket was 
 referred to https://support.datastax.com/home to report RPM issues, but this 
 ticketing system requires a login to post and registration is not open. I 
 realize this is not the right issue tracker to post this on, but I hope to 
 reach the community repo maintainer through this one.
 The problem I'm facing only seems to occur for the 'cassandra20' package from 
 the Datastax community repo. On a fresh CentOS installation with no prior 
 Java stack installed:
 {noformat}
 # yum install dsc20
 Installing:
  dsc20
 Installing for dependencies:
  java-1.6.0-openjdk
 ...
  cassandra20
 {noformat}
 This inevitably results in the following log message:
 {noformat}
 Cassandra 2.0 and later require Java 7 or later.
 {noformat}
 and sometimes
 {noformat}
 Unsupported major.minor version 51.0
 {noformat}
 The issue seems to be with the 'cassandra20' package depending on openjdk6. I 
 noticed the same behaviour with dsc21 a couple of weeks ago, but that seems to 
 be fixed already. Could you please take a look, or assign this to someone who 
 can?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8019) Windows Unit tests and Dtests erroring due to sstable deleting task error

2014-11-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8019:
---
Attachment: 8019_v3.txt

v3 attached.  Refcounting on SSTR from within SSTableScanner; updated 
SSTableRewriterTest to try-with-resources for CompactionControllers and 
Scanners.  Passes all unit tests on Linux, dtest failures match the CI 
environment, and "Unable to delete" errors in Windows unit tests on the 2.1 
branch are greatly reduced.  I still see some "Unable to delete" messages at 
runtime while attempting to force compaction on a loaded system, but those are 
also reduced and I'll track them down in a separate effort.

I chose to go with refcounting rather than simply changing the ordering in 
CompactionTask as we need some codification of the ordering relationship 
between scanners and sstables in order to prevent this type of error in the 
future.

The SSTableScanner relies on internal data structures within the SSTR and, 
while the previous code did hold the reference open and prevent GC via the 
pointer it holds internally as well as the ifile and dfile references, our 
previous logical structure, in which there was no relationship between open 
SSTableScanners and SSTR deletion, was misleading.  While we replicate some of 
the references in the scanner so the SSTR can technically be deleted out of 
order, and we rely on the filesystem to keep the file open while we have a 
handle to it, a clearer relationship between the components is preferable IMO.
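The refcounting idea described above can be sketched in miniature; this is a hypothetical illustration of the acquire/release pattern, not the actual SSTableReader code:

```python
import threading

class RefCountedResource:
    """Sketch of the refcounting idea: a scanner takes a reference on the
    underlying sstable, and deletion happens only when the last reference
    is released, not merely when compaction drops its own reference."""

    def __init__(self, name):
        self.name = name
        self._refs = 1                # the owner holds the initial reference
        self._lock = threading.Lock()
        self.deleted = False

    def acquire(self):
        with self._lock:
            assert not self.deleted, "cannot reference a deleted resource"
            self._refs += 1
            return self

    def release(self):
        with self._lock:
            self._refs -= 1
            if self._refs == 0:
                self.deleted = True   # stand-in for deleting the files on disk

sstable = RefCountedResource("sstable-1")
scanner_ref = sstable.acquire()  # an open scanner keeps the sstable alive
sstable.release()                # compaction releases its reference first...
assert not sstable.deleted       # ...but the files outlive it
scanner_ref.release()            # scanner closed: last reference gone
assert sstable.deleted
```

This is the ordering guarantee the comment argues for: closing the scanner, not finishing compaction, is what permits deletion.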

[~jbellis]: I added you as reviewer when I was leaning towards the log 
suppression route, as it was a trivial effort; [~krummas]: would you be willing 
to review this, since you've been in the compaction and SSTableRewriter space 
recently?

 Windows Unit tests and Dtests erroring due to sstable deleting task error
 -

 Key: CASSANDRA-8019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8019
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 7
Reporter: Philip Thompson
Assignee: Joshua McKenzie
  Labels: windows
 Fix For: 2.1.3

 Attachments: 8019_aggressive_v1.txt, 8019_conservative_v1.txt, 
 8019_v2.txt, 8019_v3.txt


 Currently a large number of dtests and unit tests are erroring on windows 
 with the following error in the node log:
 {code}
 ERROR [NonPeriodicTasks:1] 2014-09-29 11:05:04,383 SSTableDeletingTask.java:89 - Unable to delete c:\\users\\username\\appdata\\local\\temp\\dtest-vr6qgw\\test\\node1\\data\\system\\local-7ad54392bcdd35a684174e047860b377\\system-local-ka-4-Data.db (it will be removed on server restart; we'll also retry after GC)\n
 {code}
 git bisect points to the following commit:
 {code}
 0e831007760bffced8687f51b99525b650d7e193 is the first bad commit
 commit 0e831007760bffced8687f51b99525b650d7e193
 Author: Benedict Elliott Smith bened...@apache.org
 Date:  Fri Sep 19 18:17:19 2014 +0100
 Fix resource leak in event of corrupt sstable
 patch by benedict; review by yukim for CASSANDRA-7932
 :100644 100644 d3ee7d99179dce03307503a8093eb47bd0161681 f55e5d27c1c53db3485154cd16201fc5419f32df M  CHANGES.txt
 :040000 040000 194f4c0569b6be9cc9e129c441433c5c14de7249 3c62b53b2b2bd4b212ab6005eab38f8a8e228923 M  src
 :040000 040000 64f49266e328b9fdacc516c52ef1921fe42e994f de2ca38232bee6d2a6a5e068ed9ee0fbbc5aaebe M  test
 {code}
 You can reproduce this by running simple_bootstrap_test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8290) archiving commitlogs after restart fails

2014-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206860#comment-14206860
 ] 

Michael Shuler commented on CASSANDRA-8290:
---

It looks like the archived commitlogs were already created, so the linking 
failed.  The exact steps you took during the upgrade might be interesting to 
look at, so we can reproduce the problem. Deleting the 
/var/lib/cassandra/archive/CommitLog*.log files that it's failing on would at 
least be a simple workaround.
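The failure mode quoted in the issue below is the standard hard-link semantics: linking onto an existing path fails. A hypothetical reproduction (file names are illustrative):

```python
import os
import tempfile

# Reproduce the archiver's failure mode: hard-linking a commitlog into the
# archive directory fails when a file with the target name already exists,
# which is exactly what /bin/ln reports as "File exists" (exit code 1).
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "CommitLog-3-1413451666161.log")
dst = os.path.join(tmp, "archive-CommitLog-3-1413451666161.log")
open(src, "w").close()

os.link(src, dst)             # first archive attempt succeeds
try:
    os.link(src, dst)         # second attempt: the archived copy already exists
    outcome = "linked"
except FileExistsError:
    outcome = "File exists"   # the same EEXIST condition /bin/ln hits
print(outcome)
```

Hence the workaround above: removing the stale archive copies lets the `ln` archive command succeed on the next startup.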

 archiving commitlogs after restart fails 
 -

 Key: CASSANDRA-8290
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8290
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.11 
 Debian wheezy
Reporter: Manuel Lausch
Priority: Minor

 After updating to Cassandra 2.0.11, Cassandra usually fails during startup 
 while archiving commitlogs.
 See the logfile:
 {noformat}
 ERROR [main] 2014-11-03 13:08:59,388 CassandraDaemon.java (line 513) Exception encountered during startup
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Exception while executing the command: /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 1, command output: /bin/ln: failed to create hard link `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:158)
 at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:124)
 at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:336)
 at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
 at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Exception while executing the command: /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 1, command output: /bin/ln: failed to create hard link `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:145)
 ... 4 more
 Caused by: java.lang.RuntimeException: java.io.IOException: Exception while executing the command: /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 1, command output: /bin/ln: failed to create hard link `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Exception while executing the command: /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 1, command output: /bin/ln: failed to create hard link `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at org.apache.cassandra.utils.FBUtilities.exec(FBUtilities.java:604)
 at org.apache.cassandra.db.commitlog.CommitLogArchiver.exec(CommitLogArchiver.java:197)
 at org.apache.cassandra.db.commitlog.CommitLogArchiver.access$100(CommitLogArchiver.java:44)
 at org.apache.cassandra.db.commitlog.CommitLogArchiver$1.runMayThrow(CommitLogArchiver.java:132)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ... 5 more
 ERROR [commitlog_archiver:1] 2014-11-03 13:08:59,388 CassandraDaemon.java (line 199) Exception in thread Thread[commitlog_archiver:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Exception while executing the command: /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 1, command 

[jira] [Comment Edited] (CASSANDRA-8019) Windows Unit tests and Dtests erroring due to sstable deleting task error

2014-11-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206856#comment-14206856
 ] 

Joshua McKenzie edited comment on CASSANDRA-8019 at 11/11/14 7:17 PM:
--

v3 attached.  Refcounting on SSTR from within SSTableScanner; updated 
SSTableRewriterTest to try-with-resources for CompactionControllers and 
Scanners.  Passes all unit tests on Linux, dtest failures match the CI 
environment, and "Unable to delete" errors in Windows unit tests on the 2.1 
branch are greatly reduced.  I still see some "Unable to delete" messages at 
runtime while attempting to force compaction on a loaded system, but those are 
also reduced and I'll track them down in a separate effort.

I chose to go with refcounting rather than simply changing the ordering in 
CompactionTask as we need some codification of the ordering relationship 
between scanners and sstables in order to prevent this type of error in the 
future.

The SSTableScanner relies on internal data structures within the SSTR and, 
while the previous code did hold the reference open and prevent GC via the 
pointer it holds internally as well as the ifile and dfile references, our 
previous logical structure, in which there was no relationship between open 
SSTableScanners and SSTR deletion, was misleading.  While we replicate some of 
the references in the scanner so the SSTR can technically be deleted out of 
order, and we rely on the filesystem to keep the file open while we have a 
handle to it, a clearer relationship between the components is preferable IMO.

[~jbellis]: I added you as reviewer when I was leaning towards the log 
suppression route, as it was a trivial effort; [~krummas]: would you be willing 
to review this, since you've been in the compaction and SSTableRewriter space 
recently?

Edit: I should note: while this is a symptom we see on Windows on the 2.1 
branch specifically, it isn't so much a Windows issue as a resource ordering 
issue centered on the compaction process and SSTableScanners.


was (Author: joshuamckenzie):
v3 attached.  Refcounting on SSTR from within SSTableScanner, updated 
SSTableRewriterTest to try-with-resource CompactionControllers and Scanners.  
Passes all unit tests on linux and dtest failures match CI environment, and 
Unable to delete errors on windows unit tests on 2.1 branch are greatly 
reduced.  I still see some Unable to delete messages during runtime while 
attempting to force compaction on a loaded system but those are also reduced 
and I'll track them down in a separate effort.

I chose to go with refcounting rather than simply changing the ordering in 
CompactionTask as we need some codification of the ordering relationship 
between scanners and sstables in order to prevent this type of error in the 
future.

The SSTableScanner relies on internal data structures within the SSTR and, 
while the previous code will hold the reference open and prevent GC due to the 
pointer it has internally as well as the ifile and dfile references, our 
previous logical structure of there being no relationship between 
SSTableScanners being open and SSTR deletion was misleading.  While we 
replicate some of the references in the scanner so the SSTR can technically be 
deleted out of order and we rely on the filesystem to keep the file open if we 
have a handle to it, a more clear relationship between the components is 
preferable IMO.

[~jbellis]: I threw you on this as reviewer when I was leaning towards log 
suppression route as it was a trivial effort; [~krummas]: would you be willing 
to review this as you've been in the compaction and SSTableRewriter space 
recently?

 Windows Unit tests and Dtests erroring due to sstable deleting task error
 -

 Key: CASSANDRA-8019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8019
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 7
Reporter: Philip Thompson
Assignee: Joshua McKenzie
  Labels: windows
 Fix For: 2.1.3

 Attachments: 8019_aggressive_v1.txt, 8019_conservative_v1.txt, 
 8019_v2.txt, 8019_v3.txt


 Currently a large number of dtests and unit tests are erroring on windows 
 with the following error in the node log:
 {code}
 ERROR [NonPeriodicTasks:1] 2014-09-29 11:05:04,383 SSTableDeletingTask.java:89 - Unable to delete c:\\users\\username\\appdata\\local\\temp\\dtest-vr6qgw\\test\\node1\\data\\system\\local-7ad54392bcdd35a684174e047860b377\\system-local-ka-4-Data.db (it will be removed on server restart; we'll also retry after GC)\n
 {code}
 git bisect points to the following commit:
 {code}
 0e831007760bffced8687f51b99525b650d7e193 is the first bad commit
 commit 0e831007760bffced8687f51b99525b650d7e193
 Author: 

[jira] [Commented] (CASSANDRA-8019) Windows Unit tests and Dtests erroring due to sstable deleting task error

2014-11-11 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206866#comment-14206866
 ] 

Marcus Eriksson commented on CASSANDRA-8019:


[~JoshuaMcKenzie] sure!

 Windows Unit tests and Dtests erroring due to sstable deleting task error
 -

 Key: CASSANDRA-8019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8019
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 7
Reporter: Philip Thompson
Assignee: Joshua McKenzie
  Labels: windows
 Fix For: 2.1.3

 Attachments: 8019_aggressive_v1.txt, 8019_conservative_v1.txt, 
 8019_v2.txt, 8019_v3.txt


 Currently a large number of dtests and unit tests are erroring on windows 
 with the following error in the node log:
 {code}
 ERROR [NonPeriodicTasks:1] 2014-09-29 11:05:04,383 SSTableDeletingTask.java:89 - Unable to delete c:\\users\\username\\appdata\\local\\temp\\dtest-vr6qgw\\test\\node1\\data\\system\\local-7ad54392bcdd35a684174e047860b377\\system-local-ka-4-Data.db (it will be removed on server restart; we'll also retry after GC)\n
 {code}
 git bisect points to the following commit:
 {code}
 0e831007760bffced8687f51b99525b650d7e193 is the first bad commit
 commit 0e831007760bffced8687f51b99525b650d7e193
 Author: Benedict Elliott Smith bened...@apache.org
 Date:  Fri Sep 19 18:17:19 2014 +0100
 Fix resource leak in event of corrupt sstable
 patch by benedict; review by yukim for CASSANDRA-7932
 :100644 100644 d3ee7d99179dce03307503a8093eb47bd0161681 f55e5d27c1c53db3485154cd16201fc5419f32df M  CHANGES.txt
 :040000 040000 194f4c0569b6be9cc9e129c441433c5c14de7249 3c62b53b2b2bd4b212ab6005eab38f8a8e228923 M  src
 :040000 040000 64f49266e328b9fdacc516c52ef1921fe42e994f de2ca38232bee6d2a6a5e068ed9ee0fbbc5aaebe M  test
 {code}
 You can reproduce this by running simple_bootstrap_test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/4] cassandra git commit: Support for frozen collections

2014-11-11 Thread tylerhobbs
Support for frozen collections

Patch by Tyler Hobbs; reviewed by Sylvain Lebresne for CASSANDRA-7859


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee55f361
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee55f361
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee55f361

Branch: refs/heads/trunk
Commit: ee55f361b76f9ce7dd2a21a0ff4e80da931c77d2
Parents: 0337620
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Nov 11 12:40:48 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 12:40:48 2014 -0600

--
 CHANGES.txt |   1 +
 bin/cqlsh   |  18 +
 pylib/cqlshlib/cql3handling.py  |  19 +-
 .../apache/cassandra/cql3/AbstractMarker.java   |   2 +-
 src/java/org/apache/cassandra/cql3/CQL3Row.java |   2 +-
 .../org/apache/cassandra/cql3/CQL3Type.java | 158 ++--
 .../apache/cassandra/cql3/ColumnCondition.java  | 275 ---
 .../org/apache/cassandra/cql3/Constants.java|   3 +-
 src/java/org/apache/cassandra/cql3/Cql.g|  14 +-
 src/java/org/apache/cassandra/cql3/Lists.java   |  66 +-
 src/java/org/apache/cassandra/cql3/Maps.java|  62 +-
 .../org/apache/cassandra/cql3/Operation.java|  16 +-
 src/java/org/apache/cassandra/cql3/Sets.java|  80 +-
 src/java/org/apache/cassandra/cql3/Term.java|   6 +
 src/java/org/apache/cassandra/cql3/Tuples.java  |  13 +-
 .../apache/cassandra/cql3/UntypedResultSet.java |   6 +-
 .../apache/cassandra/cql3/UpdateParameters.java |   2 +-
 .../org/apache/cassandra/cql3/UserTypes.java|   3 +-
 .../cql3/statements/AlterTableStatement.java|  39 +-
 .../cql3/statements/AlterTypeStatement.java |  18 +-
 .../cql3/statements/CreateIndexStatement.java   |  24 +-
 .../cql3/statements/CreateTableStatement.java   |  28 +-
 .../cql3/statements/DeleteStatement.java|   2 +-
 .../cql3/statements/DropTypeStatement.java  |   6 +-
 .../cassandra/cql3/statements/IndexTarget.java  |  23 +-
 .../cassandra/cql3/statements/Restriction.java  |   6 +
 .../cql3/statements/SelectStatement.java| 107 ++-
 .../statements/SingleColumnRestriction.java |  24 +
 .../org/apache/cassandra/db/CFRowAdder.java |   4 +-
 .../db/composites/AbstractCellNameType.java |   4 +-
 .../cassandra/db/composites/CellNameType.java   |   2 +-
 .../composites/CompoundSparseCellNameType.java  |   5 +-
 .../cassandra/db/filter/ExtendedFilter.java |  47 +-
 .../cassandra/db/index/SecondaryIndex.java  |   6 +-
 .../db/index/SecondaryIndexManager.java |  32 +
 .../db/index/SecondaryIndexSearcher.java|   2 +
 .../db/index/composites/CompositesIndex.java|   4 +-
 .../CompositesIndexOnCollectionValue.java   |   2 +-
 .../cassandra/db/marshal/AbstractType.java  |  18 +
 .../cassandra/db/marshal/CollectionType.java| 113 ++-
 .../db/marshal/ColumnToCollectionType.java  |   2 +-
 .../apache/cassandra/db/marshal/FrozenType.java |  62 ++
 .../apache/cassandra/db/marshal/ListType.java   |  77 +-
 .../apache/cassandra/db/marshal/MapType.java| 105 ++-
 .../apache/cassandra/db/marshal/SetType.java|  69 +-
 .../apache/cassandra/db/marshal/TupleType.java  |   9 +-
 .../apache/cassandra/db/marshal/TypeParser.java |  34 +-
 .../apache/cassandra/db/marshal/UserType.java   |   2 +-
 .../apache/cassandra/hadoop/pig/CqlStorage.java |   8 +-
 .../serializers/CollectionSerializer.java   |  24 +-
 .../cassandra/serializers/ListSerializer.java   |  36 +-
 .../cassandra/serializers/MapSerializer.java|  38 +-
 .../apache/cassandra/transport/DataType.java|  16 +-
 .../org/apache/cassandra/cql3/CQLTester.java|  84 +-
 .../cassandra/cql3/ColumnConditionTest.java |  28 +-
 .../cassandra/cql3/FrozenCollectionsTest.java   | 791 +++
 .../apache/cassandra/cql3/TupleTypeTest.java|  44 +-
 .../db/marshal/CollectionTypeTest.java  |  22 +-
 .../cassandra/transport/SerDeserTest.java   |  13 +-
 .../cassandra/stress/generate/values/Lists.java |   2 +-
 .../cassandra/stress/generate/values/Sets.java  |   2 +-
 61 files changed, 2139 insertions(+), 591 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b1b5df8..5b63f48 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Support for frozen collections (CASSANDRA-7859)
  * Fix overflow on histogram computation (CASSANDRA-8028)
  * Have paxos reuse the timestamp generation of normal queries (CASSANDRA-7801)
 Merged from 2.0:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/bin/cqlsh

[2/4] cassandra git commit: Support for frozen collections

2014-11-11 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee55f361/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index 7be635f..a17ee92 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@ -120,13 +120,12 @@ public class AlterTableStatement extends 
SchemaAlteringStatement
 throw new InvalidRequestException(String.format(Cannot 
re-add previously dropped counter column %s, columnName));
 
 AbstractType? type = validator.getType();
-if (type instanceof CollectionType)
+if (type.isCollection()  type.isMultiCell())
 {
 if (!cfm.comparator.supportCollections())
-throw new InvalidRequestException(Cannot use 
collection types with non-composite PRIMARY KEY);
+throw new InvalidRequestException(Cannot use 
non-frozen collections with a non-composite PRIMARY KEY);
 if (cfm.isSuper())
-throw new InvalidRequestException(Cannot use 
collection types with Super column family);
-
+throw new InvalidRequestException(Cannot use 
non-frozen collections with super column families);
 
 // If there used to be a collection column with the same 
name (that has been dropped), it will
 // still be appear in the ColumnToCollectionType because 
or reasons explained on #6276. The same
@@ -151,35 +150,35 @@ public class AlterTableStatement extends SchemaAlteringStatement
             case ALTER:
                 assert columnName != null;
                 if (def == null)
-                    throw new InvalidRequestException(String.format("Cell %s was not found in table %s", columnName, columnFamily()));
+                    throw new InvalidRequestException(String.format("Column %s was not found in table %s", columnName, columnFamily()));

+                AbstractType<?> validatorType = validator.getType();
                 switch (def.kind)
                 {
                     case PARTITION_KEY:
-                        AbstractType<?> newType = validator.getType();
-                        if (newType instanceof CounterColumnType)
+                        if (validatorType instanceof CounterColumnType)
                             throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", columnName));
                         if (cfm.getKeyValidator() instanceof CompositeType)
                         {
                             List<AbstractType<?>> oldTypes = ((CompositeType) cfm.getKeyValidator()).types;
-                            if (!newType.isValueCompatibleWith(oldTypes.get(def.position())))
+                            if (!validatorType.isValueCompatibleWith(oldTypes.get(def.position())))
                                 throw new ConfigurationException(String.format("Cannot change %s from type %s to type %s: types are incompatible.",
                                                                                columnName,
                                                                                oldTypes.get(def.position()).asCQL3Type(),
                                                                                validator));

                             List<AbstractType<?>> newTypes = new ArrayList<AbstractType<?>>(oldTypes);
-                            newTypes.set(def.position(), newType);
+                            newTypes.set(def.position(), validatorType);
                             cfm.keyValidator(CompositeType.getInstance(newTypes));
                         }
                         else
                         {
-                            if (!newType.isValueCompatibleWith(cfm.getKeyValidator()))
+                            if (!validatorType.isValueCompatibleWith(cfm.getKeyValidator()))
                                 throw new ConfigurationException(String.format("Cannot change %s from type %s to type %s: types are incompatible.",
                                                                                columnName,
                                                                                cfm.getKeyValidator().asCQL3Type(),
                                                                                validator));
-                            cfm.keyValidator(newType);
+                            cfm.keyValidator(validatorType);
                         }
                         break;
                     case CLUSTERING_COLUMN:
@@ -187,22 +186,22 @@ public class AlterTableStatement extends SchemaAlteringStatement

[4/4] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-11 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
src/java/org/apache/cassandra/db/index/SecondaryIndexManager.java
src/java/org/apache/cassandra/db/marshal/ListType.java
src/java/org/apache/cassandra/db/marshal/TypeParser.java
src/java/org/apache/cassandra/serializers/ListSerializer.java
src/java/org/apache/cassandra/serializers/MapSerializer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fb4356a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fb4356a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fb4356a3

Branch: refs/heads/trunk
Commit: fb4356a36a3066bf1c35d6f5e5b8472619b47960
Parents: f2e2862 ee55f36
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Nov 11 13:33:31 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Nov 11 13:33:31 2014 -0600

--
 CHANGES.txt |   1 +
 bin/cqlsh   |  18 +
 pylib/cqlshlib/cql3handling.py  |  19 +-
 .../apache/cassandra/cql3/AbstractMarker.java   |   2 +-
 src/java/org/apache/cassandra/cql3/CQL3Row.java |   2 +-
 .../org/apache/cassandra/cql3/CQL3Type.java | 158 ++--
 .../apache/cassandra/cql3/ColumnCondition.java  | 275 ---
 .../org/apache/cassandra/cql3/Constants.java|   2 +-
 src/java/org/apache/cassandra/cql3/Cql.g|  14 +-
 src/java/org/apache/cassandra/cql3/Lists.java   |  64 +-
 src/java/org/apache/cassandra/cql3/Maps.java|  61 +-
 .../org/apache/cassandra/cql3/Operation.java|  16 +-
 src/java/org/apache/cassandra/cql3/Sets.java|  80 +-
 src/java/org/apache/cassandra/cql3/Term.java|   6 +
 src/java/org/apache/cassandra/cql3/Tuples.java  |  13 +-
 .../apache/cassandra/cql3/UntypedResultSet.java |   6 +-
 .../apache/cassandra/cql3/UpdateParameters.java |   2 +-
 .../org/apache/cassandra/cql3/UserTypes.java|   3 +-
 .../cql3/statements/AlterTableStatement.java|  39 +-
 .../cql3/statements/AlterTypeStatement.java |  18 +-
 .../cql3/statements/CreateIndexStatement.java   |  24 +-
 .../cql3/statements/CreateTableStatement.java   |  28 +-
 .../cql3/statements/DeleteStatement.java|   2 +-
 .../cql3/statements/DropTypeStatement.java  |   6 +-
 .../cassandra/cql3/statements/IndexTarget.java  |  23 +-
 .../cassandra/cql3/statements/Restriction.java  |   6 +
 .../cql3/statements/SelectStatement.java|  84 +-
 .../statements/SingleColumnRestriction.java |  24 +
 .../org/apache/cassandra/db/CFRowAdder.java |   4 +-
 .../db/composites/AbstractCellNameType.java |   4 +-
 .../cassandra/db/composites/CellNameType.java   |   2 +-
 .../db/composites/CompositesBuilder.java|  41 +
 .../composites/CompoundSparseCellNameType.java  |   5 +-
 .../cassandra/db/filter/ExtendedFilter.java |  47 +-
 .../cassandra/db/index/SecondaryIndex.java  |   6 +-
 .../db/index/SecondaryIndexManager.java |  34 +-
 .../db/index/SecondaryIndexSearcher.java|   2 +
 .../db/index/composites/CompositesIndex.java|   4 +-
 .../CompositesIndexOnCollectionValue.java   |   2 +-
 .../cassandra/db/marshal/AbstractType.java  |  18 +
 .../cassandra/db/marshal/CollectionType.java| 113 ++-
 .../db/marshal/ColumnToCollectionType.java  |   2 +-
 .../apache/cassandra/db/marshal/FrozenType.java |  62 ++
 .../apache/cassandra/db/marshal/ListType.java   |  75 +-
 .../apache/cassandra/db/marshal/MapType.java| 103 ++-
 .../apache/cassandra/db/marshal/SetType.java|  68 +-
 .../apache/cassandra/db/marshal/TupleType.java  |   9 +-
 .../apache/cassandra/db/marshal/TypeParser.java |  33 +-
 .../apache/cassandra/db/marshal/UserType.java   |   2 +-
 .../apache/cassandra/hadoop/pig/CqlStorage.java |   8 +-
 .../serializers/CollectionSerializer.java   |  24 +-
 .../cassandra/serializers/ListSerializer.java   |  35 +-
 .../cassandra/serializers/MapSerializer.java|  38 +-
 .../apache/cassandra/transport/DataType.java|  16 +-
 .../org/apache/cassandra/cql3/CQLTester.java|  82 +-
 .../cassandra/cql3/ColumnConditionTest.java |  28 +-
 .../cassandra/cql3/FrozenCollectionsTest.java   | 791 +++
 .../apache/cassandra/cql3/TupleTypeTest.java|  44 +-
 .../db/marshal/CollectionTypeTest.java  |  22 +-
 .../cassandra/transport/SerDeserTest.java   |  13 +-
 .../cassandra/stress/generate/values/Lists.java |   2 +-
 .../cassandra/stress/generate/values/Sets.java  |   2 +-
 62 files changed, 2166 insertions(+), 571 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fb4356a3/CHANGES.txt
--
diff --cc CHANGES.txt
index 

[jira] [Updated] (CASSANDRA-7859) Extend freezing to collections

2014-11-11 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7859:
---
Attachment: 7859-final.txt

 Extend freezing to collections
 --

 Key: CASSANDRA-7859
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7859
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Tyler Hobbs
  Labels: cql
 Fix For: 2.1.3

 Attachments: 7859-final.txt, 7859-v1.txt


 This is the follow-up to CASSANDRA-7857, to extend {{frozen}} to collections. 
 This will allow things like {{map<text, frozen<map<int, int>>>}} for 
 instance, as well as allowing {{frozen}} collections in PK columns.
 Additionally (and that's almost a separate ticket but I figured we can start 
 discussing it here), we could decide that tuple is a frozen type by default. 
 This means that we would allow {{tuple<int, text>}} without needing to add 
 {{frozen}}, but we would require {{frozen}} for complex types inside tuples, so 
 {{tuple<int, list<text>>}} would be rejected, but not 
 {{tuple<int, frozen<list<text>>>}}.
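
As a concrete illustration of the proposal (sketched from the ticket text above; the table and column names are made up, and the DDL is untested):

```sql
-- Allowed once freezing extends to collections: a frozen collection as a
-- PK column, and a frozen collection nested inside another collection.
CREATE TABLE frozen_examples (
    k frozen<map<int, int>> PRIMARY KEY,
    v map<text, frozen<map<int, int>>>
);

-- With tuples frozen by default, tuple<int, text> needs no frozen<> wrapper,
-- while tuple<int, list<text>> would be rejected in favour of
-- tuple<int, frozen<list<text>>>.
```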



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8019) Windows Unit tests and Dtests erroring due to sstable deleting task error

2014-11-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8019:
---
Reviewer: Marcus Eriksson  (was: Jonathan Ellis)

 Windows Unit tests and Dtests erroring due to sstable deleting task error
 -

 Key: CASSANDRA-8019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8019
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 7
Reporter: Philip Thompson
Assignee: Joshua McKenzie
  Labels: windows
 Fix For: 2.1.3

 Attachments: 8019_aggressive_v1.txt, 8019_conservative_v1.txt, 
 8019_v2.txt, 8019_v3.txt


 Currently a large number of dtests and unit tests are erroring on windows 
 with the following error in the node log:
 {code}
 ERROR [NonPeriodicTasks:1] 2014-09-29 11:05:04,383 
 SSTableDeletingTask.java:89 - Unable to delete 
 c:\\users\\username\\appdata\\local\\temp\\dtest-vr6qgw\\test\\node1\\data\\system\\local-7ad54392bcdd35a684174e047860b377\\system-local-ka-4-Data.db
  (it will be removed on server restart; we'll also retry after GC)\n
 {code}
 git bisect points to the following commit:
 {code}
 0e831007760bffced8687f51b99525b650d7e193 is the first bad commit
 commit 0e831007760bffced8687f51b99525b650d7e193
 Author: Benedict Elliott Smith bened...@apache.org
 Date:  Fri Sep 19 18:17:19 2014 +0100
 Fix resource leak in event of corrupt sstable
 patch by benedict; review by yukim for CASSANDRA-7932
 :100644 100644 d3ee7d99179dce03307503a8093eb47bd0161681 
 f55e5d27c1c53db3485154cd16201fc5419f32df M  CHANGES.txt
 :04 04 194f4c0569b6be9cc9e129c441433c5c14de7249 
 3c62b53b2b2bd4b212ab6005eab38f8a8e228923 M  src
 :04 04 64f49266e328b9fdacc516c52ef1921fe42e994f 
 de2ca38232bee6d2a6a5e068ed9ee0fbbc5aaebe M  test
 {code}
 You can reproduce this by running simple_bootstrap_test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206929#comment-14206929
 ] 

Joshua McKenzie commented on CASSANDRA-8192:


See CASSANDRA-8070 comments for recommendations on running Cassandra in a 
low-memory environment.

 AssertionError in Memory.java
 -

 Key: CASSANDRA-8192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
 Attachments: cassandra.bat, cassandra.yaml, system.log


 Since the update of 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception 
 occurs during start-up.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:135)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {panel}
 In the attached log you can still see as well CASSANDRA-8069 and 
 CASSANDRA-6283.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7409) Allow multiple overlapping sstables in L1

2014-11-11 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206931#comment-14206931
 ] 

Carl Yeksigian commented on CASSANDRA-7409:
---

It appears that the L2->L3 compactions are including everything in L3 (kind of 
like L0->L1 now). This shouldn't happen, so it seems like it isn't choosing the 
best sstables to compact together in L3. It is the same in L0->L1 in the new 
LCS run.

The reason that we pick candidates from the bottom is that we want to use the 
IO we have on making progress on pushing data through the levels. This means 
that with overlapping, we want to get as much data as possible out of L0 at 
each compaction, but we aren't wasting IO because we shouldn't do any rewriting 
of data from L1 until we have nothing else to compact.

There is going to be an issue when we have MOLO=0, because we can't do anything 
about that overlapping, so it makes sense to keep the old behaviour in at least 
that case.

 Allow multiple overlapping sstables in L1
 -

 Key: CASSANDRA-7409
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7409
 Project: Cassandra
  Issue Type: Improvement
Reporter: Carl Yeksigian
Assignee: Carl Yeksigian
  Labels: compaction
 Fix For: 3.0


 Currently, when a normal L0 compaction takes place (not STCS), we take up to 
 MAX_COMPACTING_L0 L0 sstables and all of the overlapping L1 sstables and 
 compact them together. If we didn't have to deal with the overlapping L1 
 tables, we could compact a higher number of L0 sstables together into a set 
 of non-overlapping L1 sstables.
 This could be done by delaying the invariant that L1 has no overlapping 
 sstables. Going from L1 to L2, we would be compacting fewer sstables together 
 which overlap.
 When reading, we will not have the same one sstable per level (except L0) 
 guarantee, but this can be bounded (once we have too many sets of sstables, 
 either compact them back into the same level, or compact them up to the next 
 level).
 This could be generalized to allow any level to be the maximum for this 
 overlapping strategy.
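
The current L0 compaction behaviour described above can be sketched as a toy model (illustrative Python, not Cassandra source; the `MAX_COMPACTING_L0` value and the (first, last) token-range representation are assumptions):

```python
MAX_COMPACTING_L0 = 32  # assumed cap, standing in for the constant named above

def overlaps(a, b):
    # Token ranges are inclusive (first, last) pairs.
    return a[0] <= b[1] and b[0] <= a[1]

def pick_candidates(l0_ranges, l1_ranges):
    # Take up to MAX_COMPACTING_L0 L0 sstables...
    chosen_l0 = l0_ranges[:MAX_COMPACTING_L0]
    span = (min(r[0] for r in chosen_l0), max(r[1] for r in chosen_l0))
    # ...plus every L1 sstable overlapping their combined range.
    return chosen_l0, [r for r in l1_ranges if overlaps(r, span)]

chosen_l0, pulled_l1 = pick_candidates([(0, 100), (50, 150)],
                                       [(0, 40), (60, 90), (200, 300)])
```

Allowing overlap in L1 relaxes the second step: fewer L1 sstables need to be pulled in per compaction, at the cost of checking more than one L1 sstable per read.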



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8292) From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local files

2014-11-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8292:
--
Assignee: Joshua McKenzie

 From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
 URI in variable: [cassandra.config].  Please prefix the file with file:/// 
 for local files or file://server/ for remote files.
 

 Key: CASSANDRA-8292
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8292
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Kearby
Assignee: Joshua McKenzie

 Getting this error from Pig:
 Looks like the client side hadoop code is trying to locate the cassandra.yaml.
 {code}
 ERROR org.apache.cassandra.config.DatabaseDescriptor - Fatal configuration 
 error
 org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
 variable: [cassandra.config].  Please prefix the file with file:/// for local 
 files or file://server/ for remote files.  Aborting.
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73)
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
   at 
 org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
   at 
 org.apache.cassandra.hadoop.HadoopCompat.clinit(HadoopCompat.java:135)
   at 
 org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:120)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:273)
   at 
 org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1014)
   at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1031)
   at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:318)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.startReadyJobs(JobControl.java:238)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:269)
   at java.lang.Thread.run(Thread.java:745)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:260)
 Expecting URI in variable: [cassandra.config].  Please prefix the file with 
 file:/// for local files or file://server/ for remote files.  Aborting.
 Fatal configuration error; unable to start. See log for stacktrace.
 {code}
 Sample Pig Script:
 {code}
 grunt> sigs = load 'cql://socialdata/signal' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> a = limit sigs 5;

 grunt> dump a;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8292) From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local files

2014-11-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8292:
---
Attachment: 8292_v1.txt

Looks like it.  v1 attached that removes that check/include from 
HadoopCompat.java.  [~brandon.kearby]: do you have a test environment where you 
could give this patch a quick spin?

 From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
 URI in variable: [cassandra.config].  Please prefix the file with file:/// 
 for local files or file://server/ for remote files.
 

 Key: CASSANDRA-8292
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8292
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Kearby
Assignee: Joshua McKenzie
 Attachments: 8292_v1.txt


 Getting this error from Pig:
 Looks like the client side hadoop code is trying to locate the cassandra.yaml.
 {code}
 ERROR org.apache.cassandra.config.DatabaseDescriptor - Fatal configuration 
 error
 org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
 variable: [cassandra.config].  Please prefix the file with file:/// for local 
 files or file://server/ for remote files.  Aborting.
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73)
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
   at 
 org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
   at 
 org.apache.cassandra.hadoop.HadoopCompat.clinit(HadoopCompat.java:135)
   at 
 org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:120)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:273)
   at 
 org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1014)
   at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1031)
   at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:318)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.startReadyJobs(JobControl.java:238)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:269)
   at java.lang.Thread.run(Thread.java:745)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:260)
 Expecting URI in variable: [cassandra.config].  Please prefix the file with 
 file:/// for local files or file://server/ for remote files.  Aborting.
 Fatal configuration error; unable to start. See log for stacktrace.
 {code}
 Sample Pig Script:
 {code}
 grunt> sigs = load 'cql://socialdata/signal' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> a = limit sigs 5;

 grunt> dump a;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7826) support non-frozen, nested collections

2014-11-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14206985#comment-14206985
 ] 

Tyler Hobbs commented on CASSANDRA-7826:


bq. Will this support arbitrary nesting? If not, to what depth will we be able 
to nest collections?

Yes, up to the cell name size limit.

bq. we will be able to update and retrieve individual elements from the root 
collection and any nested collections, correct?

Yes, that's the goal.

 support non-frozen, nested collections
 --

 Key: CASSANDRA-7826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7826
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Tupshin Harper
Assignee: Tyler Hobbs
  Labels: ponies
 Fix For: 3.0


 The inability to nest collections is one of the bigger data modelling 
 limitations we have right now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8293) Restore Commitlogs throws exception on startup on trunk

2014-11-11 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8293:
--

 Summary: Restore Commitlogs throws exception on startup on trunk
 Key: CASSANDRA-8293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8293
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
 Fix For: 3.0
 Attachments: CommitLog-5-1415734519132.log, 
CommitLog-5-1415734519133.log, CommitLog-5-1415734519134.log, 
CommitLog-5-1415734519135.log, node1.log

Running Cassandra from trunk, restoring commitlogs generated from trunk throws 
the following exception:
{code}
ERROR [main] 2014-11-11 13:47:14,738 CassandraDaemon.java:482 - Exception 
encountered during startup
java.lang.IllegalStateException: Unsupported commit log version: 5
at 
org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:215)
 ~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) 
~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:300) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:465) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:557) 
[main/:na]
{code}

This is reproduced by the dtest 
{{snapshot_test.py:TestArchiveCommitlog.test_archive_commitlog}}. Attached are 
the system log of the node that threw the exception, and the commitlog files 
used. I am restoring the commitlog files by editing 
conf/commitlog_archiving.properties to look like this:
{code}
# Command to execute to make an archived commitlog live again.
# Parameters: %from is the full path to an archived commitlog segment (from restore_directories)
# %to is the live commitlog directory
# Example: restore_command=cp -f %from %to
restore_command=cp -f %from %to

# Directory to scan the recovery files in.
restore_directories=/Users/philipthompson/cstar/archived/
{code}
If the files are placed into the live commitlog directory manually, and then 
C* is started, there are no exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8293) Restore Commitlogs throws exception on startup on trunk

2014-11-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8293:
---
Assignee: Joshua McKenzie

 Restore Commitlogs throws exception on startup on trunk
 ---

 Key: CASSANDRA-8293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8293
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Joshua McKenzie
 Fix For: 3.0

 Attachments: CommitLog-5-1415734519132.log, 
 CommitLog-5-1415734519133.log, CommitLog-5-1415734519134.log, 
 CommitLog-5-1415734519135.log, node1.log


 Running Cassandra from trunk, restoring commitlogs generated from trunk 
 throws the following exception:
 {code}
 ERROR [main] 2014-11-11 13:47:14,738 CassandraDaemon.java:482 - Exception 
 encountered during startup
 java.lang.IllegalStateException: Unsupported commit log version: 5
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeRestoreArchive(CommitLogArchiver.java:215)
  ~[main/:na]
 at 
 org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:116) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:300) 
 [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:465)
  [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:557) 
 [main/:na]
 {code}
 This is reproduced by the dtest 
 {{snapshot_test.py:TestArchiveCommitlog.test_archive_commitlog}}. Attached 
 are the system log of the node that threw the exception, and the commitlog 
 files used. I am restoring the commitlog files by editing 
 conf/commitlog_archiving.properties to look like this:
 {code}
 # Command to execute to make an archived commitlog live again.
 # Parameters: %from is the full path to an archived commitlog segment (from restore_directories)
 # %to is the live commitlog directory
 # Example: restore_command=cp -f %from %to
 restore_command=cp -f %from %to

 # Directory to scan the recovery files in.
 restore_directories=/Users/philipthompson/cstar/archived/
 {code}
 If the files are placed into the live commitlog directory manually, and then 
 C* is started, there are no exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8294) Wrong command description for nodetool disablehandoff

2014-11-11 Thread DOAN DuyHai (JIRA)
DOAN DuyHai created CASSANDRA-8294:
--

 Summary: Wrong command description for nodetool disablehandoff
 Key: CASSANDRA-8294
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8294
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.1.1
Nodetool
Reporter: DOAN DuyHai
Priority: Trivial


The description for the nodetool command *disablehandoff* is wrong:

"Disable gossip (effectively marking the node down)"

It should be something like "Stop sending hinted handoff".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8290) archiving commitlogs after restart fails

2014-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207023#comment-14207023
 ] 

Michael Shuler commented on CASSANDRA-8290:
---

One option here is to reconfigure your commitlog archive command to force the 
link with {{ln -f ...}}, so the startup continues, relinking the same files 
again, if they exist. I'm not sure of the ramifications of *not* failing the 
startup as above - something indeed went wrong, do we really want to continue 
as if nothing happened?
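
A sketch of that workaround as an archive_command line in conf/commitlog_archiving.properties (the archive directory path is an assumption; adjust to your layout):

```properties
# Assumed example only: forcing the hard link with -f lets startup relink an
# already-archived segment instead of failing with "File exists".
archive_command=/bin/ln -f %path /var/lib/cassandra/archive/%name
```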

 archiving commitlogs after restart fails 
 -

 Key: CASSANDRA-8290
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8290
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.11 
 Debian wheezy
Reporter: Manuel Lausch
Priority: Minor

 After updating to Cassandra 2.0.11, Cassandra mostly fails during startup 
 while archiving commitlogs.
 See logfile:
 {noformat}
 ERROR [main] 2014-11-03 13:08:59,388 CassandraDaemon.java (line 513) Exception 
 encountered during startup
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.RuntimeException: java.io.IOException: Exception while executing 
 the command: /bin/ln 
 /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:158)
 at 
 org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:124)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:336)
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.RuntimeException: java.io.IOException: Exception while executing 
 the command: /bin/ln 
 /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:145)
 ... 4 more
 Caused by: java.lang.RuntimeException: java.io.IOException: Exception while 
 executing the command: /bin/ln 
 /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Exception while executing the command: 
 /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at org.apache.cassandra.utils.FBUtilities.exec(FBUtilities.java:604)
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.exec(CommitLogArchiver.java:197)
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.access$100(CommitLogArchiver.java:44)
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver$1.runMayThrow(CommitLogArchiver.java:132)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ... 5 more
 ERROR [commitlog_archiver:1] 2014-11-03 13:08:59,388 CassandraDaemon.java 
 (line 199) Exception in thread Thread[commitlog_archiver:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Exception while executing 
 the command: /bin/ln 
 /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command 
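The failure above is that `/bin/ln` exits with status 1 when the link target already exists, which is what happens when an archive attempt is retried after an interrupted run. A minimal sketch of an idempotent hard-link archive step; the class and method names are hypothetical (this is not Cassandra's actual `CommitLogArchiver`):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ArchiveSketch {
    // Hard-link 'segment' into 'archiveDir'; unlike plain /bin/ln, a retry
    // after a previously successful link is treated as success.
    static void archive(Path segment, Path archiveDir) throws IOException {
        Files.createDirectories(archiveDir);
        Path target = archiveDir.resolve(segment.getFileName());
        try {
            Files.createLink(target, segment); // equivalent of: /bin/ln segment target
        } catch (FileAlreadyExistsException e) {
            // Already archived by an earlier (possibly interrupted) run.
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("commitlog");
        Path seg = Files.createFile(dir.resolve("CommitLog-3-1.log"));
        Path archiveDir = dir.resolve("archive");
        archive(seg, archiveDir);
        archive(seg, archiveDir); // second call would make /bin/ln fail with "File exists"
        System.out.println(Files.exists(archiveDir.resolve("CommitLog-3-1.log")));
    }
}
```

An `archive_command` shell wrapper can get the same effect by checking for the target before calling `ln`, so retries after a crash do not abort commitlog archiving.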

[jira] [Commented] (CASSANDRA-6246) EPaxos

2014-11-11 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207036#comment-14207036
 ] 

sankalp kohli commented on CASSANDRA-6246:
--

One of the features we keep hearing about from people coming from an RDBMS 
background is replicated-log style replication. This provides timeline 
consistency when you do reads in another DC after a DC failure. Currently in 
C*, say you did 3 writes A, B and C, and B could not be replicated to the 
other DC. After failover, you will read A and C but not B. 

This breaks a lot of things for some applications. 

One of the advantages of EPaxos is that it orders all writes on all machines. 
If all writes are done via EPaxos, I think it provides the above timeline 
consistency. 

So apart from EPaxos being fast, I think this is a very important feature we 
get with it. 

What do you think, [~bdeggleston]?

 EPaxos
 --

 Key: CASSANDRA-6246
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6246
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Blake Eggleston
Priority: Minor

 One reason we haven't optimized our Paxos implementation with Multi-Paxos is 
 that Multi-Paxos requires leader election and hence a period of 
 unavailability when the leader dies.
 EPaxos is a Paxos variant that (1) requires fewer messages than Multi-Paxos, 
 (2) is particularly useful across multiple datacenters, and (3) allows any 
 node to act as coordinator: 
 http://sigops.org/sosp/sosp13/papers/p358-moraru.pdf
 However, there is substantial additional complexity involved if we choose to 
 implement it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8292) From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local files

2014-11-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8292:

Reviewer: Brandon Williams

+1

 From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
 URI in variable: [cassandra.config].  Please prefix the file with file:/// 
 for local files or file://server/ for remote files.
 

 Key: CASSANDRA-8292
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8292
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Kearby
Assignee: Joshua McKenzie
 Attachments: 8292_v1.txt


 Getting this error from Pig:
 Looks like the client-side Hadoop code is trying to locate cassandra.yaml.
 {code}
 ERROR org.apache.cassandra.config.DatabaseDescriptor - Fatal configuration 
 error
 org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
 variable: [cassandra.config].  Please prefix the file with file:/// for local 
 files or file://server/ for remote files.  Aborting.
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73)
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:133)
   at 
 org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
   at 
 org.apache.cassandra.hadoop.HadoopCompat.<clinit>(HadoopCompat.java:135)
   at 
 org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:120)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:273)
   at 
 org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1014)
   at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1031)
   at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:318)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.startReadyJobs(JobControl.java:238)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:269)
   at java.lang.Thread.run(Thread.java:745)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:260)
 Expecting URI in variable: [cassandra.config].  Please prefix the file with 
 file:/// for local files or file://server/ for remote files.  Aborting.
 Fatal configuration error; unable to start. See log for stacktrace.
 {code}
 Sample Pig Script:
 {code}
 grunt> sigs = load 'cql://socialdata/signal' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> a = limit sigs 5;
 grunt> dump a;
 {code}
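For reference, the check that produces this message boils down to parsing the `cassandra.config` system property as a URL, so a bare filesystem path is rejected. A simplified sketch, loosely modeled on `YamlConfigurationLoader.getStorageConfigURL` (class and method names below are illustrative, not the actual implementation):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class ConfigUrlSketch {
    // Simplified version of the URL check behind the error: the value of
    // -Dcassandra.config must parse as a URL, so a bare path such as
    // "/etc/cassandra/cassandra.yaml" fails with MalformedURLException.
    static URL storageConfigUrl(String configProperty) {
        try {
            return new URL(configProperty);
        } catch (MalformedURLException e) {
            throw new RuntimeException(
                "Expecting URI in variable: [cassandra.config]. "
                + "Please prefix the file with file:/// for local files.", e);
        }
    }

    public static void main(String[] args) {
        // With a file:// scheme the property parses; on the Pig/Hadoop client
        // JVM this means -Dcassandra.config=file:///path/to/cassandra.yaml
        URL ok = storageConfigUrl("file:///etc/cassandra/cassandra.yaml");
        System.out.println(ok.getProtocol());
    }
}
```

In other words, the workaround on the client side is to pass the yaml location with an explicit `file:///` scheme rather than a plain path.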



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6411) Issue with reading from sstable

2014-11-11 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207090#comment-14207090
 ] 

Sebastian Estevez commented on CASSANDRA-6411:
--

Found what looks to be this same issue via a Google Alert:

https://bugs.launchpad.net/opencontrail/+bug/1389663

Not sure what version they are running, but thought I would post it here as an 
FYI.

 Issue with reading from sstable
 ---

 Key: CASSANDRA-6411
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6411
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Mike Konobeevskiy
Assignee: Yuki Morishita
 Attachments: 6411-log.zip, 6411-sstables.zip


 With Cassandra 1.2.5 this happens almost every week. 
 {noformat}
 java.lang.RuntimeException: 
 org.apache.cassandra.io.sstable.CorruptSSTableException: 
 java.io.EOFException: EOF after 5105 bytes out of 19815
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
 java.io.EOFException: EOF after 5105 bytes out of 19815
   at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:91)
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:68)
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:44)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:101)
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:274)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
   at org.apache.cassandra.db.Table.getRow(Table.java:347)
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:70)
   at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
   ... 3 more
 Caused by: java.io.EOFException: EOF after 5105 bytes out of 19815
   at 
 org.apache.cassandra.io.util.FileUtils.skipBytesFully(FileUtils.java:350)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.skipShortLength(ByteBufferUtil.java:382)
   at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:72)
   ... 16 more
 {noformat}
 This is occurring roughly weekly with quite minimal usage.
 Recreating the CF does not help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-11 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/797e3d5c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/797e3d5c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/797e3d5c

Branch: refs/heads/trunk
Commit: 797e3d5c8f17324d194af886f750c1acf35528ad
Parents: fb4356a c80d2d3
Author: Yuki Morishita yu...@apache.org
Authored: Tue Nov 11 15:36:40 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Nov 11 15:36:40 2014 -0600

--
 src/java/org/apache/cassandra/tools/NodeTool.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/797e3d5c/src/java/org/apache/cassandra/tools/NodeTool.java
--



[1/3] cassandra git commit: Fix help doc on 'nodetool disablehandoff'

2014-11-11 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 ee55f361b -> c80d2d3a4
  refs/heads/trunk fb4356a36 -> 797e3d5c8


Fix help doc on 'nodetool disablehandoff'

CASSANDRA-8294


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c80d2d3a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c80d2d3a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c80d2d3a

Branch: refs/heads/cassandra-2.1
Commit: c80d2d3a4b9c3a825ef34674d7a9e3856e8bd498
Parents: ee55f36
Author: Yuki Morishita yu...@apache.org
Authored: Tue Nov 11 15:35:50 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Nov 11 15:35:50 2014 -0600

--
 src/java/org/apache/cassandra/tools/NodeTool.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c80d2d3a/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index c09751b..8a59e8d 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -2340,7 +2340,7 @@ public class NodeTool
 }
 }
 
-    @Command(name = "disablehandoff", description = "Disable gossip (effectively marking the node down)")
+    @Command(name = "disablehandoff", description = "Disable storing hinted handoffs")
 public static class DisableHandoff extends NodeToolCmd
 {
 @Override



[jira] [Resolved] (CASSANDRA-8294) Wrong command description for nodetool disablehandoff

2014-11-11 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-8294.
---
   Resolution: Fixed
Fix Version/s: 2.1.3
 Assignee: Yuki Morishita

Thanks, committed in c80d2d3a4b9c3a825ef34674d7a9e3856e8bd498.
This actually stops storing hints, so the new description is 'Disable storing 
hinted handoffs'.

 Wrong command description for nodetool disablehandoff
 -

 Key: CASSANDRA-8294
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8294
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.1.1
 Nodetool
Reporter: DOAN DuyHai
Assignee: Yuki Morishita
Priority: Trivial
 Fix For: 2.1.3


 The description for the nodetool command *disablehandoff* is wrong:
 "Disable gossip (effectively marking the node down)"
 It should be something like "Stop sending hinted handoffs".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Fix help doc on 'nodetool disablehandoff'

2014-11-11 Thread yukim
Fix help doc on 'nodetool disablehandoff'

CASSANDRA-8294


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c80d2d3a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c80d2d3a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c80d2d3a

Branch: refs/heads/trunk
Commit: c80d2d3a4b9c3a825ef34674d7a9e3856e8bd498
Parents: ee55f36
Author: Yuki Morishita yu...@apache.org
Authored: Tue Nov 11 15:35:50 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Nov 11 15:35:50 2014 -0600

--
 src/java/org/apache/cassandra/tools/NodeTool.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c80d2d3a/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index c09751b..8a59e8d 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -2340,7 +2340,7 @@ public class NodeTool
 }
 }
 
-    @Command(name = "disablehandoff", description = "Disable gossip (effectively marking the node down)")
+    @Command(name = "disablehandoff", description = "Disable storing hinted handoffs")
 public static class DisableHandoff extends NodeToolCmd
 {
 @Override



[jira] [Updated] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-11-11 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajanarayanan Thottuvaikkatumana updated CASSANDRA-7124:

Attachment: cassandra-trunk-temp-7124.txt

Temporary Patch for nodetool cleanup as per CASSANDRA-7124

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: cassandra-trunk-temp-7124.txt


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.
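The pattern CASSANDRA-4767 established for repair can be sketched as follows: the MBean extends `NotificationBroadcasterSupport` and emits a JMX notification when the long-running operation completes or fails, which a client such as nodetool can listen for. The class name and notification type below are hypothetical, not the ticket's actual implementation:

```java
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;
import java.util.concurrent.atomic.AtomicLong;

public class LongOpNotifier extends NotificationBroadcasterSupport {
    private final AtomicLong seq = new AtomicLong();

    // Emit a JMX notification when a long-running operation ends; with the
    // default (no-Executor) constructor, listeners run in the calling thread.
    void notifyFinished(String operation, boolean success) {
        sendNotification(new Notification(
                "operation.status",          // hypothetical notification type
                this,
                seq.incrementAndGet(),
                operation + (success ? " completed" : " failed")));
    }

    public static void main(String[] args) {
        LongOpNotifier notifier = new LongOpNotifier();
        // nodetool would subscribe over a remote JMX connection; here we
        // attach a local listener just to show the flow.
        notifier.addNotificationListener(
                (n, handback) -> System.out.println(n.getMessage()), null, null);
        notifier.notifyFinished("cleanup", true);
    }
}
```

The client then blocks on the notification instead of on the JMX method call itself, so a dropped connection or timeout no longer hides the operation's outcome.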



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-11-11 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207152#comment-14207152
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124:
-

[~yukim], I have attached the temporary patch for nodetool cleanup. Please 
have a look at it and tell me whether it is OK and whether I can go ahead with 
the implementation of the remaining options such as compact, decommission, 
move, relocate. Thanks

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: cassandra-trunk-temp-7124.txt


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-11-11 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207152#comment-14207152
 ] 

Rajanarayanan Thottuvaikkatumana edited comment on CASSANDRA-7124 at 11/11/14 
9:53 PM:
---

[~yukim] I have attached the temporary patch for nodetool cleanup. Please 
have a look at it and tell me whether it is OK and whether I can go ahead with 
the implementation of the remaining options such as compact, decommission, 
move, relocate. Thanks


was (Author: rnamboodiri):
[~yukim], I have attached the temporary patch for nodetool cleanup. Please 
have a look at it and tell me whether it is OK and whether I can go ahead with 
the implementation of the remaining options such as compact, decommission, 
move, relocate. Thanks

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: cassandra-trunk-temp-7124.txt


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jose Martinez Poblete (JIRA)
Jose Martinez Poblete created CASSANDRA-8295:


 Summary: Cassandra runs OOM @ 
java.util.concurrent.ConcurrentSkipListMap$HeadIndex
 Key: CASSANDRA-8295
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.3 w/Cassandra 2.0.11.82

{noformat} 
 INFO 10:36:21,991 Logging initialized
 INFO 10:36:22,016 DSE version: 4.5.3
 INFO 10:36:22,016 Hadoop version: 1.0.4.13
 INFO 10:36:22,017 Hive version: 0.12.0.5
 INFO 10:36:22,017 Pig version: 0.10.1
 INFO 10:36:22,018 Solr version: 4.6.0.2.8
 INFO 10:36:22,019 Sqoop version: 1.4.4.14.2
 INFO 10:36:22,019 Mahout version: 0.8
 INFO 10:36:22,020 Appender version: 3.0.2
 INFO 10:36:22,020 Spark version: 0.9.1
 INFO 10:36:22,021 Shark version: 0.9.1.4
{noformat}
Reporter: Jose Martinez Poblete
 Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
system.tgz.1, system.tgz.2, system.tgz.3

The customer runs a 3-node cluster.
Their dataset is less than 1 TB, and during data load one of the nodes enters 
a GC death spiral:

{noformat}
 INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) GC 
for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
 INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max is 
8375238656
...
 INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 63968 ms for 3 collections, 8369623824 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-08 12:14:55,643 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 41307 ms for 2 collections, 8370115376 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-08 12:20:06,197 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 309634 ms for 15 collections, 8374994928 used; max is 
8375238656
 INFO [ScheduledTasks:1] 2014-11-08 13:07:33,617 GCInspector.java (line 116) GC 
for ConcurrentMarkSweep: 2681100 ms for 143 collections, 8347631560 used; max 
is 8375238656
{noformat} 

Their application waits 1 minute before retrying when a timeout is returned.

This is what we find in their heap dumps:

{noformat}
Class Name                                                  | Shallow Heap | Retained Heap | Percentage
-------------------------------------------------------------------------------------------------------
org.apache.cassandra.db.Memtable @ 0x773f52f80              |           72 | 8,086,073,504 |     96.66%
|- java.util.concurrent.ConcurrentSkipListMap @ 0x724508fe8 |           48 | 8,086,073,320 |     96.66%

[jira] [Updated] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jose Martinez Poblete (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Martinez Poblete updated CASSANDRA-8295:
-
Environment: 
DSE 4.5.3 Cassandra 2.0.11.82


  was:
Cassandra 2.0.11.82



 Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex
 -

 Key: CASSANDRA-8295
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.3 Cassandra 2.0.11.82
Reporter: Jose Martinez Poblete
 Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
 system.tgz.1, system.tgz.2, system.tgz.3


 The customer runs a 3-node cluster.
 Their dataset is less than 1 TB, and during data load one of the nodes enters 
 a GC death spiral:
 {noformat}
  INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) 
 GC for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max 
 is 8375238656
 ...
  INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 63968 ms for 3 collections, 8369623824 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:55,643 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 41307 ms for 2 collections, 8370115376 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:20:06,197 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 309634 ms for 15 collections, 8374994928 used; 
 max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 13:07:33,617 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 2681100 ms for 143 collections, 8347631560 used; 
 max is 8375238656
 {noformat} 
 Their application waits 1 minute before a retry when a timeout is returned
 This is what we find on their heapdumps:
 {noformat}
 Class Name                                                  | Shallow Heap | Retained Heap | Percentage
 -------------------------------------------------------------------------------------------------------
 org.apache.cassandra.db.Memtable @ 0x773f52f80              |           72 | 8,086,073,504 |     96.66%
 |- java.util.concurrent.ConcurrentSkipListMap @ 0x724508fe8 |           48 | 8,086,073,320 |     96.66%
 |  |- 

[jira] [Updated] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jose Martinez Poblete (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Martinez Poblete updated CASSANDRA-8295:
-
Environment: 
Cassandra 2.0.11.82


  was:
DSE 4.5.3 w/Cassandra 2.0.11.82

{noformat} 
 INFO 10:36:21,991 Logging initialized
 INFO 10:36:22,016 DSE version: 4.5.3
 INFO 10:36:22,016 Hadoop version: 1.0.4.13
 INFO 10:36:22,017 Hive version: 0.12.0.5
 INFO 10:36:22,017 Pig version: 0.10.1
 INFO 10:36:22,018 Solr version: 4.6.0.2.8
 INFO 10:36:22,019 Sqoop version: 1.4.4.14.2
 INFO 10:36:22,019 Mahout version: 0.8
 INFO 10:36:22,020 Appender version: 3.0.2
 INFO 10:36:22,020 Spark version: 0.9.1
 INFO 10:36:22,021 Shark version: 0.9.1.4
{noformat}


 Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex
 -

 Key: CASSANDRA-8295
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.11.82
Reporter: Jose Martinez Poblete
 Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
 system.tgz.1, system.tgz.2, system.tgz.3


 The customer runs a 3-node cluster.
 Their dataset is less than 1 TB, and during data load one of the nodes enters 
 a GC death spiral:
 {noformat}
  INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) 
 GC for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max 
 is 8375238656
 ...
  INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 63968 ms for 3 collections, 8369623824 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:55,643 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 41307 ms for 2 collections, 8370115376 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:20:06,197 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 309634 ms for 15 collections, 8374994928 used; 
 max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 13:07:33,617 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 2681100 ms for 143 collections, 8347631560 used; 
 max is 8375238656
 {noformat} 
 Their application waits 1 minute before a retry when a timeout is returned
 This is what we find on their heapdumps:
 {noformat}
 Class Name                                      | Shallow Heap | Retained Heap | Percentage
 -------------------------------------------------------------------------------------------
 org.apache.cassandra.db.Memtable @ 0x773f52f80  |           72 | 8,086,073,504 |     96.66%

[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jose Martinez Poblete (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207203#comment-14207203
 ] 

Jose Martinez Poblete commented on CASSANDRA-8295:
--

More info from MAT

{noformat}
Class Name                                              |     Objects | Shallow Heap
java.nio.HeapByteBuffer                                 |  73,845,620 | 3,544,589,760
edu.stanford.ppl.concurrent.SnapTreeMap$Node            |  34,614,044 | 1,661,474,112
byte[]                                                  |   3,969,475 | 1,510,362,528
org.apache.cassandra.db.Column                          |  34,614,043 | 1,107,649,376
edu.stanford.ppl.concurrent.CopyOnWriteManager$COWEpoch |     411,924 |    39,544,704
java.nio.ByteBuffer[]                                   |     823,848 |    30,913,568
long[]                                                  |     411,924 |    22,819,304
edu.stanford.ppl.concurrent.SnapTreeMap$RootHolder      |     411,924 |    19,772,352
org.apache.cassandra.db.RangeTombstoneList              |     411,924 |    16,476,960
int[]                                                   |     411,924 |    15,456,784
edu.stanford.ppl.concurrent.CopyOnWriteManager$Latch    |     411,924 |    13,181,568
edu.stanford.ppl.concurrent.SnapTreeMap                 |     411,924 |    13,181,568
java.util.concurrent.atomic.AtomicReference             |     823,848 |    13,181,568
java.util.concurrent.ConcurrentSkipListMap$Node         |     411,929 |     9,886,296
org.apache.cassandra.db.DecoratedKey                    |     411,928 |     9,886,272
java.lang.Long                                          |     411,928 |     9,886,272
org.apache.cassandra.db.AtomicSortedColumns             |     411,924 |     9,886,176
org.apache.cassandra.db.AtomicSortedColumns$Holder      |     411,924 |     9,886,176
org.apache.cassandra.db.DeletionInfo                    |     411,924 |     9,886,176
org.apache.cassandra.dht.LongToken                      |     411,928 |     6,590,848
edu.stanford.ppl.concurrent.SnapTreeMap$COWMgr          |     411,924 |     6,590,784
java.util.concurrent.ConcurrentSkipListMap$Index        |     207,065 |     4,969,560
java.util.concurrent.ConcurrentSkipListMap$HeadIndex    |          16 |           512
org.apache.cassandra.db.DeletedColumn                   |           1 |            32

Total: 24 entries                                       | 155,076,837 | 8,086,073,256
{noformat}

 Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex
 -

 Key: CASSANDRA-8295
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.3 Cassandra 2.0.11.82
Reporter: Jose Martinez Poblete
 Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
 system.tgz.1, system.tgz.2, system.tgz.3


 Customer runs a 3-node cluster. 
 Their dataset is less than 1 TB, and during data load one of the nodes enters a 
 GC death spiral:
 {noformat}
  INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) 
 GC for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max 
 is 8375238656
 ...
  INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java 

[jira] [Issue Comment Deleted] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jose Martinez Poblete (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Martinez Poblete updated CASSANDRA-8295:
-
Comment: was deleted

(was: More info from MAT

{noformat}
Class Name                                                   Objects     Shallow Heap
(MAT shows the first 10 instances of each class; counts below are totals)
java.nio.HeapByteBuffer                                   73,845,620    3,544,589,760
edu.stanford.ppl.concurrent.SnapTreeMap$Node              34,614,044    1,661,474,112
byte[]                                                     3,969,475    1,510,362,528
org.apache.cassandra.db.Column                            34,614,043    1,107,649,376
edu.stanford.ppl.concurrent.CopyOnWriteManager$COWEpoch      411,924       39,544,704
java.nio.ByteBuffer[]                                        823,848       30,913,568
long[]                                                       411,924       22,819,304
edu.stanford.ppl.concurrent.SnapTreeMap$RootHolder           411,924       19,772,352
org.apache.cassandra.db.RangeTombstoneList                   411,924       16,476,960
int[]                                                        411,924       15,456,784
edu.stanford.ppl.concurrent.CopyOnWriteManager$Latch         411,924       13,181,568
edu.stanford.ppl.concurrent.SnapTreeMap                      411,924       13,181,568
java.util.concurrent.atomic.AtomicReference                  823,848       13,181,568
java.util.concurrent.ConcurrentSkipListMap$Node              411,929        9,886,296
org.apache.cassandra.db.DecoratedKey                         411,928        9,886,272
java.lang.Long                                               411,928        9,886,272
org.apache.cassandra.db.AtomicSortedColumns                  411,924        9,886,176
org.apache.cassandra.db.AtomicSortedColumns$Holder           411,924        9,886,176
org.apache.cassandra.db.DeletionInfo                         411,924        9,886,176
org.apache.cassandra.dht.LongToken                           411,928        6,590,848
edu.stanford.ppl.concurrent.SnapTreeMap$COWMgr               411,924        6,590,784
java.util.concurrent.ConcurrentSkipListMap$Index             207,065        4,969,560
java.util.concurrent.ConcurrentSkipListMap$HeadIndex              16              512
org.apache.cassandra.db.DeletedColumn                              1               32

Total: 24 entries                                        155,076,837    8,086,073,256
{noformat})

 Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex
 -

 Key: CASSANDRA-8295
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.3 Cassandra 2.0.11.82
Reporter: Jose Martinez Poblete
 Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
 system.tgz.1, system.tgz.2, system.tgz.3


 Customer runs a 3-node cluster. 
 Their dataset is less than 1 TB, and during data load one of the nodes enters a 
 GC death spiral:
 {noformat}
  INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) 
 GC for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max 
 is 8375238656
 ...
  INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java (line 116) 
 GC for 

[jira] [Resolved] (CASSANDRA-7867) Added column does not sort as the last column

2014-11-11 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-7867.
---
Resolution: Incomplete

Please reopen or file a new ticket if you can provide steps to reproduce the 
issue.

 Added column does not sort as the last column
 -

 Key: CASSANDRA-7867
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7867
 Project: Cassandra
  Issue Type: Bug
 Environment: 6-node Cassandra 2.0.9 with r=3
 java version 1.7.0_45
 Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
 Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
 Linux 3.10.40-50.136.amzn1.x86_64 #1 SMP Tue May 13 21:35:08 UTC 2014 x86_64 
 x86_64 x86_64 GNU/Linux
 Client is using DataStax java client v2.0.3
Reporter: Bob Vawter

 This appears to be a stack trace distinct from that in CASSANDRA-5856.
 {noformat}
 java.lang.AssertionError: Added column does not sort as the last column
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:115)
 at 
 org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:116)
 at 
 org.apache.cassandra.service.pager.AbstractQueryPager.discardHead(AbstractQueryPager.java:319)
 at 
 org.apache.cassandra.service.pager.AbstractQueryPager.discardLast(AbstractQueryPager.java:301)
 at 
 org.apache.cassandra.service.pager.AbstractQueryPager.discardFirst(AbstractQueryPager.java:219)
 at 
 org.apache.cassandra.service.pager.AbstractQueryPager.discardFirst(AbstractQueryPager.java:202)
 at 
 org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:124)
 at 
 org.apache.cassandra.service.pager.SliceQueryPager.fetchPage(SliceQueryPager.java:35)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:236)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:61)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:175)
 at 
 org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}
 The underlying table is an append-only event log and is usually being queried 
 in descending ts order:
 {noformat}
 CREATE TABLE ledger (
   id blob,
   ts timeuuid,
   data text,
   json_key text,
   type text,
   PRIMARY KEY ((id), ts)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.10 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-11 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207256#comment-14207256
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-8228:
-

I had a look at the code of org.apache.cassandra.service.ActiveRepairService and, 
as I understand it, when the error is raised after prepareLatch.await(1, 
TimeUnit.HOURS) times out, or when status.get() returns false, we don't have an 
object that carries the endpoint which did not provide the reply. Hence we will 
not be able to provide the host address which caused the error.

The other option is to iterate through all the endpoints, list all of their host 
addresses, and say something like "Some of the following endpoints did not 
provide a positive reply". From an application's perspective, I am not sure 
whether that makes sense, but it would certainly add some information to the 
error. Please let me know whether this should be implemented; I can make those 
changes. Thanks
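A minimal sketch of the second idea: record which endpoints have replied so that, on timeout, the missing ones can be named in the log. All class and method names below are invented for illustration; the real ActiveRepairService code is structured differently.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical tracker: each prepare response removes its endpoint from the
// pending set, so after a timeout the leftover entries are exactly the hosts
// that never replied and can be included in the error message.
public class PrepareTracker
{
    private final Set<String> pending;   // endpoints we are still waiting on
    private final CountDownLatch latch;

    public PrepareTracker(Set<String> endpoints)
    {
        this.pending = ConcurrentHashMap.newKeySet();
        this.pending.addAll(endpoints);
        this.latch = new CountDownLatch(endpoints.size());
    }

    // Called from the response handler for each endpoint that acknowledges.
    public void onReply(String endpoint)
    {
        if (pending.remove(endpoint))
            latch.countDown();
    }

    // Waits up to the timeout, then returns the endpoints that never replied
    // (empty on success; otherwise the culprits to log).
    public Set<String> awaitReplies(long timeout, TimeUnit unit) throws InterruptedException
    {
        latch.await(timeout, unit);
        return pending;
    }
}
```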

 Log malfunctioning host on prepareForRepair
 ---

 Key: CASSANDRA-8228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Juho Mäkinen
Priority: Trivial
  Labels: lhf

 Repair startup goes through ActiveRepairService.prepareForRepair(), which might 
 result in a "Repair failed with error Did not get positive replies from all 
 endpoints." error, but there is no other logging related to this error.
 It seems it would be trivial to modify prepareForRepair() to log the 
 host address which caused the error, and thus ease the debugging effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7168) Add repair aware consistency levels

2014-11-11 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207287#comment-14207287
 ] 

T Jake Luciani commented on CASSANDRA-7168:
---

To re-summarize this ticket, the goal is to improve the performance of queries that 
require consistency by using the repaired data to cut the amount of remote data to 
check at quorum.  Initially let's only try to perform this optimization when 
the coordinator is a partition replica.

I think the following would be a good way to start:

  * Add a REPAIRED_QUORUM level
  * Change StorageProxy.read to allow a special code path for REPAIRED_QUORUM 
that will
  ** Identify the max repairedAt time for the SSTables that cover the partition
  ** Pass the max repairedAt time to the ReadCommand and MessageService
  ** Execute the repaired-only read locally
  ** Merge the results

For the actual reads we will need to change the collation controller to take 
the max repairedAt time and ignore repaired sstables with repairedAt < 
the passed one.  We will also need to include tombstones in the results of 
the non-repaired column family read, since they need to be merged with the 
repaired result.
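The collation-controller change described above can be sketched as a simple filter over the candidate sstables; the SSTable class and its fields here are illustrative stand-ins, not Cassandra's real types.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical model of the sstable selection for the quorum leg of a
// REPAIRED_QUORUM read: drop sstables already covered by the repaired-only
// local read, i.e. repaired sstables older than the coordinator's cutoff.
public class RepairedFilter
{
    public static class SSTable
    {
        public final String name;
        public final long repairedAt;   // 0 means unrepaired

        public SSTable(String name, long repairedAt)
        {
            this.name = name;
            this.repairedAt = repairedAt;
        }
    }

    // Keep unrepaired sstables and any repaired at or after the cutoff;
    // everything else is assumed covered by the repaired-only read.
    public static List<SSTable> unrepairedPortion(List<SSTable> sstables, long maxRepairedAt)
    {
        return sstables.stream()
                       .filter(s -> s.repairedAt == 0 || s.repairedAt >= maxRepairedAt)
                       .collect(Collectors.toList());
    }
}
```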

 Add repair aware consistency levels
 ---

 Key: CASSANDRA-7168
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7168
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
  Labels: performance
 Fix For: 3.0


 With CASSANDRA-5351 and CASSANDRA-2424 I think there is an opportunity to 
 avoid a lot of extra disk I/O when running queries with higher consistency 
 levels.  
 Since repaired data is by definition consistent and we know which sstables 
 are repaired, we can optimize the read path by having a REPAIRED_QUORUM which 
 breaks reads into two phases:
  
   1) Read from one replica the result from the repaired sstables. 
   2) Read from a quorum only the un-repaired data.
 For the node performing 1) we can pipeline the call so it's a single hop.
 In the long run (assuming data is repaired regularly) we will end up with 
 much closer to CL.ONE performance while maintaining consistency.
 Some things to figure out:
   - If repairs fail on some nodes we can have a situation where we don't have 
 a consistent repaired state across the replicas.  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207322#comment-14207322
 ] 

Michael Shuler commented on CASSANDRA-8295:
---

Full disk could cause whacky behavior. (Just digging through the logs; not a 
cause/effect statement.)
{noformat}
ERROR [CompactionExecutor:302] 2014-11-06 09:21:46,779 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:302,1,main]
FSWriteError in 
/cassandra/data/mfgprod/test_results_new7/mfgprod-test_results_new7-tmp-jb-18858-Filter.db
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:478)
at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:212)
at 
org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:304)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:209)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.write(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:295)
at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
at 
org.apache.cassandra.utils.BloomFilterSerializer.serialize(BloomFilterSerializer.java:34)
at 
org.apache.cassandra.utils.Murmur3BloomFilter$Murmur3BloomFilterSerializer.serialize(Murmur3BloomFilter.java:44)
at org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:41)
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:471)
... 13 more
ERROR [CompactionExecutor:302] 2014-11-06 09:21:46,780 StorageService.java 
(line 366) Stopping gossiper
 WARN [CompactionExecutor:302] 2014-11-06 09:21:46,781 StorageService.java 
(line 280) Stopping gossip by operator request
 INFO [CompactionExecutor:302] 2014-11-06 09:21:46,781 Gossiper.java (line 
1279) Announcing shutdown
ERROR [CompactionExecutor:302] 2014-11-06 09:21:48,781 StorageService.java 
(line 371) Stopping RPC server
 INFO [CompactionExecutor:302] 2014-11-06 09:21:48,781 ThriftServer.java (line 
141) Stop listening to thrift clients
ERROR [CompactionExecutor:302] 2014-11-06 09:21:48,782 StorageService.java 
(line 376) Stopping native transport
 INFO [CompactionExecutor:302] 2014-11-06 09:21:48,789 Server.java (line 182) 
Stop listening for CQL clients
ERROR [CompactionExecutor:302] 2014-11-06 09:21:48,789 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:302,1,main]
FSWriteError in 
/cassandra/data/mfgprod/test_results_new7/mfgprod-test_results_new7-tmp-jb-18858-Filter.db
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:478)
at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:212)
at 
org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:304)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:209)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.write(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:295)
at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
at 

[jira] [Comment Edited] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207322#comment-14207322
 ] 

Michael Shuler edited comment on CASSANDRA-8295 at 11/11/14 11:34 PM:
--

Full disk could cause whacky behavior - see system.log.3. (Just digging through the 
logs; not a cause/effect statement.)
{noformat}
ERROR [CompactionExecutor:302] 2014-11-06 09:21:46,779 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:302,1,main]
FSWriteError in 
/cassandra/data/mfgprod/test_results_new7/mfgprod-test_results_new7-tmp-jb-18858-Filter.db
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:478)
at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:212)
at 
org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:304)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:209)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.write(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:295)
at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
at 
org.apache.cassandra.utils.BloomFilterSerializer.serialize(BloomFilterSerializer.java:34)
at 
org.apache.cassandra.utils.Murmur3BloomFilter$Murmur3BloomFilterSerializer.serialize(Murmur3BloomFilter.java:44)
at org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:41)
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:471)
... 13 more
ERROR [CompactionExecutor:302] 2014-11-06 09:21:46,780 StorageService.java 
(line 366) Stopping gossiper
 WARN [CompactionExecutor:302] 2014-11-06 09:21:46,781 StorageService.java 
(line 280) Stopping gossip by operator request
 INFO [CompactionExecutor:302] 2014-11-06 09:21:46,781 Gossiper.java (line 
1279) Announcing shutdown
ERROR [CompactionExecutor:302] 2014-11-06 09:21:48,781 StorageService.java 
(line 371) Stopping RPC server
 INFO [CompactionExecutor:302] 2014-11-06 09:21:48,781 ThriftServer.java (line 
141) Stop listening to thrift clients
ERROR [CompactionExecutor:302] 2014-11-06 09:21:48,782 StorageService.java 
(line 376) Stopping native transport
 INFO [CompactionExecutor:302] 2014-11-06 09:21:48,789 Server.java (line 182) 
Stop listening for CQL clients
ERROR [CompactionExecutor:302] 2014-11-06 09:21:48,789 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:302,1,main]
FSWriteError in 
/cassandra/data/mfgprod/test_results_new7/mfgprod-test_results_new7-tmp-jb-18858-Filter.db
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:478)
at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:212)
at 
org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:304)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:209)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.write(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:295)
at 

[jira] [Commented] (CASSANDRA-8292) From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local fil

2014-11-11 Thread Brandon Kearby (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207340#comment-14207340
 ] 

Brandon Kearby commented on CASSANDRA-8292:
---

[~JoshuaMcKenzie], Hey thanks for the quick turnaround! I applied the patch and 
it looks good.

Thanks
-Brandon

 From Pig: org.apache.cassandra.exceptions.ConfigurationException: Expecting 
 URI in variable: [cassandra.config].  Please prefix the file with file:/// 
 for local files or file://server/ for remote files.
 

 Key: CASSANDRA-8292
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8292
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Kearby
Assignee: Joshua McKenzie
 Attachments: 8292_v1.txt


 Getting this error from Pig:
 It looks like the client-side Hadoop code is trying to locate cassandra.yaml.
 {code}
 ERROR org.apache.cassandra.config.DatabaseDescriptor - Fatal configuration 
 error
 org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
 variable: [cassandra.config].  Please prefix the file with file:/// for local 
 files or file://server/ for remote files.  Aborting.
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73)
   at 
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
   at 
 org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
   at 
 org.apache.cassandra.hadoop.HadoopCompat.clinit(HadoopCompat.java:135)
   at 
 org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:120)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:273)
   at 
 org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1014)
   at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1031)
   at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:531)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:318)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.startReadyJobs(JobControl.java:238)
   at 
 org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:269)
   at java.lang.Thread.run(Thread.java:745)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:260)
 Expecting URI in variable: [cassandra.config].  Please prefix the file with 
 file:/// for local files or file://server/ for remote files.  Aborting.
 Fatal configuration error; unable to start. See log for stacktrace.
 {code}
 Sample Pig Script:
 {code}
 grunt> sigs = load 'cql://socialdata/signal' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> a = limit sigs 5;
 grunt> dump a;
 {code}
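As an illustration of what the error message asks for, a bare path can be normalized into the file:/// form before being handed to the cassandra.config system property. The ConfigUri class and toConfigUri method are invented for this sketch and are not part of Cassandra or the attached patch.

```java
import java.io.File;

// Hypothetical helper: the cassandra.config property must be a URI
// (file:///... for local files), not a bare filesystem path.
public class ConfigUri
{
    public static String toConfigUri(String value)
    {
        // Already a URI (file://..., http://..., etc.)? Leave it alone.
        if (value.matches("^[a-zA-Z][a-zA-Z0-9+.\\-]*:.*"))
            return value;

        // Otherwise treat it as a local path and build the file:/// form
        // the error message asks for.
        String abs = new File(value).getAbsolutePath().replace(File.separatorChar, '/');
        return "file://" + (abs.startsWith("/") ? abs : "/" + abs);
    }
}
```

Passing the result via `-Dcassandra.config=...` on the Pig/Hadoop client JVM would then satisfy the URI check.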



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8290) archiving commitlogs after restart fails

2014-11-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-8290:
--

Assignee: Sam Tunnicliffe

 archiving commitlogs after restart fails 
 -

 Key: CASSANDRA-8290
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8290
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.11 
 Debian wheezy
Reporter: Manuel Lausch
Assignee: Sam Tunnicliffe
Priority: Minor

 After the update to Cassandra 2.0.11, Cassandra usually fails during startup while 
 archiving commitlogs.
 See logfile:
 {noformat}
 ERROR [main] 2014-11-03 13:08:59,388 CassandraDaemon.java (line 513) Exception 
 encountered during startup
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.RuntimeException: java.io.IOException: Exception while executing 
 the command: /bin/ln 
 /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:158)
 at 
 org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:124)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:336)
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.RuntimeException: java.io.IOException: Exception while executing 
 the command: /bin/ln 
 /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:145)
 ... 4 more
 Caused by: java.lang.RuntimeException: java.io.IOException: Exception while 
 executing the command: /bin/ln 
 /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Exception while executing the command: 
 /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at org.apache.cassandra.utils.FBUtilities.exec(FBUtilities.java:604)
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.exec(CommitLogArchiver.java:197)
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver.access$100(CommitLogArchiver.java:44)
 at 
 org.apache.cassandra.db.commitlog.CommitLogArchiver$1.runMayThrow(CommitLogArchiver.java:132)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ... 5 more
 ERROR [commitlog_archiver:1] 2014-11-03 13:08:59,388 CassandraDaemon.java 
 (line 199) Exception in thread Thread[commitlog_archiver:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Exception while executing 
 the command: /bin/ln 
 /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log 
 /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: 
 1, command output: /bin/ln: failed to create hard link 
 `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at 
 
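The failure above comes from a non-idempotent archive step: after an interrupted run, the retry finds the hard link already in place and /bin/ln exits non-zero. A hypothetical Java sketch of an idempotent version (the java.nio analogue of /bin/ln; names are illustrative, not CommitLogArchiver's actual code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class ArchiveLinkDemo {
    // Deleting a stale target first makes the archive step idempotent, so a
    // retry after a crash between linking and recording no longer fails with
    // "File exists". Names are invented for this sketch.
    static void archive(Path src, Path dst) throws IOException {
        Files.deleteIfExists(dst);   // tolerate a link left by an interrupted run
        Files.createLink(dst, src);  // hard link, the /bin/ln equivalent
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("commitlog-demo");
        Path src = Files.createFile(dir.resolve("CommitLog-3-demo.log"));
        Path dst = dir.resolve("CommitLog-3-demo.archived");
        archive(src, dst);
        archive(src, dst);           // second attempt succeeds instead of throwing
        System.out.println(Files.exists(dst));
    }
}
```

The same effect can be had from the shell side by using an archive command that tolerates an existing target (e.g. forcing the link) rather than plain /bin/ln.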

[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207369#comment-14207369
 ] 

Michael Shuler commented on CASSANDRA-8295:
---

Some system.log.1 observations:
- lots of node up/down gossip
- box is getting batched to death
- system.hints build up to over 90,000 hints
- the operator shut down gossip?

 Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex
 -

 Key: CASSANDRA-8295
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.3 Cassandra 2.0.11.82
Reporter: Jose Martinez Poblete
 Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
 system.tgz.1, system.tgz.2, system.tgz.3


 Customer runs a 3-node cluster.
 Their dataset is less than 1 TB, and during data load one of the nodes enters
 a GC death spiral:
 {noformat}
  INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) 
 GC for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max 
 is 8375238656
 ...
  INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 63968 ms for 3 collections, 8369623824 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:55,643 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 41307 ms for 2 collections, 8370115376 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:20:06,197 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 309634 ms for 15 collections, 8374994928 used; 
 max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 13:07:33,617 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 2681100 ms for 143 collections, 8347631560 used; 
 max is 8375238656
 {noformat} 
 Their application waits 1 minute before a retry when a timeout is returned
 This is what we find on their heapdumps:
 {noformat}
 Class Name                                                  | Shallow Heap | Retained Heap | Percentage
 -------------------------------------------------------------------------------------------------------
 org.apache.cassandra.db.Memtable @ 0x773f52f80              |           72 | 8,086,073,504 |     96.66%
 |- java.util.concurrent.ConcurrentSkipListMap @ 0x724508fe8 |

[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207374#comment-14207374
 ] 

Jonathan Ellis commented on CASSANDRA-8295:
---

A full disk is definitely the immediate cause of the OOM: the node can't 
flush, so memtables fill up past the configured limit.


[jira] [Commented] (CASSANDRA-6246) EPaxos

2014-11-11 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207383#comment-14207383
 ] 

sankalp kohli commented on CASSANDRA-6246:
--

Regarding the patch review: I have some cleanup suggestions for the code. I 
have not yet gone through all of it, and I won't list the things that still 
need to be handled or implemented.
1) In the DependencyManager, we might want to keep the last executed instance; 
otherwise we won't know whether the next one depends on the previous one or 
whether we have missed any in between. 
2) You might want to create Java packages and move files there. For example, 
in the repair code we keep all the requests/responses in 
org.apache.cassandra.repair.messages. We can do the same for verb handlers, etc. 
3) We should add the new verbs to DatabaseDescriptor.getTimeout(); otherwise 
they will use the default timeout. I fixed this for the current Paxos 
implementation in CASSANDRA-7752.
4) PreacceptResponse.failure can also accept missingInstances in the 
constructor. You can then make it final instead of volatile. 
5) ExecutionSorter.getOrder(): here the condition uncommitted.size() == 0 is 
always true. Also, loadedScc is empty because we never insert into it. 
6) In ExecuteTask.run(), Instance toExecute = state.loadInstance(toExecuteId); 
should be inside the try block, since we are holding a lock. 
7) EpaxosState.commitCallbacks could be a multimap. 
8) In Instance.java, successors, noop and fastPathPossible are unused. We can 
also get rid of the Instance.applyRemote() method.
9) PreacceptCallback.ballot need not be an instance variable, as we set 
completed=true after we set it.  
10) PreacceptResponse.missingInstance is not required, as it can be calculated 
on the leader in the PreacceptCallback. 
11) EpaxosState.accept(): we can filter out the skipPlaceholderPredicate when 
we calculate missingInstances in PreacceptCallback.getAcceptDecision().
12) PreacceptCallback.getAcceptDecision(): we don't need to calculate 
missingIds if accept is going to be false in the AcceptDecision. 
13) ParticipantInfo.remoteEndpoints: here we are not doing any isAlive check; 
we just send messages to all remote endpoints. 
14) ParticipantInfo.endpoints will no longer be required once we remove 
Epaxos.getSuccessors().
15) Accept is sent to live local endpoints and to all remote endpoints. In 
AcceptCallback, I think we should only count responses from local endpoints. 
16) When we execute the instance in ExecuteTask, what happens if we crash 
after executing the instance but before recording it? 

 EPaxos
 --

 Key: CASSANDRA-6246
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6246
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Blake Eggleston
Priority: Minor

 One reason we haven't optimized our Paxos implementation with Multi-paxos is 
 that Multi-paxos requires leader election and hence, a period of 
 unavailability when the leader dies.
 EPaxos is a Paxos variant that requires (1) less messages than multi-paxos, 
 (2) is particularly useful across multiple datacenters, and (3) allows any 
 node to act as coordinator: 
 http://sigops.org/sosp/sosp13/papers/p358-moraru.pdf
 However, there is substantial additional complexity involved if we choose to 
 implement it.





[jira] [Commented] (CASSANDRA-8193) Multi-DC parallel snapshot repair

2014-11-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207377#comment-14207377
 ] 

Yuki Morishita commented on CASSANDRA-8193:
---

First of all, thanks for the patch!
I reviewed it based on 2.0, but because the patch adds a new feature, I'd 
rather put this into 2.1+. (So feel free to apply it to 2.0.x yourself after 
review.)

So, some comments:

* If the replication factor is 1 in each DC, this behaves the same as 
ParallelRequestCoordinator; we need to fall back to the current behavior in 
that case.
* It looks like the ParallelRequestCoordinator class can be {{... implements 
IRequestCoordinator<InetAddress>}}.
* DatacenterAwareRequestCoordinator uses AtomicInteger, but a primitive int 
works here.
* nit: put braces on a new line.
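The datacenter-aware scheduling idea can be sketched as one queue of pending endpoints per DC, with the head of each queue in flight at once. A hypothetical illustration (invented names, not the attached patch's code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

class DcParallelSketch {
    // One queue of pending endpoints per datacenter; polling the head of each
    // queue yields the set of merkle-tree requests that may run concurrently,
    // while endpoints within a DC still run one at a time.
    static Map<String, Deque<String>> groupByDc(Map<String, String> endpointToDc) {
        Map<String, Deque<String>> queues = new LinkedHashMap<>();
        endpointToDc.forEach((ep, dc) ->
                queues.computeIfAbsent(dc, d -> new ArrayDeque<>()).add(ep));
        return queues;
    }

    public static void main(String[] args) {
        Map<String, String> eps = new LinkedHashMap<>();
        eps.put("10.0.1.1", "DC1");
        eps.put("10.0.1.2", "DC1");
        eps.put("10.0.2.1", "DC2");
        Map<String, Deque<String>> byDc = groupByDc(eps);
        // first wave: one endpoint from each DC computes its tree in parallel
        System.out.println(byDc.get("DC1").poll() + " " + byDc.get("DC2").poll());
    }
}
```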

 Multi-DC parallel snapshot repair
 -

 Key: CASSANDRA-8193
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8193
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jimmy Mårdell
Assignee: Jimmy Mårdell
Priority: Minor
 Fix For: 2.0.12

 Attachments: cassandra-2.0-8193-1.txt


 The current behaviour of snapshot repair is to let one node at a time 
 calculate a merkle tree, to ensure that only one node at a time is doing 
 the expensive calculation. The drawback is that the merkle tree calculation 
 takes even longer.
 In a multi-DC setup, I think it would make more sense to have one node in 
 each DC calculate the merkle tree at the same time. This would yield a 
 significant improvement when you have many data centers.
 I'm not sure how relevant this is in 2.1, but I don't see us upgrading to 2.1 
 any time soon. Unless there is an obvious drawback that I'm missing, I'd like 
 to implement this in the 2.0 branch.





[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jose Martinez Poblete (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207409#comment-14207409
 ] 

Jose Martinez Poblete commented on CASSANDRA-8295:
--

Customer uses 10k rotational disks on these nodes.
Initially, the Cassandra data was configured as RAID 10, but that will be 
changed to RAID 0.


[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jose Martinez Poblete (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207420#comment-14207420
 ] 

Jose Martinez Poblete commented on CASSANDRA-8295:
--

I'm not sure the disk can get full; they have lots of disk space.
Attaching the df -h output soon.


[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-11 Thread Jose Martinez Poblete (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14207423#comment-14207423
 ] 

Jose Martinez Poblete commented on CASSANDRA-8295:
--

Here is the disk usage on the three nodes:

{noformat}
alln01-ats-cas1 
Filesystem Size Used Avail Use% Mounted on 
/dev/mapper/vg01-lv_root 50G 9.3G 38G 20% / 
tmpfs 95G 0 95G 0% /dev/shm 
/dev/sda1 485M 40M 421M 9% /boot 
/dev/mapper/vg01-lv_home 20G 172M 19G 1% /home 
/dev/mapper/vg01-lv_tmp 20G 173M 19G 1% /tmp 
/dev/mapper/vg01-lv_var 20G 19G 17M 100% /var 
/dev/mapper/vg01-lv_varlog 20G 6.5G 13G 35% /var/log 
/dev/mapper/vg01-lv_vartmp 20G 172M 19G 1% /var/tmp 
/dev/mapper/vgcommit-lvcommit 825G 4.2G 779G 1% /cassandra/commitlog 
/dev/mapper/vgcache-lvcache 825G 290G 493G 37% /cassandra/saved_caches 
/dev/md1 4.1T 415G 3.5T 11% /cassandra/data 
alln01-ats-cas2 
Filesystem Size Used Avail Use% Mounted on 
/dev/mapper/vg01-lv_root 50G 4.6G 43G 10% / 
tmpfs 95G 0 95G 0% /dev/shm 
/dev/sda1 485M 40M 421M 9% /boot 
/dev/mapper/vg01-lv_home 20G 172M 19G 1% /home 
/dev/mapper/vg01-lv_tmp 20G 173M 19G 1% /tmp 
/dev/mapper/vg01-lv_var 20G 736M 18G 4% /var 
/dev/mapper/vg01-lv_varlog 20G 9.8G 9.0G 53% /var/log 
/dev/mapper/vg01-lv_vartmp 20G 172M 19G 1% /var/tmp 
/dev/mapper/vgcommit-lvcommit 825G 5.0G 778G 1% /cassandra/commitlog 
/dev/mapper/vgcache-lvcache 825G 271G 512G 35% /cassandra/saved_caches 
/dev/md1 4.1T 366G 3.5T 10% /cassandra/data 
alln01-ats-cas3 
Filesystem Size Used Avail Use% Mounted on 
/dev/mapper/vg01-lv_root 50G 5.1G 42G 11% / 
tmpfs 95G 0 95G 0% /dev/shm 
/dev/sda1 485M 40M 421M 9% /boot 
/dev/mapper/vg01-lv_home 20G 172M 19G 1% /home 
/dev/mapper/vg01-lv_tmp 20G 198M 19G 2% /tmp 
/dev/mapper/vg01-lv_var 20G 14G 5.6G 71% /var 
/dev/mapper/vg01-lv_varlog 20G 16G 3.4G 82% /var/log 
/dev/mapper/vg01-lv_vartmp 20G 172M 19G 1% /var/tmp 
/dev/mapper/vgcommit-lvcommit 825G 6.8G 776G 1% /cassandra/commitlog 
/dev/mapper/vgcache-lvcache 825G 264G 519G 34% /cassandra/saved_caches 
/dev/md1 4.1T 334G 3.5T 9% /cassandra/data
{noformat}

