[jira] [Commented] (CASSANDRA-6437) Datastax C# driver not able to execute CAS after upgrade 2.0.2 - 2.0.3

2013-12-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838727#comment-13838727
 ] 

Michał Ziemski commented on CASSANDRA-6437:
---

I wasn't able to find a 2.x C# driver.
I've seen a branch named v2 on the project's github site.
Is my understanding correct that as of yet the v2 C# driver has not been 
released?

 Datastax C# driver not able to execute CAS after upgrade 2.0.2 - 2.0.3
 ---

 Key: CASSANDRA-6437
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6437
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Drivers (now out of tree)
 Environment: 4 node CentOS 6.4 x64, Cassandra 2.0.3 (DataStax Community)
Reporter: Michał Ziemski

 The following code:
   var cl = 
 Cluster.Builder().WithConnectionString(ConfigurationManager.ConnectionStrings["Cassandra"].ConnectionString).Build();
   var ses = cl.Connect();
   ses.Execute("INSERT INTO appsrv(id) values ('abc') IF NOT EXISTS", 
 ConsistencyLevel.Quorum);
 This worked fine with Cassandra 2.0.2.
 After upgrading to 2.0.3 I get an error stating that conditional updates are 
 not supported by the protocol version and that I should upgrade to v2.
 I'm not really sure if it's a problem with C* or the DataStax C# driver.
 The error appeared after upgrading C*, so I decided to post it here.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-4476) Support 2ndary index queries with only non-EQ clauses

2013-12-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838743#comment-13838743
 ] 

Sylvain Lebresne commented on CASSANDRA-4476:
-

bq. how much more complicated does CASSANDRA-4511 make this?

Depends on what you mean by this. Under the hood, I could be missing something, 
but a priori I don't think CASSANDRA-4511 adds much complexity, if any. But if 
we want to extend non-EQ clauses to collections, we'd need to come up with a 
syntax to express "where set s has a value greater than 3". But I'd definitely 
advise leaving that to a follow-up ticket, especially because I'm not entirely 
sure this is generally useful.

A priori, this ticket is not really all that hard. All we need to do is, when 
we query the index, support querying a range of index rows instead of a single 
one. After that, the rest of the index code should remain unchanged.

Of course, we will need to modify SelectStatement to let queries with no EQ 
clause pass validation, but that shouldn't be too difficult. As said above, the 
only remaining question is how to select which index to query when you have 
multiple indexed columns in the WHERE clause and some of them have non-EQ 
clauses: how do you estimate which index is likely to be the most selective?  
That being said, more than one indexed column means ALLOW FILTERING, for which 
all bets are off in terms of performance anyway, so for a first version of the 
patch we could go with a very simplistic heuristic (say, prefer the index with 
an EQ clause if there is one and if there is none just pick the first index) 
and leave smarter heuristic for later.
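
As an illustration, here is a minimal sketch of that simplistic heuristic. The types and names are hypothetical stand-ins, not the actual SelectStatement/KeysSearcher code: prefer an index restricted by an EQ clause, otherwise fall back to the first indexed column.
{code}
// Sketch only: IndexExpression/Operator stand in for whatever the real
// query-plumbing provides.
import java.util.List;

class IndexChooser
{
    enum Operator { EQ, GT, GTE, LT, LTE }

    static class IndexExpression
    {
        final String column;
        final Operator op;
        IndexExpression(String column, Operator op) { this.column = column; this.op = op; }
    }

    // Prefer an EQ-restricted index; if there is none, just pick the first indexed column.
    static IndexExpression choose(List<IndexExpression> indexedClauses)
    {
        for (IndexExpression e : indexedClauses)
            if (e.op == Operator.EQ)
                return e;
        return indexedClauses.isEmpty() ? null : indexedClauses.get(0);
    }
}
{code}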


 Support 2ndary index queries with only non-EQ clauses
 -

 Key: CASSANDRA-4476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4476
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1


 Currently, a query that uses 2ndary indexes must have at least one EQ clause 
 (on an indexed column). Given that indexed CFs are local (and use a 
 LocalPartitioner that orders rows by the type of the indexed column), we 
 should extend 2ndary indexes to allow querying indexed columns even when no 
 EQ clause is provided.
 As far as I can tell, the main problem to solve for this is to update 
 KeysSearcher.highestSelectivityPredicate(), i.e. how do we estimate the 
 selectivity of non-EQ clauses? I note however that if we can do that estimate 
 reasonably accurately, this might provide better performance even for index 
 queries that have both EQ and non-EQ clauses, because some non-EQ clauses may 
 have a much better selectivity than EQ ones (say you index both the user 
 country and birth date; for SELECT * FROM users WHERE country = 'US' AND 
 birthdate > 'Jan 2009' AND birthdate < 'July 2009', you'd better use the 
 birthdate index first).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6438) Decide if we want to make user types keyspace scoped

2013-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6438:


Attachment: 6438.txt

Attaching a patch for this. I'll note that because types are now dependent on 
the keyspace, we now have to parse types into unprepared types that need to be 
prepared later on by providing the currently logged-in keyspace. And because of 
type casts, this means we now have to pass the current keyspace to quite a 
bunch of methods. Not a big deal, those are trivial changes, but that's why the 
patch impacts so many files that don't seem initially related.
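
For illustration, a rough sketch of the deferred-preparation idea described above. The class and method names here are hypothetical, not the ones in the attached patch:
{code}
// Illustration only: "RawType" and "UserTypes" are made-up stand-ins for the
// parser-side and schema-side classes.
class RawType
{
    private final String keyspaceOrNull; // explicit "ks.type" cast, or null
    private final String name;

    RawType(String keyspaceOrNull, String name)
    {
        this.keyspaceOrNull = keyspaceOrNull;
        this.name = name;
    }

    // Resolution is deferred until we know the keyspace the client is logged into.
    UserType prepare(String loggedKeyspace)
    {
        String ks = keyspaceOrNull != null ? keyspaceOrNull : loggedKeyspace;
        if (ks == null)
            throw new IllegalStateException("No keyspace specified for user type " + name);
        return UserTypes.lookup(ks, name); // hypothetical schema lookup
    }
}

class UserType { /* resolved, keyspace-scoped type */ }

class UserTypes
{
    static UserType lookup(String keyspace, String name) { return new UserType(); }
}
{code}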


 Decide if we want to make user types keyspace scoped
 

 Key: CASSANDRA-6438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6438
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
 Attachments: 6438.txt


 Currently, user types are declared at the top level. I wonder however if we 
 might not want to make them scoped to a given keyspace. It was not done in 
 the initial patch for simplicity and because I was not sure of the advantages 
 of doing so. However, if we ever want to use user types in system tables, 
 having them scoped by keyspace means we won't have to care about the new type 
 conflicting with another existing type.
 Besides, having user types be part of a keyspace would allow for slightly 
 more fine grained permissions on them. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6418) auto_snapshots are not removable via 'nodetool clearsnapshot'

2013-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6418:


Fix Version/s: (was: 2.0.3)
   2.0.4

 auto_snapshots are not removable via 'nodetool clearsnapshot'
 -

 Key: CASSANDRA-6418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6418
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: auto_snapshot: true
Reporter: J. Ryan Earl
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.4

 Attachments: 6418_cassandra-2.0.patch


 Snapshots of deleted CFs created via the auto_snapshot configuration 
 parameter appear to not be tracked.  The result is that 'nodetool 
 clearsnapshot <keyspace with deleted CFs>' does nothing, and short of 
 manually removing the files from the filesystem, deleted CFs remain 
 indefinitely, taking up space.
 I'm not sure if this is intended, but it seems pretty counter-intuitive.  I 
 haven't found any documentation that indicates auto_snapshots would be 
 ignored by 'nodetool clearsnapshot'.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6419) Setting max_hint_window_in_ms explicitly to null causes problems with JMX view

2013-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6419:


Fix Version/s: (was: 2.0.3)
   2.0.4

 Setting max_hint_window_in_ms explicitly to null causes problems with JMX view
 --

 Key: CASSANDRA-6419
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6419
 Project: Cassandra
  Issue Type: Bug
  Components: Config
Reporter: Nate McCall
Assignee: Nate McCall
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6419-1.2.patch, 6419-2.0.patch


 Setting max_hint_window_in_ms to null in cassandra.yaml makes the 
 StorageProxy MBean inaccessible. 
 Stack trace when trying to view the bean through MX4J:
 {code}
 Exception during http request
 javax.management.RuntimeMBeanException: java.lang.NullPointerException
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
   at 
 mx4j.tools.adaptor.http.MBeanCommandProcessor.createMBeanElement(MBeanCommandProcessor.java:119)
   at 
 mx4j.tools.adaptor.http.MBeanCommandProcessor.executeRequest(MBeanCommandProcessor.java:56)
   at 
 mx4j.tools.adaptor.http.HttpAdaptor$HttpClient.run(HttpAdaptor.java:980)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.cassandra.config.DatabaseDescriptor.getMaxHintWindow(DatabaseDescriptor.java:1161)
   at 
 org.apache.cassandra.service.StorageProxy.getMaxHintWindow(StorageProxy.java:1506)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at 
 com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
   at 
 com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
   ... 4 more
 Exception during http request
 {code}
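
The NPE presumably comes from auto-unboxing the unset Integer in DatabaseDescriptor.getMaxHintWindow(). A null-safe accessor along these lines would avoid it; this is only a sketch (assuming the usual 3-hour default), not the attached fix:
{code}
// Sketch: if the yaml value is explicitly null, fall back to a default instead
// of unboxing the null Integer.
class HintWindowConfig
{
    static final int DEFAULT_MAX_HINT_WINDOW_IN_MS = 3 * 3600 * 1000; // assumed default

    Integer max_hint_window_in_ms; // yaml-mapped field; null when explicitly set to "null"

    int getMaxHintWindow()
    {
        // Auto-unboxing a null Integer here is what produces the NullPointerException.
        return max_hint_window_in_ms == null ? DEFAULT_MAX_HINT_WINDOW_IN_MS : max_hint_window_in_ms;
    }
}
{code}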



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CASSANDRA-5549) Remove Table.switchLock

2013-12-04 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reassigned CASSANDRA-5549:
---

Assignee: Benedict  (was: Vijay)

 Remove Table.switchLock
 ---

 Key: CASSANDRA-5549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5549
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 5549-removed-switchlock.png, 5549-sunnyvale.png


 As discussed in CASSANDRA-5422, Table.switchLock is a bottleneck on the write 
 path.  ReentrantReadWriteLock is not lightweight, even if there is no 
 contention per se between readers and writers of the lock (in Cassandra, 
 memtable updates and switches).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6445) Cassandra 2.0.3 is not starting on RHEL with HotSpot JDK

2013-12-04 Thread Hari (JIRA)
Hari created CASSANDRA-6445:
---

 Summary: Cassandra 2.0.3 is not starting on RHEL with HotSpot JDK
 Key: CASSANDRA-6445
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6445
 Project: Cassandra
  Issue Type: Bug
 Environment: [root@BRANDYBUCKVM2 bin]# java -version
java version 1.7.0_45
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

Reporter: Hari


1. I copied apache-cassandra-2.0.3-bin.tar to my Linux box (RHEL).
2. Did untar, created directories /var/log/cassandra and /var/lib/cassandra.
3. java -version:
[root@BRANDYBUCKVM2 bin]# java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
4. But when I try to start it with bin/cassandra -f, I get the following 
error:
[root@BRANDYBUCKVM2 bin]# ./cassandra -f
xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1024M 
-Xmx1024M -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
numactl: execution of `': No such file or directory

What could be the cause of the error?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5549) Remove Table.switchLock

2013-12-04 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838782#comment-13838782
 ] 

Benedict commented on CASSANDRA-5549:
-

bq. how does the switch of the RW Lock to a kind of CAS operation change these 
semantics?

In forceFlush() we obtain the writeLock, but do not relinquish it until we have 
successfully added to the flushWriters queue. The flushWriters queue length 
also configures how often we should flush, so that once it is full, we are 
effectively out of memory. This is hardly a *precise* mechanism for memory 
control, but it is the one we currently use, and it definitely needs a 
replacement.

bq. IMHO, that might not be good enough since Java's memory overhead is not 
considered. And calculating the object size is not cheap either

I don't see your concern here. We can easily and cheaply calculate the costs: 
we precompute the overheads, and simply apply them on a per-row and per-key 
basis. The overheads are pretty fixed for both. For SnapTreeMap they're 
exactly the same for each key, and with CASSANDRA-6271 they are relatively 
easily computable (or simply countable, in log(32, N) time; probably I will opt 
for computing first, then counting after insertion to get the exact amount 
used). Java's memory overhead is included in any calculation.
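
For illustration, a toy sketch of the constant-overhead accounting described above. The overhead constants are made up for the example, not measured values:
{code}
// Toy sketch: precompute a fixed per-key and per-column overhead once, then
// charge data size plus overhead on every insert.
import java.util.concurrent.atomic.AtomicLong;

class MemtableAllocationTracker
{
    // Precomputed JVM overheads for the container structures (illustrative values).
    static final long PER_KEY_OVERHEAD = 64;
    static final long PER_COLUMN_OVERHEAD = 48;

    private final AtomicLong retained = new AtomicLong();

    void onInsert(long keyBytes, long[] columnValueSizes)
    {
        long size = keyBytes + PER_KEY_OVERHEAD;
        for (long v : columnValueSizes)
            size += v + PER_COLUMN_OVERHEAD;
        retained.addAndGet(size);
    }

    long retainedBytes() { return retained.get(); }
}
{code}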



 Remove Table.switchLock
 ---

 Key: CASSANDRA-5549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5549
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 5549-removed-switchlock.png, 5549-sunnyvale.png


 As discussed in CASSANDRA-5422, Table.switchLock is a bottleneck on the write 
 path.  ReentrantReadWriteLock is not lightweight, even if there is no 
 contention per se between readers and writers of the lock (in Cassandra, 
 memtable updates and switches).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6364) There should be different disk_failure_policies for data and commit volumes or commit volume failure should always cause node exit

2013-12-04 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838786#comment-13838786
 ] 

Benedict commented on CASSANDRA-6364:
-

How far do we want to go with this?

Adding a simple exit on error is very straightforward, but in my experience you 
can have hang-style failures, so we should definitely have a separate thread 
checking the liveness of the CLSegmentManager and CLService. Probably a 
user-configurable not-alive time in the yaml should be used to mark the CL as 
non-responsive if either hasn't heartbeated in that time. We probably don't 
want to die immediately on an error either, but simply stop heartbeating and 
die if the error doesn't recover within some interval, so that anyone 
monitoring the error logs has time to correct the issue (say it's just out of 
space) before the node dies.

The bigger question is, do we want to do anything clever if we don't want to 
die? Should we start draining the mutation stage and just dropping the 
messages? If so, should we attempt to recover if the drive starts responding 
again after draining the mutation stage?
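
For illustration, a rough sketch of the watchdog idea. The class, the yaml option name, and the shutdown call are all hypothetical, not an agreed design:
{code}
// Rough sketch: the commit log "heartbeats" a timestamp after each successful
// operation, and a separate thread kills the node only if the heartbeat stays
// stale longer than a configurable window.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

class CommitLogWatchdog
{
    private final AtomicLong lastHeartbeatMillis = new AtomicLong(System.currentTimeMillis());
    private final long maxStaleMillis; // e.g. a commitlog_liveness_timeout_in_ms yaml option (hypothetical)
    private final ScheduledExecutorService checker = Executors.newSingleThreadScheduledExecutor();

    CommitLogWatchdog(long maxStaleMillis)
    {
        this.maxStaleMillis = maxStaleMillis;
        checker.scheduleAtFixedRate(new Runnable()
        {
            public void run() { check(); }
        }, 1, 1, TimeUnit.SECONDS);
    }

    // Called by the segment manager / CL service after every successful write or sync.
    void heartbeat() { lastHeartbeatMillis.set(System.currentTimeMillis()); }

    private void check()
    {
        long stale = System.currentTimeMillis() - lastHeartbeatMillis.get();
        if (stale > maxStaleMillis)
        {
            System.err.println("Commit log unresponsive for " + stale + "ms, shutting down");
            System.exit(1); // stand-in for whatever orderly shutdown is chosen
        }
    }
}
{code}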



 There should be different disk_failure_policies for data and commit volumes 
 or commit volume failure should always cause node exit
 --

 Key: CASSANDRA-6364
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6364
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: JBOD, single dedicated commit disk
Reporter: J. Ryan Earl
Assignee: Benedict
 Fix For: 2.0.4


 We're doing fault testing on a pre-production Cassandra cluster.  One of the 
 tests was to simulate failure of the commit volume/disk, which in our case 
 is on a dedicated disk.  We expected failure of the commit volume to be 
 handled somehow, but what we found was that no action was taken by Cassandra 
 when the commit volume failed.  We simulated this simply by pulling the 
 physical disk that backed the commit volume, which resulted in filesystem I/O 
 errors on the mount point.
 What then happened was that the Cassandra Heap filled up to the point that it 
 was spending 90% of its time doing garbage collection.  No errors were logged 
 in regards to the failed commit volume.  Gossip on other nodes in the cluster 
 eventually flagged the node as down.  Gossip on the local node showed itself 
 as up, and all other nodes as down.
 The most serious problem was that connections to the coordinator on this node 
 became very slow due to the on-going GC, as I assume uncommitted writes piled 
 up on the JVM heap.  What we believe should have happened is that Cassandra 
 should have caught the I/O error and exited with a useful log message, or 
 otherwise done some sort of useful cleanup.  Otherwise the node goes into a 
 sort of Zombie state, spending most of its time in GC, and thus slowing down 
 any transactions that happen to use the coordinator on said node.
 A limit on in-memory, unflushed writes before refusing requests may also 
 work.  Point being, something should be done to handle the commit volume 
 dying as doing nothing results in affecting the entire cluster.  I should 
 note, we are using: disk_failure_policy: best_effort



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6445) Cassandra 2.0.3 is not starting on RHEL with HotSpot JDK

2013-12-04 Thread Hari (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari updated CASSANDRA-6445:


Priority: Trivial  (was: Major)

 Cassandra 2.0.3 is not starting on RHEL with HotSpot JDK
 

 Key: CASSANDRA-6445
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6445
 Project: Cassandra
  Issue Type: Bug
 Environment: [root@BRANDYBUCKVM2 bin]# java -version
 java version 1.7.0_45
 Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
 Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
Reporter: Hari
Priority: Trivial

 1. I copied apache-cassandra-2.0.3-bin.tar to my linux box (RHEL)
 2. Did untar, created directories in /var/log/cassandra and /var/lib/cassandra
 3. Java -version
 [root@BRANDYBUCKVM2 bin]# java -version
 java version 1.7.0_45
 Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
 Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
 4. But when i try to start it, by /bin/cassandra -f, I am getting the 
 following error, 
 [root@BRANDYBUCKVM2 bin]# ./cassandra -f
 xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
 -XX:ThreadPriorit   yPolicy=42 -Xms1024M 
 -Xmx1024M -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 numactl: execution of `': No such file or directory
 What could be the error cause?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6445) Cassandra 2.0.3 is not starting on RHEL with HotSpot JDK

2013-12-04 Thread Hari (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838795#comment-13838795
 ] 

Hari commented on CASSANDRA-6445:
-

It's resolved; the issue was that JAVA_HOME was not set properly.


 Cassandra 2.0.3 is not starting on RHEL with HotSpot JDK
 

 Key: CASSANDRA-6445
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6445
 Project: Cassandra
  Issue Type: Bug
 Environment: [root@BRANDYBUCKVM2 bin]# java -version
 java version 1.7.0_45
 Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
 Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
Reporter: Hari

 1. I copied apache-cassandra-2.0.3-bin.tar to my linux box (RHEL)
 2. Did untar, created directories in /var/log/cassandra and /var/lib/cassandra
 3. Java -version
 [root@BRANDYBUCKVM2 bin]# java -version
 java version 1.7.0_45
 Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
 Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
 4. But when i try to start it, by /bin/cassandra -f, I am getting the 
 following error, 
 [root@BRANDYBUCKVM2 bin]# ./cassandra -f
 xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
 -XX:ThreadPriorit   yPolicy=42 -Xms1024M 
 -Xmx1024M -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 numactl: execution of `': No such file or directory
 What could be the error cause?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5549) Remove Table.switchLock

2013-12-04 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838796#comment-13838796
 ] 

Vijay commented on CASSANDRA-5549:
--

Well, it is not exactly a constant overhead; you might want to look at 
o.a.c.u.ObjectSizes (CASSANDRA-4860)...

 Remove Table.switchLock
 ---

 Key: CASSANDRA-5549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5549
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 5549-removed-switchlock.png, 5549-sunnyvale.png


 As discussed in CASSANDRA-5422, Table.switchLock is a bottleneck on the write 
 path.  ReentrantReadWriteLock is not lightweight, even if there is no 
 contention per se between readers and writers of the lock (in Cassandra, 
 memtable updates and switches).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5549) Remove Table.switchLock

2013-12-04 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838802#comment-13838802
 ] 

Benedict commented on CASSANDRA-5549:
-

For a given run of the JVM the overhead is constant for each type of object 
allocated, and the objects allocated can be predicted accurately given the 
number of columns we are storing. I've done object size measurement before :-)

I don't see anything in CASSANDRA-4860 that is surprising, but perhaps I've 
missed something specific you're worrying about? In general it's dealing with 
miscalculating the portion of a ByteBuffer we're referencing. This is a 
concern for live bytes, not retained bytes, and my scheme outlined above was 
for retained bytes which is what we care about for memory constraints, but I 
will also be replacing the live bytes calculation, since it will be easy to do 
at the same time. But the same approach works, care is just needed.

 Remove Table.switchLock
 ---

 Key: CASSANDRA-5549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5549
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 5549-removed-switchlock.png, 5549-sunnyvale.png


 As discussed in CASSANDRA-5422, Table.switchLock is a bottleneck on the write 
 path.  ReentrantReadWriteLock is not lightweight, even if there is no 
 contention per se between readers and writers of the lock (in Cassandra, 
 memtable updates and switches).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6446) Faster range tombstones on wide rows

2013-12-04 Thread Oleg Anastasyev (JIRA)
Oleg Anastasyev created CASSANDRA-6446:
--

 Summary: Faster range tombstones on wide rows
 Key: CASSANDRA-6446
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6446
 Project: Cassandra
  Issue Type: Improvement
Reporter: Oleg Anastasyev


We have wide CQL rows (~1M rows in a single partition), and after deleting some 
of them we found inefficiencies in the handling of range tombstones on both the 
write and read paths.

I attached 2 patches here, one for the write path 
(RangeTombstonesWriteOptimization.diff) and another for the read path 
(RangeTombstonesReadOptimization.diff).

On the write path, when you delete some CQL rows by primary key, each deletion 
is represented by a range tombstone. On putting this tombstone into the 
memtable, the original code takes all columns of the partition from the 
memtable and checks DeletionInfo.isDeleted in a brute-force loop to decide 
whether each column should stay in the memtable or was deleted by the new 
tombstone. Needless to say, the more columns you have in the partition, the 
slower deletions become, heating your CPU with brute-force range tombstone 
checks.
The RangeTombstonesWriteOptimization.diff patch, for partitions with more than 
1 column, loops over the tombstones instead and checks the existence of columns 
for each of them. It also copies the whole memtable range tombstone list only 
if there are changes to be made there (the original code copies the range 
tombstone list on every write).

On the read path, the original code scans the whole range tombstone list of a 
partition to match sstable columns to their range tombstones. The 
RangeTombstonesReadOptimization.diff patch scans only the necessary range of 
tombstones, according to the filter used for the read.
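
For illustration, a schematic comparison of the two write-path strategies described above, using a plain sorted map in place of the real memtable structures (simplified, hypothetical types, not the patch itself):
{code}
// Before: visit every column of the partition per new tombstone.
// After: only the columns actually covered by the tombstone's range are visited.
import java.util.Iterator;
import java.util.NavigableMap;

class RangeTombstoneApply
{
    interface Tombstone { String start(); String end(); }

    // O(columns in partition) per tombstone.
    static void bruteForce(NavigableMap<String, Object> columns, Tombstone t)
    {
        Iterator<String> it = columns.keySet().iterator();
        while (it.hasNext())
        {
            String name = it.next();
            if (name.compareTo(t.start()) >= 0 && name.compareTo(t.end()) <= 0)
                it.remove();
        }
    }

    // Proportional to the number of columns the tombstone actually covers.
    static void byRange(NavigableMap<String, Object> columns, Tombstone t)
    {
        columns.subMap(t.start(), true, t.end(), true).clear();
    }
}
{code}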



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6446) Faster range tombstones on wide rows

2013-12-04 Thread Oleg Anastasyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Anastasyev updated CASSANDRA-6446:
---

Attachment: RangeTombstonesWriteOptimization.diff
RangeTombstonesReadOptimization.diff

 Faster range tombstones on wide rows
 

 Key: CASSANDRA-6446
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6446
 Project: Cassandra
  Issue Type: Improvement
Reporter: Oleg Anastasyev
 Attachments: RangeTombstonesReadOptimization.diff, 
 RangeTombstonesWriteOptimization.diff


 Having wide CQL rows (~1M in single partition) and after deleting some of 
 them, we found inefficiencies in handling of range tombstones on both write 
 and read paths.
 I attached 2 patches here, one for write path 
 (RangeTombstonesWriteOptimization.diff) and another on read 
 (RangeTombstonesReadOptimization.diff).
 On write path, when you have some CQL rows deletions by primary key, each of 
 deletion is represented by range tombstone. On put of this tombstone to 
 memtable the original code takes all columns from memtable from partition and 
 checks DeletionInfo.isDeleted by brute for loop to decide, should this column 
 stay in memtable or it was deleted by new tombstone. Needless to say, more 
 columns you have on partition the slower deletions you have heating your CPU 
 with brute range tombstones check. 
 The RangeTombstonesWriteOptimization.diff patch for partitions with more than 
 1 columns loops by tombstones instead and checks existance of columns for 
 each of them. Also it copies of whole memtable range tombstone list only if 
 there are changes to be made there (original code copies range tombstone list 
 on every write).
 On read path, original code scans whole range tombstone list of a partition 
 to match sstable columns to their range tomstones. The 
 RangeTombstonesReadOptimization.diff patch scans only necessary range of 
 tombstones, according to filter used for read.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6446) Faster range tombstones on wide partitions

2013-12-04 Thread Oleg Anastasyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Anastasyev updated CASSANDRA-6446:
---

Summary: Faster range tombstones on wide partitions  (was: Faster range 
tombstones on wide rows)

 Faster range tombstones on wide partitions
 --

 Key: CASSANDRA-6446
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6446
 Project: Cassandra
  Issue Type: Improvement
Reporter: Oleg Anastasyev
 Attachments: RangeTombstonesReadOptimization.diff, 
 RangeTombstonesWriteOptimization.diff


 Having wide CQL rows (~1M in single partition) and after deleting some of 
 them, we found inefficiencies in handling of range tombstones on both write 
 and read paths.
 I attached 2 patches here, one for write path 
 (RangeTombstonesWriteOptimization.diff) and another on read 
 (RangeTombstonesReadOptimization.diff).
 On write path, when you have some CQL rows deletions by primary key, each of 
 deletion is represented by range tombstone. On put of this tombstone to 
 memtable the original code takes all columns from memtable from partition and 
 checks DeletionInfo.isDeleted by brute for loop to decide, should this column 
 stay in memtable or it was deleted by new tombstone. Needless to say, more 
 columns you have on partition the slower deletions you have heating your CPU 
 with brute range tombstones check. 
 The RangeTombstonesWriteOptimization.diff patch for partitions with more than 
 1 columns loops by tombstones instead and checks existance of columns for 
 each of them. Also it copies of whole memtable range tombstone list only if 
 there are changes to be made there (original code copies range tombstone list 
 on every write).
 On read path, original code scans whole range tombstone list of a partition 
 to match sstable columns to their range tomstones. The 
 RangeTombstonesReadOptimization.diff patch scans only necessary range of 
 tombstones, according to filter used for read.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6447) SELECT someColumns FROM table results in AssertionError in AbstractQueryPager.discardFirst

2013-12-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Aymé updated CASSANDRA-6447:
---

Attachment: stacktrace.txt

the stacktrace

 SELECT someColumns FROM table results in AssertionError in 
 AbstractQueryPager.discardFirst
 --

 Key: CASSANDRA-6447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6447
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cluster: single node server (ubuntu)
 Cassandra version: 2.0.3 (server/client)
 Client: Datastax cassandra-driver-core 2.0.0-rc1
Reporter: Julien Aymé
 Attachments: stacktrace.txt


 I have a query which must read all the rows from the table:
 Query: SELECT key, col1, col2, col3 FROM mytable
 Here is the corresponding code (this is using datastax driver):
 {code}
 ResultSet result = session.execute(SELECT key, col1, col2, col3 FROM 
 mytable);
 for (Row row : result) {
  // do some work with row
 }
 {code}
 Messages sent from the client to Cassandra:
 * 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=null, serialCl=ONE])}}
 * 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
 cap=410474], serialCl=ONE])}}
 On the first message, everything is fine, and the server returns 5000 rows.
 On the second message, paging is in progress, and the server fails in 
 AbstractQueryPager.discardFirst: AssertionError (stack trace attached).
 Here is some more info (step by step debugging on reception of 2nd message):
 {code}
 AbstractQueryPager.fetchPage(int):
 * pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
 * containsPreviousLast(rows.get(0)) returns true
 - AbstractQueryPager.discardFirst(ListRow):
 * rows size=5002
 * first=TreeMapBackedSortedColumns[with TreeMap size=1]
 - AbstractQueryPager.discardHead(ColumnFamily, ...):
 * counter = ColumnCounter$GroupByPrefix
 * iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
 * Column c = DeletedColumn
 * counter.count() - c.isLive returns false (c is DeletedColumn)
 * counter.live() = 0
 * iter.hasNext() returns false
 * Math.min(0, toDiscard==1) returns 0
 - AbstractQueryPager.discardFirst(ListRow):
 discarded = 0;
 {code}
 -  assert discarded == 1 *throws AssertionError*



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6447) SELECT someColumns FROM table results in AssertionError in AbstractQueryPager.discardFirst

2013-12-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838838#comment-13838838
 ] 

Julien Aymé commented on CASSANDRA-6447:


I think this issue occurs because the row to discard is not live, but I may be 
wrong here.

 SELECT someColumns FROM table results in AssertionError in 
 AbstractQueryPager.discardFirst
 --

 Key: CASSANDRA-6447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6447
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cluster: single node server (ubuntu)
 Cassandra version: 2.0.3 (server/client)
 Client: Datastax cassandra-driver-core 2.0.0-rc1
Reporter: Julien Aymé
 Attachments: stacktrace.txt


 I have a query which must read all the rows from the table:
 Query: SELECT key, col1, col2, col3 FROM mytable
 Here is the corresponding code (this is using datastax driver):
 {code}
 ResultSet result = session.execute(SELECT key, col1, col2, col3 FROM 
 mytable);
 for (Row row : result) {
  // do some work with row
 }
 {code}
 Messages sent from the client to Cassandra:
 * 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=null, serialCl=ONE])}}
 * 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
 cap=410474], serialCl=ONE])}}
 On the first message, everything is fine, and the server returns 5000 rows.
 On the second message, paging is in progress, and the server fails in 
 AbstractQueryPager.discardFirst: AssertionError (stack trace attached).
 Here is some more info (step by step debugging on reception of 2nd message):
 {code}
 AbstractQueryPager.fetchPage(int):
 * pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
 * containsPreviousLast(rows.get(0)) returns true
 - AbstractQueryPager.discardFirst(ListRow):
 * rows size=5002
 * first=TreeMapBackedSortedColumns[with TreeMap size=1]
 - AbstractQueryPager.discardHead(ColumnFamily, ...):
 * counter = ColumnCounter$GroupByPrefix
 * iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
 * Column c = DeletedColumn
 * counter.count() - c.isLive returns false (c is DeletedColumn)
 * counter.live() = 0
 * iter.hasNext() returns false
 * Math.min(0, toDiscard==1) returns 0
 - AbstractQueryPager.discardFirst(ListRow):
 discarded = 0;
 {code}
 -  assert discarded == 1 *throws AssertionError*



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6447) SELECT someColumns FROM table results in AssertionError in AbstractQueryPager.discardFirst

2013-12-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Aymé updated CASSANDRA-6447:
---

Description: 
I have a query which must read all the rows from the table:
Query: SELECT key, col1, col2, col3 FROM mytable

Here is the corresponding code (this is using the DataStax driver):
{code}
ResultSet result = session.execute("SELECT key, col1, col2, col3 FROM mytable");
for (Row row : result) {
 // do some work with row
}
{code}

Messages sent from the client to Cassandra:
* 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
skip=false, psize=5000, state=null, serialCl=ONE])}}

* 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
cap=410474], serialCl=ONE])}}

On the first message, everything is fine, and the server returns 5000 rows.
On the second message, paging is in progress, and the server fails in 
AbstractQueryPager.discardFirst: AssertionError (stack trace attached).

Here is some more info (step by step debugging on reception of 2nd message):
{code}
AbstractQueryPager.fetchPage(int):
* pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
* containsPreviousLast(rows.get(0)) returns true

-> AbstractQueryPager.discardFirst(List<Row>):
* rows size=5002
* first=TreeMapBackedSortedColumns[with TreeMap size=1]

-> AbstractQueryPager.discardHead(ColumnFamily, ...):
* counter = ColumnCounter$GroupByPrefix
* iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
* Column c = DeletedColumn
* counter.count() -> c.isLive() returns false (c is DeletedColumn)
* counter.live() = 0
* iter.hasNext() returns false
* Math.min(0, toDiscard==1) returns 0

-> AbstractQueryPager.discardFirst(List<Row>):
* discarded = 0;
* count = newCf.getColumnCount() = 0;
{code}
-> assert discarded == 1 *throws AssertionError*



  was:
I have a query which must read all the rows from the table:
Query: SELECT key, col1, col2, col3 FROM mytable

Here is the corresponding code (this is using datastax driver):
{code}
ResultSet result = session.execute(SELECT key, col1, col2, col3 FROM mytable);
for (Row row : result) {
 // do some work with row
}
{code}

Messages sent from the client to Cassandra:
* 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
skip=false, psize=5000, state=null, serialCl=ONE])}}

* 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
cap=410474], serialCl=ONE])}}

On the first message, everything is fine, and the server returns 5000 rows.
On the second message, paging is in progress, and the server fails in 
AbstractQueryPager.discardFirst: AssertionError (stack trace attached).

Here is some more info (step by step debugging on reception of 2nd message):
{code}
AbstractQueryPager.fetchPage(int):
* pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
* containsPreviousLast(rows.get(0)) returns true

- AbstractQueryPager.discardFirst(ListRow):
* rows size=5002
* first=TreeMapBackedSortedColumns[with TreeMap size=1]

- AbstractQueryPager.discardHead(ColumnFamily, ...):
* counter = ColumnCounter$GroupByPrefix
* iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
* Column c = DeletedColumn
* counter.count() - c.isLive returns false (c is DeletedColumn)
* counter.live() = 0
* iter.hasNext() returns false
* Math.min(0, toDiscard==1) returns 0

- AbstractQueryPager.discardFirst(ListRow):
discarded = 0;
{code}
-  assert discarded == 1 *throws AssertionError*




 SELECT someColumns FROM table results in AssertionError in 
 AbstractQueryPager.discardFirst
 --

 Key: CASSANDRA-6447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6447
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cluster: single node server (ubuntu)
 Cassandra version: 2.0.3 (server/client)
 Client: Datastax cassandra-driver-core 2.0.0-rc1
Reporter: Julien Aymé
 Attachments: stacktrace.txt


 I have a query which must read all the rows from the table:
 Query: SELECT key, col1, col2, col3 FROM mytable
 Here is the corresponding code (this is using datastax driver):
 {code}
 ResultSet result = session.execute(SELECT key, col1, col2, col3 FROM 
 mytable);
 for (Row row : result) {
  // do some work with row
 }
 {code}
 Messages sent from the client to Cassandra:
 * 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=null, serialCl=ONE])}}
 * 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
 cap=410474], 

[jira] [Commented] (CASSANDRA-6447) SELECT someColumns FROM table results in AssertionError in AbstractQueryPager.discardFirst

2013-12-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838841#comment-13838841
 ] 

Julien Aymé commented on CASSANDRA-6447:


Also, since newCf.getColumnCount() == 0, the rest of the code is still valid 
(the row will not be included in the returned Rows).
Therefore, I think that the assert statement should be transformed to:
{code}
assert discarded == 1 || discarded == 0;
{code}
Or that the assert statement should be dropped.

 SELECT someColumns FROM table results in AssertionError in 
 AbstractQueryPager.discardFirst
 --

 Key: CASSANDRA-6447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6447
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cluster: single node server (ubuntu)
 Cassandra version: 2.0.3 (server/client)
 Client: Datastax cassandra-driver-core 2.0.0-rc1
Reporter: Julien Aymé
 Attachments: stacktrace.txt


 I have a query which must read all the rows from the table:
 Query: SELECT key, col1, col2, col3 FROM mytable
 Here is the corresponding code (this is using datastax driver):
 {code}
 ResultSet result = session.execute(SELECT key, col1, col2, col3 FROM 
 mytable);
 for (Row row : result) {
  // do some work with row
 }
 {code}
 Messages sent from the client to Cassandra:
 * 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=null, serialCl=ONE])}}
 * 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
 cap=410474], serialCl=ONE])}}
 On the first message, everything is fine, and the server returns 5000 rows.
 On the second message, paging is in progress, and the server fails in 
 AbstractQueryPager.discardFirst: AssertionError (stack trace attached).
 Here is some more info (step by step debugging on reception of 2nd message):
 {code}
 AbstractQueryPager.fetchPage(int):
 * pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
 * containsPreviousLast(rows.get(0)) returns true
 - AbstractQueryPager.discardFirst(ListRow):
 * rows size=5002
 * first=TreeMapBackedSortedColumns[with TreeMap size=1]
 - AbstractQueryPager.discardHead(ColumnFamily, ...):
 * counter = ColumnCounter$GroupByPrefix
 * iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
 * Column c = DeletedColumn
 * counter.count() - c.isLive returns false (c is DeletedColumn)
 * counter.live() = 0
 * iter.hasNext() returns false
 * Math.min(0, toDiscard==1) returns 0
 - AbstractQueryPager.discardFirst(ListRow):
 * discarded = 0;
 * count = newCf.getColumnCount() = 0;
 {code}
 -  assert discarded == 1 *throws AssertionError*



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6447) SELECT someColumns FROM table results in AssertionError in AbstractQueryPager.discardFirst

2013-12-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838841#comment-13838841
 ] 

Julien Aymé edited comment on CASSANDRA-6447 at 12/4/13 12:13 PM:
--

Also, since newCf.getColumnCount() == 0, the rest of the code is still valid 
(the row will not be included in the returned Rows).
Therefore, I think that the assert statement should be transformed to:
{code}
assert discarded == 1 || discarded == 0;
{code}
Or that the assert statement should be dropped, since it was introduced in the 
last commit on AbstractQueryPager: 

See: 
https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=3c9760bdb986f6c2430adfc13c86ecb75c3246ac



was (Author: julien.a...@gmail.com):
Also, since newCf.getColumnCount() == 0, the rest of the code is still valid 
(the row will not be included in the returned Rows).
Therefore, I think that the assert statement should be transformed to:
{code}
assert discarded == 1 || discarded == 0;
{code}
Or that the assert statement should be dropped.

 SELECT someColumns FROM table results in AssertionError in 
 AbstractQueryPager.discardFirst
 --

 Key: CASSANDRA-6447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6447
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cluster: single node server (ubuntu)
 Cassandra version: 2.0.3 (server/client)
 Client: Datastax cassandra-driver-core 2.0.0-rc1
Reporter: Julien Aymé
 Attachments: stacktrace.txt


 I have a query which must read all the rows from the table:
 Query: SELECT key, col1, col2, col3 FROM mytable
 Here is the corresponding code (this is using datastax driver):
 {code}
 ResultSet result = session.execute(SELECT key, col1, col2, col3 FROM 
 mytable);
 for (Row row : result) {
  // do some work with row
 }
 {code}
 Messages sent from the client to Cassandra:
 * 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=null, serialCl=ONE])}}
 * 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
 cap=410474], serialCl=ONE])}}
 On the first message, everything is fine, and the server returns 5000 rows.
 On the second message, paging is in progress, and the server fails in 
 AbstractQueryPager.discardFirst: AssertionError (stack trace attached).
 Here is some more info (step by step debugging on reception of 2nd message):
 {code}
 AbstractQueryPager.fetchPage(int):
 * pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
 * containsPreviousLast(rows.get(0)) returns true
 - AbstractQueryPager.discardFirst(ListRow):
 * rows size=5002
 * first=TreeMapBackedSortedColumns[with TreeMap size=1]
 - AbstractQueryPager.discardHead(ColumnFamily, ...):
 * counter = ColumnCounter$GroupByPrefix
 * iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
 * Column c = DeletedColumn
 * counter.count() - c.isLive returns false (c is DeletedColumn)
 * counter.live() = 0
 * iter.hasNext() returns false
 * Math.min(0, toDiscard==1) returns 0
 - AbstractQueryPager.discardFirst(ListRow):
 * discarded = 0;
 * count = newCf.getColumnCount() = 0;
 {code}
 -  assert discarded == 1 *throws AssertionError*



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6447) SELECT someColumns FROM table results in AssertionError in AbstractQueryPager.discardFirst

2013-12-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838838#comment-13838838
 ] 

Julien Aymé edited comment on CASSANDRA-6447 at 12/4/13 12:30 PM:
--

I think this issue occurs because the row to discard has only one column, and 
this column is not live, but I may be wrong here.


was (Author: julien.a...@gmail.com):
I think this issue occurs because the row to discard is not live, but I may be 
wrong here.

 SELECT someColumns FROM table results in AssertionError in 
 AbstractQueryPager.discardFirst
 --

 Key: CASSANDRA-6447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6447
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cluster: single node server (ubuntu)
 Cassandra version: 2.0.3 (server/client)
 Client: Datastax cassandra-driver-core 2.0.0-rc1
Reporter: Julien Aymé
 Attachments: stacktrace.txt


 I have a query which must read all the rows from the table:
 Query: SELECT key, col1, col2, col3 FROM mytable
 Here is the corresponding code (this is using datastax driver):
 {code}
 ResultSet result = session.execute(SELECT key, col1, col2, col3 FROM 
 mytable);
 for (Row row : result) {
  // do some work with row
 }
 {code}
 Messages sent from the client to Cassandra:
 * 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=null, serialCl=ONE])}}
 * 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
 cap=410474], serialCl=ONE])}}
 On the first message, everything is fine, and the server returns 5000 rows.
 On the second message, paging is in progress, and the server fails in 
 AbstractQueryPager.discardFirst: AssertionError (stack trace attached).
 Here is some more info (step by step debugging on reception of 2nd message):
 {code}
 AbstractQueryPager.fetchPage(int):
 * pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
 * containsPreviousLast(rows.get(0)) returns true
 - AbstractQueryPager.discardFirst(ListRow):
 * rows size=5002
 * first=TreeMapBackedSortedColumns[with TreeMap size=1]
 - AbstractQueryPager.discardHead(ColumnFamily, ...):
 * counter = ColumnCounter$GroupByPrefix
 * iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
 * Column c = DeletedColumn
 * counter.count() - c.isLive returns false (c is DeletedColumn)
 * counter.live() = 0
 * iter.hasNext() returns false
 * Math.min(0, toDiscard==1) returns 0
 - AbstractQueryPager.discardFirst(ListRow):
 * discarded = 0;
 * count = newCf.getColumnCount() = 0;
 {code}
 -  assert discarded == 1 *throws AssertionError*



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2013-12-04 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838926#comment-13838926
 ] 

Lyuben Todorov commented on CASSANDRA-5351:
---

Added checks to LeveledManifest#replace and LeveledManifest#add to ensure 
un-repaired data is kept at L0 ([patch 
here|https://github.com/lyubent/cassandra/commit/70f63e577f531f904997934c53022c1d6a94b9f3]). 
The out-of-order key error is still a problem; the logs show the same error as 
in the comment above. 
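
For illustration, a minimal sketch of the kind of check described. The SSTable interface here is a hypothetical stand-in, not the actual LeveledManifest code:
{code}
// Sketch: unrepaired sstables are always pinned to L0 so repaired and
// unrepaired data don't get mixed by compaction.
class LevelAssignment
{
    interface SSTable
    {
        boolean isRepaired();
        int getSSTableLevel();
    }

    static int targetLevel(SSTable sstable)
    {
        return sstable.isRepaired() ? sstable.getSSTableLevel() : 0;
    }
}
{code}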

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2013-12-04 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13838926#comment-13838926
 ] 

Lyuben Todorov edited comment on CASSANDRA-5351 at 12/4/13 2:40 PM:


Added checks to LeveledManifest#replace and LeveledManifest#add to ensure 
un-repaired data is kept at L0 ([patch 
here|https://github.com/lyubent/cassandra/commit/70f63e577f531f904997934c53022c1d6a94b9f3]). 
The out-of-order key error is still a problem; the logs show the same error as 
in the comment above. One thing that I haven't accounted for so far is tables 
being added straight to levels higher than L0: is it possible for newly flushed 
data to go straight to a level > L0?


was (Author: lyubent):
Added checks to LeveledManifest#replace and LeveledManifest#add to ensure 
un-repaired data is kept at L0 ([patch 
here|https://github.com/lyubent/cassandra/commit/70f63e577f531f904997934c53022c1d6a94b9f3])
 The out-of-order key error is still a problem, logs show same error as above 
comment. 

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6412) Custom creation and merge functions for user-defined column types

2013-12-04 Thread Nicolas Favre-Felix (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839028#comment-13839028
 ] 

Nicolas Favre-Felix commented on CASSANDRA-6412:


Thanks for the feedback, [~slebresne].

I like your suggestion to use user-defined types; this is definitely better 
than the home-made candlestick structure.
I also like that having fixed types with a custom resolver makes it easier to 
write type-safe code with minimal changes to the Cassandra code base.

As you point out, we can use the same technique as for counter deletion. I 
understand that counter deletes are somewhat broken, and that columns with a 
custom resolver would suffer from a similar defect (CASSANDRA-2774).

I don't think that there is an easy solution to this problem; only deleting at 
CL.ALL would prevent old values from being merged with newer ones.

 Custom creation and merge functions for user-defined column types
 -

 Key: CASSANDRA-6412
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6412
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Nicolas Favre-Felix

 This is a proposal for a new feature, mapping custom types to Cassandra 
 columns.
 These types would provide a creation function and a merge function, to be 
 implemented in Java by the user.
 This feature relates to the concept of CRDTs; the proposal is to replicate 
 operations on these types during write, to apply these operations 
 internally during merge (Column.reconcile), and to also merge their values on 
 read.
 The following operations are made possible without reading back any data:
 * MIN or MAX(value) for a column
 * First value for a column
 * Count Distinct
 * HyperLogLog
 * Count-Min
 And any composition of these too, e.g. a Candlestick type includes first, 
 last, min, and max.
 The merge operations exposed by these types need to be commutative; this is 
 the case for many functions used in analytics.
 This feature is incomplete without some integration with CASSANDRA-4775 
 (Counters 2.0) which provides a Read-Modify-Write implementation for 
 distributed counters. Integrating custom creation and merge functions with 
 new counters would let users implement complex CRDTs in Cassandra, including:
 * Averages and related (sum of squares, standard deviation)
 * Graphs
 * Sets
 * Custom registers (even with vector clocks)
 I have a working prototype with implementations for min, max, and Candlestick 
 at https://github.com/acunu/cassandra/tree/crdts - I'd appreciate any 
 feedback on the design and interfaces.
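
For illustration, a toy example of the kind of commutative merge function the proposal describes. The shape of the class is hypothetical, not the prototype's actual API:
{code}
// Merging in either order yields the same candlestick-like summary, which is
// what makes the operation safe to apply during reconcile and on read.
class Candlestick
{
    final double min;
    final double max;

    Candlestick(double min, double max) { this.min = min; this.max = max; }

    static Candlestick of(double value) { return new Candlestick(value, value); }

    // Commutative and associative: merge(a, b) == merge(b, a).
    static Candlestick merge(Candlestick a, Candlestick b)
    {
        return new Candlestick(Math.min(a.min, b.min), Math.max(a.max, b.max));
    }
}
{code}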



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6369) Fix prepared statement size computation

2013-12-04 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-6369:
---

Since Version: 1.2.11

 Fix prepared statement size computation
 ---

 Key: CASSANDRA-6369
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6369
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.12, 2.0.3

 Attachments: 6369.txt


 When computing the size of a CQLStatement to limit the prepared statement cache 
 (CASSANDRA-6107), we overestimate the actual memory used because the 
 statement includes a reference to the table CFMetaData, which measureDeep 
 counts. And as it happens, that reference is big: on a simple test preparing 
 a very trivial select statement, I was able to prepare only 87 statements 
 before some started to be evicted, because each statement was more than 93K 
 big and more than 92K of that was the CFMetaData object. As it happens, there 
 is no reason to account for the CFMetaData object at all, since it's in memory 
 anyway whether or not there are prepared statements.
 Attaching a simple (if not extremely elegant) patch to remove what we don't 
 care about from the computation. Another solution would be to use the 
 MemoryMeter.withTrackerProvider option as we do in Memtable, but in the 
 QueryProcessor case we currently use only one MemoryMeter, not one per CF, so 
 it didn't feel necessarily cleaner. We could create a one-shot MemoryMeter 
 object each time we need to measure a CQLStatement, but that doesn't feel a 
 lot simpler/cleaner either. But if someone feels religious about some other 
 solution, I don't care.
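
For illustration, a rough sketch of the overestimate (assuming jamm's MemoryMeter and its measureDeep method; the subtraction below only approximates the idea and is not what the attached patch does):

{code}
import org.github.jamm.MemoryMeter;

// Illustrative only: measureDeep() follows every reference, so a prepared
// statement that points at its table's CFMetaData gets charged for the whole
// schema object, even though that object is on the heap regardless of how
// many statements are cached.
public class StatementSizeSketch
{
    static long naiveSize(Object preparedStatement, MemoryMeter meter)
    {
        return meter.measureDeep(preparedStatement);   // includes the shared CFMetaData graph
    }

    static long adjustedSize(Object preparedStatement, Object sharedMetadata, MemoryMeter meter)
    {
        // Roughly discount the shared part we don't want billed to every statement.
        return meter.measureDeep(preparedStatement) - meter.measureDeep(sharedMetadata);
    }
}
{code}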



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6307) Switch cqlsh from cassandra-dbapi2 to python-driver

2013-12-04 Thread Andy Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839092#comment-13839092
 ] 

Andy Zhao commented on CASSANDRA-6307:
--

Any estimate when python-driver 1.0 will be ready? Would love to help with the 
development. 

 Switch cqlsh from cassandra-dbapi2 to python-driver
 ---

 Key: CASSANDRA-6307
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6307
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.1


 python-driver is hitting 1.0 soon. cassandra-dbapi2 development has stalled.
 It's time to switch cqlsh to the native protocol and away from cassandra-dbapi2, especially 
 now that
 1. Some CQL3 things are not supported by Thrift transport
 2. cqlsh no longer has to support CQL2 (dropped in 2.0)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6449) Tools error out if they can't make ~/.cassandra

2013-12-04 Thread Jeremiah Jordan (JIRA)
Jeremiah Jordan created CASSANDRA-6449:
--

 Summary: Tools error out if they can't make ~/.cassandra
 Key: CASSANDRA-6449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6449
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan


We shouldn't error out if we can't make the .cassandra folder for the new 
history stuff.

{noformat}
Exception in thread "main" FSWriteError in /usr/share/opscenter-agent/.cassandra
at 
org.apache.cassandra.io.util.FileUtils.createDirectory(FileUtils.java:261)
at 
org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:627)
at org.apache.cassandra.tools.NodeCmd.printHistory(NodeCmd.java:1403)
at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1122)
Caused by: java.io.IOException: Failed to mkdirs 
/usr/share/opscenter-agent/.cassandra
... 4 more
{noformat}
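
For illustration, one possible shape of that fallback (a hedged sketch; tryGetHistoryDir is a made-up name, not the actual change to FBUtilities.getToolsOutputDirectory):

{code}
import java.io.File;

// Illustrative only: if ~/.cassandra cannot be created, warn and skip history
// persistence instead of letting the exception kill the tool.
public final class HistoryDirSketch
{
    static File tryGetHistoryDir()
    {
        File dir = new File(System.getProperty("user.home"), ".cassandra");
        if (dir.exists() || dir.mkdirs())
            return dir;
        System.err.println("WARN: could not create " + dir + "; command history will not be saved");
        return null;   // callers treat null as "no history"
    }
}
{code}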



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6450) sstable2json hangs if keyspace uses authentication

2013-12-04 Thread Josh Dzielak (JIRA)
Josh Dzielak created CASSANDRA-6450:
---

 Summary: sstable2json hangs if keyspace uses authentication
 Key: CASSANDRA-6450
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6450
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12
Reporter: Josh Dzielak
Priority: Minor


Running sstable2json against an authenticated keyspace hangs indefinitely. True 
for other utilities based on SSTableExport as well.

Running sstable2json against other unauthenticated keyspaces in the same 
node/cluster was successful. Running against any CF in the keyspace with 
password authentication on resulted in a hang.

It looks like it gets about as far as:

Table table = Table.open(descriptor.ksname); or
table.getColumnFamilyStore(baseName);

in SSTableExport.java but no farther.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6423) Some histogram metrics of long[] type are unusable with Graphite

2013-12-04 Thread mat gomes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839219#comment-13839219
 ] 

mat gomes commented on CASSANDRA-6423:
--

We are also having the same issue: "invalid line ... received from client <IP>, ignoring" messages are flooding the logs.

Please share resolutions to this issue.
Thanks.


 Some histogram metrics of long[] type are unusable with Graphite
 

 Key: CASSANDRA-6423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6423
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.3, Oracle Linux 6.3, graphite
Reporter: Nikolai Grigoriev
Priority: Minor

 I am not entirely sure if this is a Cassandra issue or the limitation of the 
 graphite reporter agent. But since the metrics in question are created by 
 Cassandra itself I have decided that it may be appropriate to report it here.
 I am using Graphite with Cassandra 2.0.x and I have recently noticed frequent 
 'invalid line' messages in Graphite log. Unfortunately Graphite did not 
 provide enough details so I hacked it a bit to get more verbose error message 
 with the metric name. And here is what I have found:
 {code}
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedRowSizeHistogram.value
  [J@17c26ba5 1385741236received from client 192.168.20.157:3, ignoring
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedColumnCountHistogram.value
  [J@5d2929d2 1385741236received from client 192.168.20.157:3, ignoring
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedRowSizeHistogram.value
  [J@3978c9c6 1385741236received from client 192.168.20.157:3, ignoring
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedColumnCountHistogram.value
  [J@290703a4 1385741236received from client 192.168.20.157:3, ignoring
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedRowSizeHistogram.value
  [J@b801907 1385741236received from client 192.168.20.157:3, ignoring
 {code}
 Then a quick search through the Cassandra code confirmed that there are a number 
 of histograms created as Gauge with long[] data. So, when they are serialized 
 by GraphiteReporter they are just printed as long[].toString(), making these 
 metrics useless.
 I am not sure what would be the best solution to it. I do see some histograms 
 (LiveScannedHistogram etc) that are implemented differently and they are 
 properly sent to Graphite.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: fix MoveTest

2013-12-04 Thread marcuse
Updated Branches:
  refs/heads/cassandra-1.2 f634ac7ea -> d8c4e89b3


fix MoveTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d8c4e89b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d8c4e89b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d8c4e89b

Branch: refs/heads/cassandra-1.2
Commit: d8c4e89b3e85e8cb41a438963845cb10a923a3d6
Parents: f634ac7
Author: Marcus Eriksson marc...@spotify.com
Authored: Wed Dec 4 20:17:30 2013 +0100
Committer: Marcus Eriksson marc...@spotify.com
Committed: Wed Dec 4 20:17:30 2013 +0100

--
 test/unit/org/apache/cassandra/service/MoveTest.java | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d8c4e89b/test/unit/org/apache/cassandra/service/MoveTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/MoveTest.java 
b/test/unit/org/apache/cassandra/service/MoveTest.java
index e30bbde..07a9590 100644
--- a/test/unit/org/apache/cassandra/service/MoveTest.java
+++ b/test/unit/org/apache/cassandra/service/MoveTest.java
@@ -109,6 +109,7 @@ public class MoveTest
 
 // Third node leaves
 ss.onChange(hosts.get(MOVING_NODE), ApplicationState.STATUS, 
valueFactory.moving(newToken));
+PendingRangeCalculatorService.instance.blockUntilFinished();
 
 assertTrue(tmd.isMoving(hosts.get(MOVING_NODE)));
 
@@ -197,6 +198,7 @@ public class MoveTest
 ss.onChange(boot2,
 ApplicationState.STATUS,
 
valueFactory.bootstrapping(Collections.<Token>singleton(keyTokens.get(7))));
+PendingRangeCalculatorService.instance.blockUntilFinished();
 
 // don't require test update every time a new keyspace is added to 
test/conf/cassandra.yaml
 Map<String, AbstractReplicationStrategy> tableStrategyMap = new HashMap<String, AbstractReplicationStrategy>();



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-04 Thread marcuse
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4d44724
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4d44724
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4d44724

Branch: refs/heads/trunk
Commit: e4d447240a4b68a56596445ac5e4f4fbbe0c50af
Parents: b34d43f 32dbe58
Author: Marcus Eriksson marc...@spotify.com
Authored: Wed Dec 4 20:17:45 2013 +0100
Committer: Marcus Eriksson marc...@spotify.com
Committed: Wed Dec 4 20:17:45 2013 +0100

--
 test/unit/org/apache/cassandra/service/MoveTest.java | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4d44724/test/unit/org/apache/cassandra/service/MoveTest.java
--



[1/2] git commit: fix MoveTest

2013-12-04 Thread marcuse
Updated Branches:
  refs/heads/cassandra-2.0 1334f94e4 -> 32dbe5825


fix MoveTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d8c4e89b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d8c4e89b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d8c4e89b

Branch: refs/heads/cassandra-2.0
Commit: d8c4e89b3e85e8cb41a438963845cb10a923a3d6
Parents: f634ac7
Author: Marcus Eriksson marc...@spotify.com
Authored: Wed Dec 4 20:17:30 2013 +0100
Committer: Marcus Eriksson marc...@spotify.com
Committed: Wed Dec 4 20:17:30 2013 +0100

--
 test/unit/org/apache/cassandra/service/MoveTest.java | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d8c4e89b/test/unit/org/apache/cassandra/service/MoveTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/MoveTest.java 
b/test/unit/org/apache/cassandra/service/MoveTest.java
index e30bbde..07a9590 100644
--- a/test/unit/org/apache/cassandra/service/MoveTest.java
+++ b/test/unit/org/apache/cassandra/service/MoveTest.java
@@ -109,6 +109,7 @@ public class MoveTest
 
 // Third node leaves
 ss.onChange(hosts.get(MOVING_NODE), ApplicationState.STATUS, 
valueFactory.moving(newToken));
+PendingRangeCalculatorService.instance.blockUntilFinished();
 
 assertTrue(tmd.isMoving(hosts.get(MOVING_NODE)));
 
@@ -197,6 +198,7 @@ public class MoveTest
 ss.onChange(boot2,
 ApplicationState.STATUS,
 
valueFactory.bootstrapping(Collections.<Token>singleton(keyTokens.get(7))));
+PendingRangeCalculatorService.instance.blockUntilFinished();
 
 // don't require test update every time a new keyspace is added to 
test/conf/cassandra.yaml
 Map<String, AbstractReplicationStrategy> tableStrategyMap = new HashMap<String, AbstractReplicationStrategy>();



[1/3] git commit: fix MoveTest

2013-12-04 Thread marcuse
Updated Branches:
  refs/heads/trunk b34d43f97 -> e4d447240


fix MoveTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d8c4e89b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d8c4e89b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d8c4e89b

Branch: refs/heads/trunk
Commit: d8c4e89b3e85e8cb41a438963845cb10a923a3d6
Parents: f634ac7
Author: Marcus Eriksson marc...@spotify.com
Authored: Wed Dec 4 20:17:30 2013 +0100
Committer: Marcus Eriksson marc...@spotify.com
Committed: Wed Dec 4 20:17:30 2013 +0100

--
 test/unit/org/apache/cassandra/service/MoveTest.java | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d8c4e89b/test/unit/org/apache/cassandra/service/MoveTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/MoveTest.java 
b/test/unit/org/apache/cassandra/service/MoveTest.java
index e30bbde..07a9590 100644
--- a/test/unit/org/apache/cassandra/service/MoveTest.java
+++ b/test/unit/org/apache/cassandra/service/MoveTest.java
@@ -109,6 +109,7 @@ public class MoveTest
 
 // Third node leaves
 ss.onChange(hosts.get(MOVING_NODE), ApplicationState.STATUS, 
valueFactory.moving(newToken));
+PendingRangeCalculatorService.instance.blockUntilFinished();
 
 assertTrue(tmd.isMoving(hosts.get(MOVING_NODE)));
 
@@ -197,6 +198,7 @@ public class MoveTest
 ss.onChange(boot2,
 ApplicationState.STATUS,
 
valueFactory.bootstrapping(Collections.<Token>singleton(keyTokens.get(7))));
+PendingRangeCalculatorService.instance.blockUntilFinished();
 
 // don't require test update every time a new keyspace is added to 
test/conf/cassandra.yaml
 Map<String, AbstractReplicationStrategy> tableStrategyMap = new HashMap<String, AbstractReplicationStrategy>();



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-04 Thread marcuse
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32dbe582
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32dbe582
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32dbe582

Branch: refs/heads/cassandra-2.0
Commit: 32dbe58254115b8a92a97e95566bb4374bb9c051
Parents: 1334f94 d8c4e89
Author: Marcus Eriksson marc...@spotify.com
Authored: Wed Dec 4 20:17:38 2013 +0100
Committer: Marcus Eriksson marc...@spotify.com
Committed: Wed Dec 4 20:17:38 2013 +0100

--
 test/unit/org/apache/cassandra/service/MoveTest.java | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32dbe582/test/unit/org/apache/cassandra/service/MoveTest.java
--
diff --cc test/unit/org/apache/cassandra/service/MoveTest.java
index f239671,07a9590..6ecd500
--- a/test/unit/org/apache/cassandra/service/MoveTest.java
+++ b/test/unit/org/apache/cassandra/service/MoveTest.java
@@@ -197,12 -198,13 +198,13 @@@ public class MoveTes
  ss.onChange(boot2,
  ApplicationState.STATUS,
  
valueFactory.bootstrapping(Collections.<Token>singleton(keyTokens.get(7))));
+ PendingRangeCalculatorService.instance.blockUntilFinished();
  
  // don't require test update every time a new keyspace is added to 
test/conf/cassandra.yaml
 -Map<String, AbstractReplicationStrategy> tableStrategyMap = new HashMap<String, AbstractReplicationStrategy>();
 +Map<String, AbstractReplicationStrategy> keyspaceStrategyMap = new HashMap<String, AbstractReplicationStrategy>();
  for (int i = 1; i <= 4; i++)
  {
 -tableStrategyMap.put("Keyspace" + i, getStrategy("Keyspace" + i, tmd));
 +keyspaceStrategyMap.put("Keyspace" + i, getStrategy("Keyspace" + i, tmd));
  }
  
 /**



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-04 Thread marcuse
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32dbe582
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32dbe582
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32dbe582

Branch: refs/heads/trunk
Commit: 32dbe58254115b8a92a97e95566bb4374bb9c051
Parents: 1334f94 d8c4e89
Author: Marcus Eriksson marc...@spotify.com
Authored: Wed Dec 4 20:17:38 2013 +0100
Committer: Marcus Eriksson marc...@spotify.com
Committed: Wed Dec 4 20:17:38 2013 +0100

--
 test/unit/org/apache/cassandra/service/MoveTest.java | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32dbe582/test/unit/org/apache/cassandra/service/MoveTest.java
--
diff --cc test/unit/org/apache/cassandra/service/MoveTest.java
index f239671,07a9590..6ecd500
--- a/test/unit/org/apache/cassandra/service/MoveTest.java
+++ b/test/unit/org/apache/cassandra/service/MoveTest.java
@@@ -197,12 -198,13 +198,13 @@@ public class MoveTes
  ss.onChange(boot2,
  ApplicationState.STATUS,
  
valueFactory.bootstrapping(Collections.<Token>singleton(keyTokens.get(7))));
+ PendingRangeCalculatorService.instance.blockUntilFinished();
  
  // don't require test update every time a new keyspace is added to 
test/conf/cassandra.yaml
 -Map<String, AbstractReplicationStrategy> tableStrategyMap = new HashMap<String, AbstractReplicationStrategy>();
 +Map<String, AbstractReplicationStrategy> keyspaceStrategyMap = new HashMap<String, AbstractReplicationStrategy>();
  for (int i = 1; i <= 4; i++)
  {
 -tableStrategyMap.put("Keyspace" + i, getStrategy("Keyspace" + i, tmd));
 +keyspaceStrategyMap.put("Keyspace" + i, getStrategy("Keyspace" + i, tmd));
  }
  
 /**



[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool

2013-12-04 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839296#comment-13839296
 ] 

Lyuben Todorov commented on CASSANDRA-6421:
---

[~cscetbon] Maybe I'm missing something, but I can't get the autocomplete to kick 
in. Is my expectation of the below incorrect?
{code}
./nodetool cfhisto
# I was expecting the above to be autocompleted to the below when I press [TAB] 
 
./nodetool cfhistograms
{code}

When I try running *./etc/bash_completion.d/nodetool* I get the below error on 
OSX and Linux Ubuntu (didn't test other distros)
{code}
./etc/bash_completion.d/nodetool: line 1: have: command not found
{code}

Are any steps necessary to activate the autocomplete, or should it just work 
with nodetool? 

 Add bash completion to nodetool
 ---

 Key: CASSANDRA-6421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6421
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Cyril Scetbon
Assignee: Cyril Scetbon
Priority: Trivial
 Fix For: 2.0.4


 You can find the patch from my commit here :
 https://github.com/cscetbon/cassandra/commit/07a10b99778f14362ac05c70269c108870555bf3.patch
 it uses cqlsh to get keyspaces and namespaces and could use an environment 
 variable (not implemented) to know which cqlsh to use if authentication is 
 needed. But I think that's really a good start :)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6307) Switch cqlsh from cassandra-dbapi2 to python-driver

2013-12-04 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839343#comment-13839343
 ] 

Jeremiah Jordan commented on CASSANDRA-6307:


[~andy888] Good places to check on the Python Driver:
https://groups.google.com/a/lists.datastax.com/forum/#!forum/python-driver-user
https://github.com/datastax/python-driver
https://datastax-oss.atlassian.net/browse/PYTHON

 Switch cqlsh from cassandra-dbapi2 to python-driver
 ---

 Key: CASSANDRA-6307
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6307
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.1


 python-driver is hitting 1.0 soon. cassandra-dbapi2 development has stalled.
 It's time to switch cqlsh to the native protocol and away from cassandra-dbapi2, especially 
 now that
 1. Some CQL3 things are not supported by Thrift transport
 2. cqlsh no longer has to support CQL2 (dropped in 2.0)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6451) Allow Cassandra-Stress to Set Compaction Strategy Options

2013-12-04 Thread Russell Alexander Spitzer (JIRA)
Russell Alexander Spitzer created CASSANDRA-6451:


 Summary: Allow Cassandra-Stress to Set Compaction Strategy Options
 Key: CASSANDRA-6451
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6451
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Russell Alexander Spitzer
Priority: Minor


I was just running some tests with Cassandra-Stress and discovered that I was 
unable to set the compaction_strategy_options I needed. I've made a small patch 
to add yet another parameter to stress allowing the user to set 
strategy_options.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CASSANDRA-6451) Allow Cassandra-Stress to Set Compaction Strategy Options

2013-12-04 Thread Russell Alexander Spitzer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Alexander Spitzer reassigned CASSANDRA-6451:


Assignee: Russell Alexander Spitzer

 Allow Cassandra-Stress to Set Compaction Strategy Options
 -

 Key: CASSANDRA-6451
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6451
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Russell Alexander Spitzer
Assignee: Russell Alexander Spitzer
Priority: Minor

 I was just running some tests with Cassandra-Stress and discovered that I was 
 unable to set the compaction_strategy_options I needed. I've made a small 
 patch to add yet another parameter to stress allowing the user to set 
 strategy_options.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6451) Allow Cassandra-Stress to Set Compaction Strategy Options

2013-12-04 Thread Russell Alexander Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839360#comment-13839360
 ] 

Russell Alexander Spitzer commented on CASSANDRA-6451:
--

I understand this is old code and low priority, but if anyone else happens to 
need this patch it is available ^.

 Allow Cassandra-Stress to Set Compaction Strategy Options
 -

 Key: CASSANDRA-6451
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6451
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Russell Alexander Spitzer
Assignee: Russell Alexander Spitzer
Priority: Minor
 Attachments: trunk-6451.txt


 I was just running some tests with Cassandra-Stress and discovered that I was 
 unable to set the compaction_strategy_options I needed. I've made a small 
 patch to add yet another parameter to stress allowing the user to set 
 strategy_options.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6451) Allow Cassandra-Stress to Set Compaction Strategy Options

2013-12-04 Thread Russell Alexander Spitzer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Alexander Spitzer updated CASSANDRA-6451:
-

Attachment: trunk-6451.txt

Patch to add new -z option for Compaction_Strategy_Options

 Allow Cassandra-Stress to Set Compaction Strategy Options
 -

 Key: CASSANDRA-6451
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6451
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Russell Alexander Spitzer
Assignee: Russell Alexander Spitzer
Priority: Minor
 Attachments: trunk-6451.txt


 I was just running some tests with Cassandra-Stress and discovered that I was 
 unable to set the compaction_strategy_options I needed. I've made a small 
 patch to add yet another parameter to stress allowing the user to set 
 strategy_options.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6451) Allow Cassandra-Stress to Set Compaction Strategy Options

2013-12-04 Thread Russell Alexander Spitzer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Alexander Spitzer updated CASSANDRA-6451:
-

Description: 
I was just running some tests with Cassandra-Stress and discovered that I was 
unable to set the compaction_strategy_options I needed. I've made a small patch 
to add yet another parameter to stress allowing the user to set 
strategy_options.

Usage like so:
./cassandra-stress -Z MyStrat -z option1=10,option2=5

  was:I was just running some tests with Cassandra-Stress and discovered that I 
was unable to set the compaction_strategy_options I needed. I've made a small 
patch to add yet another parameter to stress allowing the user to set 
strategy_options.


 Allow Cassandra-Stress to Set Compaction Strategy Options
 -

 Key: CASSANDRA-6451
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6451
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Russell Alexander Spitzer
Assignee: Russell Alexander Spitzer
Priority: Minor
 Attachments: trunk-6451.txt


 I was just running some tests with Cassandra-Stress and discovered that I was 
 unable to set the compaction_strategy_options I needed. I've made a small 
 patch to add yet another parameter to stress allowing the user to set 
 strategy_options.
 Usage like so:
 ./cassandra-stress -Z MyStrat -z option1=10,option2=5



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6364) There should be different disk_failure_policies for data and commit volumes or commit volume failure should always cause node exit

2013-12-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839383#comment-13839383
 ] 

Jonathan Ellis commented on CASSANDRA-6364:
---

Just make it die on IOError like the existing code.  For now people can deal 
with hangs instead of erroring manually.
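
For illustration, a minimal sketch of that die-on-IOError behaviour (handleCommitError is a hypothetical name, not the actual commit log code path):

{code}
import java.io.IOError;
import java.io.IOException;

// Illustrative only: an unwritable commit log volume should take the node down
// promptly rather than let unflushed writes pile up on the heap.
final class CommitFailurePolicySketch
{
    static void handleCommitError(Throwable t)
    {
        if (t instanceof IOError || t instanceof IOException)
        {
            System.err.println("Commit log volume failed, exiting: " + t);
            System.exit(3);   // die instead of limping along in a GC death spiral
        }
        throw new RuntimeException(t);
    }
}
{code}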

 There should be different disk_failure_policies for data and commit volumes 
 or commit volume failure should always cause node exit
 --

 Key: CASSANDRA-6364
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6364
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: JBOD, single dedicated commit disk
Reporter: J. Ryan Earl
Assignee: Benedict
 Fix For: 2.0.4


 We're doing fault testing on a pre-production Cassandra cluster.  One of the 
 tests was to simulate failure of the commit volume/disk, which in our case 
 is on a dedicated disk.  We expected failure of the commit volume to be 
 handled somehow, but what we found was that no action was taken by Cassandra 
 when the commit volume failed.  We simulated this simply by pulling the 
 physical disk that backed the commit volume, which resulted in filesystem I/O 
 errors on the mount point.
 What then happened was that the Cassandra Heap filled up to the point that it 
 was spending 90% of its time doing garbage collection.  No errors were logged 
 in regards to the failed commit volume.  Gossip on other nodes in the cluster 
 eventually flagged the node as down.  Gossip on the local node showed itself 
 as up, and all other nodes as down.
 The most serious problem was that connections to the coordinator on this node 
 became very slow due to the on-going GC, as I assume uncommitted writes piled 
 up on the JVM heap.  What we believe should have happened is that Cassandra 
 should have caught the I/O error and exited with a useful log message, or 
 otherwise done some sort of useful cleanup.  Otherwise the node goes into a 
 sort of Zombie state, spending most of its time in GC, and thus slowing down 
 any transactions that happen to use the coordinator on said node.
 A limit on in-memory, unflushed writes before refusing requests may also 
 work.  Point being, something should be done to handle the commit volume 
 dying as doing nothing results in affecting the entire cluster.  I should 
 note, we are using: disk_failure_policy: best_effort



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6446) Faster range tombstones on wide partitions

2013-12-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6446:
--

 Reviewer: Sylvain Lebresne
  Component/s: Core
Fix Version/s: 2.0.4
 Assignee: Oleg Anastasyev

 Faster range tombstones on wide partitions
 --

 Key: CASSANDRA-6446
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6446
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Oleg Anastasyev
Assignee: Oleg Anastasyev
 Fix For: 2.0.4

 Attachments: RangeTombstonesReadOptimization.diff, 
 RangeTombstonesWriteOptimization.diff


 Having wide CQL rows (~1M in a single partition) and after deleting some of 
 them, we found inefficiencies in the handling of range tombstones on both the 
 write and read paths.
 I attached 2 patches here, one for the write path 
 (RangeTombstonesWriteOptimization.diff) and another for the read path 
 (RangeTombstonesReadOptimization.diff).
 On the write path, when you have some CQL row deletions by primary key, each 
 deletion is represented by a range tombstone. On put of this tombstone into the 
 memtable, the original code takes all columns of the partition from the memtable 
 and checks DeletionInfo.isDeleted in a brute-force loop to decide whether each 
 column should stay in the memtable or was deleted by the new tombstone. Needless 
 to say, the more columns you have in a partition, the slower deletions become, 
 heating your CPU with brute-force range tombstone checks. 
 The RangeTombstonesWriteOptimization.diff patch, for partitions with more than 
 1 column, loops over the tombstones instead and checks the existence of columns 
 for each of them. It also copies the whole memtable range tombstone list only if 
 there are changes to be made there (the original code copies the range tombstone 
 list on every write).
 On the read path, the original code scans the whole range tombstone list of a 
 partition to match sstable columns to their range tombstones. The 
 RangeTombstonesReadOptimization.diff patch scans only the necessary range of 
 tombstones, according to the filter used for the read.
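
For illustration, a simplified pseudo-Java sketch of the write-path idea (RT and Col are stand-in types, not Cassandra's real RangeTombstone and Column classes):

{code}
import java.nio.ByteBuffer;
import java.util.Iterator;
import java.util.NavigableMap;

// Stand-in types for the sketch only.
final class RT  { ByteBuffer min, max; long timestamp; }
final class Col { final long timestamp; Col(long ts) { timestamp = ts; } }

final class RangeTombstoneWriteSketch
{
    // Instead of testing every live column against DeletionInfo, walk only the
    // slice each newly added tombstone covers.
    static void apply(NavigableMap<ByteBuffer, Col> columns, Iterable<RT> newTombstones)
    {
        for (RT rt : newTombstones)
        {
            Iterator<Col> it = columns.subMap(rt.min, true, rt.max, true).values().iterator();
            while (it.hasNext())
                if (it.next().timestamp <= rt.timestamp)
                    it.remove();   // column is shadowed by the new range tombstone
        }
    }
}
{code}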



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839413#comment-13839413
 ] 

Jonathan Ellis commented on CASSANDRA-6413:
---

Hmm...  Maybe the 3762 line was supposed to be

{code}
if (file.isFile() && !file.getName().endsWith(CURRENT_VERSION + ".db"))
{code}

to clean out obsolete cache files?

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
 Fix For: 1.2.13, 2.0.4

 Attachments: CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6413:
--

 Priority: Minor  (was: Major)
Fix Version/s: 2.0.4
   1.2.13

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5201) Cassandra/Hadoop does not support current Hadoop releases

2013-12-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839417#comment-13839417
 ] 

Jonathan Ellis commented on CASSANDRA-5201:
---

WDYT [~dbrosius]?

 Cassandra/Hadoop does not support current Hadoop releases
 -

 Key: CASSANDRA-5201
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5201
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.0
Reporter: Brian Jeltema
Assignee: Dave Brosius
 Attachments: 5201_a.txt, hadoopCompat.patch


 Using Hadoop 0.22.0 with Cassandra results in the stack trace below.
 It appears that version 0.21+ changed org.apache.hadoop.mapreduce.JobContext
 from a class to an interface.
  Exception in thread "main" java.lang.IncompatibleClassChangeError: Found 
 interface org.apache.hadoop.mapreduce.JobContext, but class was expected
   at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:103)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:445)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:462)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:357)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1045)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1042)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1042)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1062)
   at MyHadoopApp.run(MyHadoopApp.java:163)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
   at MyHadoopApp.main(MyHadoopApp.java:82)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:192)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6423) Some histogram metrics of long[] type are unusable with Graphite

2013-12-04 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839449#comment-13839449
 ] 

Mikhail Stepura commented on CASSANDRA-6423:


I wonder if it's possible to convert those {{Gauge<long[]>}} to {{Histogram}}, 
i.e. populate the metric's {{Histogram}} from the {{EstimatedHistogram}}
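
For illustration, a rough sketch of that conversion (assuming the metrics library's Histogram and its update(long) method; this ignores reservoir/decay semantics and double counting across reporting intervals, which a real fix would have to handle):

{code}
import com.yammer.metrics.core.Histogram;

// Illustrative only: replay EstimatedHistogram buckets into a metrics Histogram
// so reporters such as GraphiteReporter emit scalar percentiles instead of
// long[].toString().
public final class HistogramBridgeSketch
{
    public static void replayBuckets(long[] bucketOffsets, long[] bucketCounts, Histogram histogram)
    {
        for (int i = 0; i < bucketCounts.length && i < bucketOffsets.length; i++)
            for (long n = 0; n < bucketCounts[i]; n++)
                histogram.update(bucketOffsets[i]);   // record the bucket's upper bound once per hit
    }
}
{code}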

 Some histogram metrics of long[] type are unusable with Graphite
 

 Key: CASSANDRA-6423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6423
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.3, Oracle Linux 6.3, graphite
Reporter: Nikolai Grigoriev
Priority: Minor

 I am not entirely sure if this is a Cassandra issue or the limitation of the 
 graphite reporter agent. But since the metrics in question are created by 
 Cassandra itself I have decided that it may be appropriate to report it here.
 I am using Graphite with Cassandra 2.0.x and I have recently noticed frequent 
 'invalid line' messages in Graphite log. Unfortunately Graphite did not 
 provide enough details so I hacked it a bit to get more verbose error message 
 with the metric name. And here is what I have found:
 {code}
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedRowSizeHistogram.value
  [J@17c26ba5 1385741236received from client 192.168.20.157:3, ignoring
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedColumnCountHistogram.value
  [J@5d2929d2 1385741236received from client 192.168.20.157:3, ignoring
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedRowSizeHistogram.value
  [J@3978c9c6 1385741236received from client 192.168.20.157:3, ignoring
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedColumnCountHistogram.value
  [J@290703a4 1385741236received from client 192.168.20.157:3, ignoring
 29/11/2013 16:07:16 :: invalid line 
 .org.apache.cassandra.metrics.ColumnFamily.mykeyspace.some_cf.EstimatedRowSizeHistogram.value
  [J@b801907 1385741236received from client 192.168.20.157:3, ignoring
 {code}
 Then a quick search through the Cassandra code confirmed that there are a number 
 of histograms created as Gauge with long[] data. So, when they are serialized 
 by GraphiteReporter they are just printed as long[].toString(), making these 
 metrics useless.
 I am not sure what would be the best solution to it. I do see some histograms 
 (LiveScannedHistogram etc) that are implemented differently and they are 
 properly sent to Graphite.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6435) nodetool outputs xss and jamm errors in 1.2.12

2013-12-04 Thread Capn Crunch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839452#comment-13839452
 ] 

Capn Crunch commented on CASSANDRA-6435:


Duplicated in 2.0.3 as well.

 nodetool outputs xss and jamm errors in 1.2.12
 --

 Key: CASSANDRA-6435
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6435
 Project: Cassandra
  Issue Type: Bug
Reporter: Karl Mueller
Assignee: Brandon Williams
Priority: Minor

 Since 1.2.12, just running nodetool is producing this output. Probably this 
 is related to CASSANDRA-6273.
 It's unclear to me whether jamm is actually not being loaded, but clearly 
 nodetool should not be producing this output, which likely comes from 
 cassandra-env.sh.
 [cassandra@dev-cass00 cassandra]$ /data2/cassandra/bin/nodetool ring
 xss =  -ea -javaagent:/data2/cassandra/bin/../lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms14G -Xmx14G -Xmn1G 
 -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 Note: Ownership information does not include topology; for complete 
 information, specify a keyspace
 Datacenter: datacenter1
 ==
 Address  RackStatus State   LoadOwns
 Token
 
 170141183460469231731687303715884105727
 10.93.15.10  rack1   Up Normal  123.82 GB   20.00%  
 34028236692093846346337460743176821145
 10.93.15.11  rack1   Up Normal  124 GB  20.00%  
 68056473384187692692674921486353642290
 10.93.15.12  rack1   Up Normal  123.97 GB   20.00%  
 102084710076281539039012382229530463436
 10.93.15.13  rack1   Up Normal  124.03 GB   20.00%  
 136112946768375385385349842972707284581
 10.93.15.14  rack1   Up Normal  123.93 GB   20.00%  
 170141183460469231731687303715884105727
 ERROR 16:20:01,408 Unable to initialize MemoryMeter (jamm not specified as 
 javaagent).  This means Cassandra will be unable to measure object sizes 
 accurately and may consequently OOM.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6435) nodetool outputs xss and jamm errors in 1.2.12

2013-12-04 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6435:
---

Fix Version/s: 2.0.4
   1.2.13

 nodetool outputs xss and jamm errors in 1.2.12
 --

 Key: CASSANDRA-6435
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6435
 Project: Cassandra
  Issue Type: Bug
Reporter: Karl Mueller
Assignee: Brandon Williams
Priority: Minor
 Fix For: 1.2.13, 2.0.4


 Since 1.2.12, just running nodetool is producing this output. Probably this 
 is related to CASSANDRA-6273.
 It's unclear to me whether jamm is actually not being loaded, but clearly 
 nodetool should not be producing this output, which likely comes from 
 cassandra-env.sh.
 [cassandra@dev-cass00 cassandra]$ /data2/cassandra/bin/nodetool ring
 xss =  -ea -javaagent:/data2/cassandra/bin/../lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms14G -Xmx14G -Xmn1G 
 -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 Note: Ownership information does not include topology; for complete 
 information, specify a keyspace
 Datacenter: datacenter1
 ==
 Address  RackStatus State   LoadOwns
 Token
 
 170141183460469231731687303715884105727
 10.93.15.10  rack1   Up Normal  123.82 GB   20.00%  
 34028236692093846346337460743176821145
 10.93.15.11  rack1   Up Normal  124 GB  20.00%  
 68056473384187692692674921486353642290
 10.93.15.12  rack1   Up Normal  123.97 GB   20.00%  
 102084710076281539039012382229530463436
 10.93.15.13  rack1   Up Normal  124.03 GB   20.00%  
 136112946768375385385349842972707284581
 10.93.15.14  rack1   Up Normal  123.93 GB   20.00%  
 170141183460469231731687303715884105727
 ERROR 16:20:01,408 Unable to initialize MemoryMeter (jamm not specified as 
 javaagent).  This means Cassandra will be unable to measure object sizes 
 accurately and may consequently OOM.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-2527) Add ability to snapshot data as input to hadoop jobs

2013-12-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839472#comment-13839472
 ] 

Jonathan Ellis commented on CASSANDRA-2527:
---

bq. Not really feasible; Hadoop is a special case since we can seq scan 
sstables without having to fully open them (sample indexes, populate key 
cache, etc)

... so, I'm not sure how interesting that leaves this given that we're trying 
to do predicate pushdown for Hadoop queries that could be indexed, for instance.

 Add ability to snapshot data as input to hadoop jobs
 

 Key: CASSANDRA-2527
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2527
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremy Hanna
Assignee: Tyler Hobbs
Priority: Minor
  Labels: hadoop
 Fix For: 2.1


 It is desirable to have immutable inputs to hadoop jobs for the duration of 
 the job.  That way re-execution of individual tasks do not alter the output.  
 One way to accomplish this would be to snapshot the data that is used as 
 input to a job.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6420) CassandraStorage should not assume all DataBags are DefaultDataBags

2013-12-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6420:
--

Fix Version/s: (was: 1.2.12)
   (was: 2.0.2)
   (was: 2.1)
   2.0.4
   1.2.13

 CassandraStorage should not assume all DataBags are DefaultDataBags
 ---

 Key: CASSANDRA-6420
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6420
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
 Environment: All environments
Reporter: Mike Spertus
  Labels: pig
 Fix For: 1.2.13, 2.0.4

 Attachments: patch.txt

   Original Estimate: 1h
  Remaining Estimate: 1h

 CassandraStorage improperly assumes all DataBags are DefaultDataBags. As a 
 result, natural Pig code can't be used with CassandraStorage. For example:
 {quote}
 {{B = FOREACH A GENERATE $0, TOBAG(TOTUPLE($1, $2));}}
 {{STORE B into  'cassandra://MyKeySpace/MyColumnFamily' using 
 CassandraStorage();}}
 {quote}
 fails with a complaint that a {{NonSpillableDataBag}} can't be converted into 
 a {{DefaultDataBag}}.
 Since the {{CassandraStorage}} code only calls methods from {{DataBag}}, 
 there is no need for this artificial restriction (see the sketch below). After applying the attached 
 patch, the above code works fine, making CassandraStorage much easier to use.
 This is my first submission to Cassandra, so I apologize for any incorrect 
 process. Please let me know what I should do differently. In particular, I am 
 a little unclear where I should put the test. I am thinking I should put it 
 in ThriftColumnFamilyTest.java. Is this correct or should it be somewhere 
 else? I'll create a test as soon as I understand. 
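
For illustration, a rough sketch of the DataBag handling the description calls for (illustrative only, not the actual CassandraStorage diff):

{code}
import java.util.Iterator;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;

// Illustrative only: accept any DataBag implementation and iterate it through
// the interface instead of casting to DefaultDataBag, so bags such as the
// NonSpillableDataBag mentioned above work too.
final class BagHandlingSketch
{
    static void consume(Object field)
    {
        DataBag bag = (DataBag) field;
        for (Iterator<Tuple> it = bag.iterator(); it.hasNext(); )
        {
            Tuple t = it.next();
            // ... map each tuple to a Cassandra column here ...
        }
    }
}
{code}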



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6379) Replace index_interval with min/max_index_interval

2013-12-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6379:
--

Priority: Major  (was: Minor)

 Replace index_interval with min/max_index_interval
 --

 Key: CASSANDRA-6379
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6379
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 2.1


 As a continuation of the work in CASSANDRA-5519, we want to replace the 
 {{index_interval}} attribute of tables with {{min_index_interval}} and 
 {{max_index_interval}}.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6378) sstableloader does not support client encryption on Cassandra 2.0

2013-12-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6378:
--

Fix Version/s: (was: 2.0.2)
   2.0.4

 sstableloader does not support client encryption on Cassandra 2.0
 -

 Key: CASSANDRA-6378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6378
 Project: Cassandra
  Issue Type: Bug
Reporter: David Laube
  Labels: client, encryption, ssl, sstableloader
 Fix For: 2.0.4


 We have been testing backup/restore from one ring to another and we recently 
 stumbled upon an issue with sstableloader. When client_enc_enable: true, the 
 exception below is generated. However, when client_enc_enable is set to 
 false, the sstableloader is able to get to the point where it discovers 
 endpoints, connects to stream data, etc.
 ==BEGIN EXCEPTION==
 sstableloader --debug -d x.x.x.248,x.x.x.108,x.x.x.113 
 /tmp/import/keyspace_name/columnfamily_name
 Exception in thread "main" java.lang.RuntimeException: Could not retrieve 
 endpoint ranges:
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:226)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.thrift.transport.TTransportException: Frame size 
 (352518400) larger than max length (16384000)!
 at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
 at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
 at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_describe_partitioner(Cassandra.java:1292)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.describe_partitioner(Cassandra.java:1280)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:199)
 ... 2 more
 ==END EXCEPTION==



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-2527) Add ability to snapshot data as input to hadoop jobs

2013-12-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2527:
--

Priority: Minor  (was: Major)

 Add ability to snapshot data as input to hadoop jobs
 

 Key: CASSANDRA-2527
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2527
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremy Hanna
Assignee: Tyler Hobbs
Priority: Minor
  Labels: hadoop
 Fix For: 2.1


 It is desirable to have immutable inputs to hadoop jobs for the duration of 
 the job.  That way re-execution of individual tasks do not alter the output.  
 One way to accomplish this would be to snapshot the data that is used as 
 input to a job.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-4476) Support 2ndary index queries with only non-EQ clauses

2013-12-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839473#comment-13839473
 ] 

Jonathan Ellis commented on CASSANDRA-4476:
---

bq. we will need to modify SelectStatement to let queries with no-EQ clause 
pass validation

Is that just updating Type.allowsIndexQuery?  I also see EQ referenced in 
updateRestriction but I'm not really sure what's going on there.

bq. how do you estimate which index is likely to be the most selective

Well, we can compute the range of index partitions we'd need to scan, and we 
have stats on average cells-per-partition, so I think that gives us the 
cells-per-expression we need for highestSelectivityPredicate.
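
For illustration, a back-of-the-envelope version of that estimate (names and inputs are made up for the sketch, not Cassandra's actual API):

{code}
import java.util.Map;

// Illustrative only: estimated cost of scanning an index for a clause is
// (index partitions the clause's value range covers) x (mean cells per index
// partition); the clause with the smallest estimate is the most selective.
final class SelectivitySketch
{
    static String mostSelectiveClause(Map<String, Long> indexPartitionsInRange,
                                      Map<String, Double> meanCellsPerIndexPartition)
    {
        String best = null;
        double bestCost = Double.MAX_VALUE;
        for (Map.Entry<String, Long> e : indexPartitionsInRange.entrySet())
        {
            Double mean = meanCellsPerIndexPartition.get(e.getKey());
            double cost = e.getValue() * (mean == null ? 1.0 : mean);
            if (cost < bestCost)
            {
                bestCost = cost;
                best = e.getKey();
            }
        }
        return best;   // clause whose index scan should touch the fewest cells
    }
}
{code}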

 Support 2ndary index queries with only non-EQ clauses
 -

 Key: CASSANDRA-4476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4476
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1


 Currently, a query that uses 2ndary indexes must have at least one EQ clause 
 (on an indexed column). Given that indexed CFs are local (and use 
 LocalPartitioner that order the row by the type of the indexed column), we 
 should extend 2ndary indexes to allow querying indexed columns even when no 
 EQ clause is provided.
 As far as I can tell, the main problem to solve for this is to update 
 KeysSearcher.highestSelectivityPredicate(). I.e. how do we estimate the 
 selectivity of non-EQ clauses? I note however that if we can do that estimate 
 reasonably accurately, this might provide better performance even for index 
 queries that both EQ and non-EQ clauses, because some non-EQ clauses may have 
 a much better selectivity than EQ ones (say you index both the user country 
 and birth date, for SELECT * FROM users WHERE country = 'US' AND birthdate  
 'Jan 2009' AND birtdate  'July 2009', you'd better use the birthdate index 
 first).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6288) Make compaction a priority queue

2013-12-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6288:
--

Priority: Minor  (was: Major)

 Make compaction a priority queue
 

 Key: CASSANDRA-6288
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6288
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.1


 We should prioritize compacting CFs by how many reads/s each CF's preferred 
 candidate would save, divided by the number of bytes in its sstables.
 (Note that STCS currently divides by number of keys; ISTM that bytes will 
 work better since that does not penalize narrow rows.)
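
As an illustration only (not the actual CompactionManager code), the proposed ordering can be expressed as a comparator over a priority queue; the Candidate class and the figures below are hypothetical:

{code}
import java.util.Comparator;
import java.util.PriorityQueue;

public class CompactionPrioritySketch
{
    // Hypothetical summary of a CF's preferred compaction candidate.
    static final class Candidate
    {
        final String cfName;
        final double readsPerSecondSaved; // estimated reads/s the compaction would save
        final long bytesOnDisk;           // total bytes in the candidate sstables

        Candidate(String cfName, double readsPerSecondSaved, long bytesOnDisk)
        {
            this.cfName = cfName;
            this.readsPerSecondSaved = readsPerSecondSaved;
            this.bytesOnDisk = bytesOnDisk;
        }

        // reads/s saved divided by bytes (not keys), so narrow rows aren't penalized
        double priority()
        {
            return readsPerSecondSaved / Math.max(1, bytesOnDisk);
        }
    }

    public static void main(String[] args)
    {
        PriorityQueue<Candidate> queue =
                new PriorityQueue<>(Comparator.comparingDouble(Candidate::priority).reversed());
        queue.add(new Candidate("wide_rows", 120.0, 4L << 30));    // 4 GiB candidate
        queue.add(new Candidate("narrow_rows", 90.0, 512L << 20)); // 512 MiB candidate
        System.out.println("compact first: " + queue.poll().cfName); // narrow_rows wins
    }
}
{code}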



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-04 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839479#comment-13839479
 ] 

Mikhail Stepura edited comment on CASSANDRA-6413 at 12/4/13 11:29 PM:
--

[~jbellis]
In that case the obsolete files would never be deleted. Their filenames end 
with cacheType-CURRENT_VERSION.db, so they would fall through those two 
conditions.


was (Author: mishail):
[~jbellis]
In that case the obsolete files would never be deleted. Their filenames end 
with {cacheType-CURRENT_VERSION.db}, so they would fall through those two 
conditions.

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-04 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839479#comment-13839479
 ] 

Mikhail Stepura commented on CASSANDRA-6413:


[~jbellis]
In that case the obsolete files would never be deleted. Their filenames end 
with {cacheType-CURRENT_VERSION.db}, so they would fall through those two 
conditions.

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[10/10] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-04 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6772247f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6772247f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6772247f

Branch: refs/heads/trunk
Commit: 6772247f828e3740aff737b8c4463612ba8d4b17
Parents: e4d4472 cf83c81
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 17:56:45 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 17:56:45 2013 -0600

--
 CHANGES.txt  | 6 --
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6772247f/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6772247f/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--



[07/10] git commit: Merge remote-tracking branch 'origin/cassandra-2.0' into cassandra-2.0

2013-12-04 Thread jbellis
Merge remote-tracking branch 'origin/cassandra-2.0' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b892c09a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b892c09a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b892c09a

Branch: refs/heads/trunk
Commit: b892c09aca6a5a97553d3a02a008da7250498565
Parents: 447c64c 32dbe58
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 17:56:00 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 17:56:00 2013 -0600

--
 CHANGES.txt|  1 +
 .../cassandra/db/marshal/CollectionType.java   | 17 +
 .../org/apache/cassandra/db/marshal/ListType.java  |  2 ++
 .../org/apache/cassandra/db/marshal/MapType.java   |  2 ++
 .../org/apache/cassandra/db/marshal/SetType.java   |  2 ++
 .../org/apache/cassandra/service/MoveTest.java |  2 ++
 6 files changed, 26 insertions(+)
--




[05/10] git commit: simplify

2013-12-04 Thread jbellis
simplify


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/447c64c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/447c64c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/447c64c4

Branch: refs/heads/trunk
Commit: 447c64c407f653467eaca73c8b8eda9b29fa91a4
Parents: 6724964
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 17:40:24 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 17:40:24 2013 -0600

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/447c64c4/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 7cdc4e6..e84b819 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1118,7 +1118,7 @@ public class DatabaseDescriptor
 
     public static File getSerializedCachePath(String ksName, String cfName, CacheService.CacheType cacheType, String version)
     {
-        return new File(conf.saved_caches_directory + File.separator + ksName + "-" + cfName + "-" + cacheType + (version == null ? "" : "-" + version + ".db"));
+        return new File(conf.saved_caches_directory, ksName + "-" + cfName + "-" + cacheType + (version == null ? "" : "-" + version + ".db"));
     }
 
 public static int getDynamicUpdateInterval()
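
For context, a small standalone demonstration of the java.io.File behaviour the patch relies on (the directory and names below are made up): the two-argument constructor joins parent and child with the platform separator, which is what the removed File.separator concatenation was doing by hand.

{code}
import java.io.File;

public class CachePathDemo
{
    public static void main(String[] args)
    {
        String savedCachesDir = "/var/lib/cassandra/saved_caches"; // illustrative path
        String ksName = "Foo", cfName = "Bar", cacheType = "KeyCache", version = "b";
        String fileName = ksName + "-" + cfName + "-" + cacheType + "-" + version + ".db";

        File byConcatenation = new File(savedCachesDir + File.separator + fileName);
        File byConstructor   = new File(savedCachesDir, fileName);

        System.out.println(byConcatenation);                       // .../Foo-Bar-KeyCache-b.db
        System.out.println(byConstructor);                         // same path
        System.out.println(byConcatenation.equals(byConstructor)); // true on this platform
    }
}
{code}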



[02/10] git commit: CHANGES

2013-12-04 Thread jbellis
CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e40bc759
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e40bc759
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e40bc759

Branch: refs/heads/cassandra-2.0
Commit: e40bc759cd34bd6d8839bd115d5b395842410759
Parents: d8c4e89
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 16:59:16 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 16:59:16 2013 -0600

--
 CHANGES.txt | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e40bc759/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8e6cffa..09a3b07 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,9 +5,11 @@
  * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)
  * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172)
  * Improve gossip performance for typical messages (CASSANDRA-6409)
- * Throw IRE if a prepared has more markers than supported (CASSANDRA-5598)
+ * Throw IRE if a prepared statement has more markers than supported 
+   (CASSANDRA-5598)
  * Expose Thread metrics for the native protocol server (CASSANDRA-6234)
- * Change snapshot response message verb (CASSANDRA-6415)
+ * Change snapshot response message verb to INTERNAL to avoid dropping it 
+   (CASSANDRA-6415)
 * Warn when collection read has > 65K elements (CASSANDRA-5428)
 
 



[06/10] git commit: Merge remote-tracking branch 'origin/cassandra-2.0' into cassandra-2.0

2013-12-04 Thread jbellis
Merge remote-tracking branch 'origin/cassandra-2.0' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b892c09a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b892c09a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b892c09a

Branch: refs/heads/cassandra-2.0
Commit: b892c09aca6a5a97553d3a02a008da7250498565
Parents: 447c64c 32dbe58
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 17:56:00 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 17:56:00 2013 -0600

--
 CHANGES.txt|  1 +
 .../cassandra/db/marshal/CollectionType.java   | 17 +
 .../org/apache/cassandra/db/marshal/ListType.java  |  2 ++
 .../org/apache/cassandra/db/marshal/MapType.java   |  2 ++
 .../org/apache/cassandra/db/marshal/SetType.java   |  2 ++
 .../org/apache/cassandra/service/MoveTest.java |  2 ++
 6 files changed, 26 insertions(+)
--




[08/10] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-04 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cf83c81d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cf83c81d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cf83c81d

Branch: refs/heads/cassandra-2.0
Commit: cf83c81d85fb20b50f988c23743ba2510308bf42
Parents: b892c09 e40bc75
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 17:56:25 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 17:56:25 2013 -0600

--
 CHANGES.txt | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cf83c81d/CHANGES.txt
--
diff --cc CHANGES.txt
index a7ab215,09a3b07..d485a69
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,15 +1,20 @@@
 -1.2.13
 +2.0.4
 + * Reduce gossip memory use by interning VersionedValue strings 
(CASSANDRA-6410)
 + * Allow specifying datacenters to participate in a repair (CASSANDRA-6218)
 + * Fix divide-by-zero in PCI (CASSANDRA-6403)
 + * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
 + * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395)
 + * Expose a total memtable size metric for a CF (CASSANDRA-6391)
 + * cqlsh: hanlde symlinks properly (CASSANDRA-6425)
 +Merged from 1.2:
   * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
 - * Optimize FD phi calculation (CASSANDRA-6386)
 - * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 - * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)
   * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172)
   * Improve gossip performance for typical messages (CASSANDRA-6409)
-  * Throw IRE if a prepared has more markers than supported (CASSANDRA-5598)
+  * Throw IRE if a prepared statement has more markers than supported 
+(CASSANDRA-5598)
   * Expose Thread metrics for the native protocol server (CASSANDRA-6234)
-  * Change snapshot response message verb (CASSANDRA-6415)
+  * Change snapshot response message verb to INTERNAL to avoid dropping it 
+(CASSANDRA-6415)
   * Warn when collection read has > 65K elements (CASSANDRA-5428)
  
  



[03/10] git commit: CHANGES

2013-12-04 Thread jbellis
CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e40bc759
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e40bc759
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e40bc759

Branch: refs/heads/trunk
Commit: e40bc759cd34bd6d8839bd115d5b395842410759
Parents: d8c4e89
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 16:59:16 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 16:59:16 2013 -0600

--
 CHANGES.txt | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e40bc759/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8e6cffa..09a3b07 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,9 +5,11 @@
  * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)
  * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172)
  * Improve gossip performance for typical messages (CASSANDRA-6409)
- * Throw IRE if a prepared has more markers than supported (CASSANDRA-5598)
+ * Throw IRE if a prepared statement has more markers than supported 
+   (CASSANDRA-5598)
  * Expose Thread metrics for the native protocol server (CASSANDRA-6234)
- * Change snapshot response message verb (CASSANDRA-6415)
+ * Change snapshot response message verb to INTERNAL to avoid dropping it 
+   (CASSANDRA-6415)
 * Warn when collection read has > 65K elements (CASSANDRA-5428)
 
 



[04/10] git commit: simplify

2013-12-04 Thread jbellis
simplify


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/447c64c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/447c64c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/447c64c4

Branch: refs/heads/cassandra-2.0
Commit: 447c64c407f653467eaca73c8b8eda9b29fa91a4
Parents: 6724964
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 17:40:24 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 17:40:24 2013 -0600

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/447c64c4/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 7cdc4e6..e84b819 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1118,7 +1118,7 @@ public class DatabaseDescriptor
 
     public static File getSerializedCachePath(String ksName, String cfName, CacheService.CacheType cacheType, String version)
     {
-        return new File(conf.saved_caches_directory + File.separator + ksName + "-" + cfName + "-" + cacheType + (version == null ? "" : "-" + version + ".db"));
+        return new File(conf.saved_caches_directory, ksName + "-" + cfName + "-" + cacheType + (version == null ? "" : "-" + version + ".db"));
     }
 
 public static int getDynamicUpdateInterval()



[01/10] git commit: CHANGES

2013-12-04 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 d8c4e89b3 -> e40bc759c
  refs/heads/cassandra-2.0 32dbe5825 -> cf83c81d8
  refs/heads/trunk e4d447240 -> 6772247f8


CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e40bc759
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e40bc759
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e40bc759

Branch: refs/heads/cassandra-1.2
Commit: e40bc759cd34bd6d8839bd115d5b395842410759
Parents: d8c4e89
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 16:59:16 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 16:59:16 2013 -0600

--
 CHANGES.txt | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e40bc759/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8e6cffa..09a3b07 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,9 +5,11 @@
  * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)
  * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172)
  * Improve gossip performance for typical messages (CASSANDRA-6409)
- * Throw IRE if a prepared has more markers than supported (CASSANDRA-5598)
+ * Throw IRE if a prepared statement has more markers than supported 
+   (CASSANDRA-5598)
  * Expose Thread metrics for the native protocol server (CASSANDRA-6234)
- * Change snapshot response message verb (CASSANDRA-6415)
+ * Change snapshot response message verb to INTERNAL to avoid dropping it 
+   (CASSANDRA-6415)
 * Warn when collection read has > 65K elements (CASSANDRA-5428)
 
 



[09/10] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-04 Thread jbellis
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cf83c81d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cf83c81d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cf83c81d

Branch: refs/heads/trunk
Commit: cf83c81d85fb20b50f988c23743ba2510308bf42
Parents: b892c09 e40bc75
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Dec 4 17:56:25 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Dec 4 17:56:25 2013 -0600

--
 CHANGES.txt | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cf83c81d/CHANGES.txt
--
diff --cc CHANGES.txt
index a7ab215,09a3b07..d485a69
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,15 +1,20 @@@
 -1.2.13
 +2.0.4
 + * Reduce gossip memory use by interning VersionedValue strings 
(CASSANDRA-6410)
 + * Allow specifying datacenters to participate in a repair (CASSANDRA-6218)
 + * Fix divide-by-zero in PCI (CASSANDRA-6403)
 + * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
 + * Add sub-ms precision formats to the timestamp parser (CASSANDRA-6395)
 + * Expose a total memtable size metric for a CF (CASSANDRA-6391)
 + * cqlsh: hanlde symlinks properly (CASSANDRA-6425)
 +Merged from 1.2:
   * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
 - * Optimize FD phi calculation (CASSANDRA-6386)
 - * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 - * Don't list CQL3 table in CLI describe even if named explicitely 
(CASSANDRA-5750)
   * cqlsh: quote single quotes in strings inside collections (CASSANDRA-6172)
   * Improve gossip performance for typical messages (CASSANDRA-6409)
-  * Throw IRE if a prepared has more markers than supported (CASSANDRA-5598)
+  * Throw IRE if a prepared statement has more markers than supported 
+(CASSANDRA-5598)
   * Expose Thread metrics for the native protocol server (CASSANDRA-6234)
-  * Change snapshot response message verb (CASSANDRA-6415)
+  * Change snapshot response message verb to INTERNAL to avoid dropping it 
+(CASSANDRA-6415)
   * Warn when collection read has > 65K elements (CASSANDRA-5428)
  
  



[jira] [Updated] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6413:
--

Attachment: 6413-v2.txt

v2 attached to clean out both old- and new-format files of the correct type.

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6413-v2.txt, CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-04 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839517#comment-13839517
 ] 

Mikhail Stepura commented on CASSANDRA-6413:


[~jbellis]
Do you think there will be problems with the simple 
{{contains(cacheType.toString())}} approach?
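
To illustrate the trade-off being discussed (a hypothetical sketch, not the attached patch): a plain contains() matches the cache type anywhere in the file name, so a keyspace or column family whose name happens to include "KeyCache" would match as well, whereas checking the dash-delimited component that follows the ks-cf prefix is stricter.

{code}
import java.util.Arrays;
import java.util.List;

public class CacheFileFilterSketch
{
    static boolean naiveMatch(String fileName, String cacheType)
    {
        return fileName.contains(cacheType);
    }

    static boolean componentMatch(String fileName, String ksName, String cfName, String cacheType)
    {
        // saved caches are named <ks>-<cf>-<CacheType>[-<version>].db
        String prefix = ksName + "-" + cfName + "-" + cacheType;
        return fileName.equals(prefix + ".db") || fileName.startsWith(prefix + "-");
    }

    public static void main(String[] args)
    {
        List<String> files = Arrays.asList(
                "Foo-Bar-KeyCache-b.db",        // real KeyCache file for ks Foo, cf Bar
                "Foo-KeyCache-RowCache-b.db");  // a cf unluckily named "KeyCache"

        for (String f : files)
            System.out.printf("%s  contains=%b  component=%b%n",
                              f, naiveMatch(f, "KeyCache"), componentMatch(f, "Foo", "Bar", "KeyCache"));
    }
}
{code}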

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6413-v2.txt, CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6448) Give option to stream just primary replica via sstableloader

2013-12-04 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839747#comment-13839747
 ] 

Nick Bailey commented on CASSANDRA-6448:


CASSANDRA-4756 mentions this approach as an option. But a more complete 
solution might be to merge the snapshots to resolve inconsistencies first. 
Perhaps this can serve as the ticket for just this approach.

 Give option to stream just primary replica via sstableloader
 

 Key: CASSANDRA-6448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Adam Hattrell
Priority: Minor

 A fair number of people use sstableloader to migrate data between clusters.
 Without vnodes it's usually possible to pick a set of nodes that ensures that 
 you don't stream multiple copies of your sstables (where RF > 1) - but with 
 vnodes that seems to be much harder.
 Would it be feasible to get sstableloader only to stream data from primary 
 replicas?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-4959) CQLSH insert help has typo

2013-12-04 Thread Andy Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839798#comment-13839798
 ] 

Andy Zhao commented on CASSANDRA-4959:
--

There are two more typos in this help text:
* there is an extra ] in *\[USING TIMESTAMP <timestamp>\]* which should be 
removed. 
* the first ] in *\[AND TTL <timeToLive>\]\]* should be changed to 

For now we could just fix the original issue as below:
{quote}
\[USING TIMESTAMP <timestamp> | TTL <timeToLive> \[AND TTL <timeToLive> | 
TIMESTAMP <timestamp> \] \];
{quote}
It seems verbose, but at least it won't mislead people into thinking AND TTL 
<timeToLive> can be used without USING TIMESTAMP. The same fix needs to be 
applied to other help topics too, such as the UPDATE command. 

I think having the AND keyword in the grammar is a mistake because the parser 
doesn't need it and it seems verbose, but this is out of scope.

 CQLSH insert help has typo
 --

 Key: CASSANDRA-4959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4959
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation & website
Affects Versions: 1.2.0 beta 2
Reporter: Edward Capriolo
Priority: Trivial

 [cqlsh 2.3.0 | Cassandra 1.2.0-beta2-SNAPSHOT | CQL spec 3.0.0 | Thrift 
 protocol 19.35.0]
 Use HELP for help.
 cqlsh> help INSERT
 INSERT INTO [<keyspace>.]<tablename>
 ( <colname1>, <colname2> [, <colname3> [, ...]] )
VALUES ( <colval1>, <colval2> [, <colval3> [, ...]] )
[USING TIMESTAMP <timestamp>]
  [AND TTL <timeToLive>]];
 Should be. 
 {quote}
 [AND TTL <timeToLive>]];
 {quote}
 Also it was not clear to me initially that you could just do:
 {quote}
 USING TTL <timeToLive>
 {quote}
 But maybe that is just me.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-4959) CQLSH insert help has typo

2013-12-04 Thread Andy Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839798#comment-13839798
 ] 

Andy Zhao edited comment on CASSANDRA-4959 at 12/5/13 4:45 AM:
---

There are two more typos in this help text:
* there is an extra ] in *\[USING TIMESTAMP <timestamp>\]* which should be 
removed. 
* the first ] in *\[AND TTL <timeToLive>\]\]* should be changed to 

For now we could just fix these typos as below:
{quote}
\[USING TIMESTAMP <timestamp> | TTL <timeToLive> \[AND TTL <timeToLive> | 
TIMESTAMP <timestamp> \] \];
{quote}
It seems verbose, but at least it won't mislead people into thinking AND TTL 
<timeToLive> can be used without USING TIMESTAMP. The same fix needs to be 
applied to other help topics too, such as the UPDATE command. 

I think having the AND keyword in the grammar is a mistake because the parser 
doesn't need it and it seems verbose, but this is out of scope.


was (Author: andy888):
There are two more typos in this help text:
* there is an extra ] in *\[USING TIMESTAMP <timestamp>\]* which should be 
removed. 
* the first ] in *\[AND TTL <timeToLive>\]\]* should be changed to 

For now we could just fix the original issue as below:
{quote}
\[USING TIMESTAMP <timestamp> | TTL <timeToLive> \[AND TTL <timeToLive> | 
TIMESTAMP <timestamp> \] \];
{quote}
It seems verbose, but at least it won't mislead people into thinking AND TTL 
<timeToLive> can be used without USING TIMESTAMP. The same fix needs to be 
applied to other help topics too, such as the UPDATE command. 

I think having the AND keyword in the grammar is a mistake because the parser 
doesn't need it and it seems verbose, but this is out of scope.

 CQLSH insert help has typo
 --

 Key: CASSANDRA-4959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4959
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation & website
Affects Versions: 1.2.0 beta 2
Reporter: Edward Capriolo
Priority: Trivial

 [cqlsh 2.3.0 | Cassandra 1.2.0-beta2-SNAPSHOT | CQL spec 3.0.0 | Thrift 
 protocol 19.35.0]
 Use HELP for help.
 cqlsh> help INSERT
 INSERT INTO [<keyspace>.]<tablename>
 ( <colname1>, <colname2> [, <colname3> [, ...]] )
VALUES ( <colval1>, <colval2> [, <colval3> [, ...]] )
[USING TIMESTAMP <timestamp>]
  [AND TTL <timeToLive>]];
 Should be. 
 {quote}
 [AND TTL <timeToLive>]];
 {quote}
 Also it was not clear to me initially that you could just do:
 {quote}
 USING TTL <timeToLive>
 {quote}
 But maybe that is just me.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6418) auto_snapshots are not removable via 'nodetool clearsnapshot'

2013-12-04 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839830#comment-13839830
 ] 

Mikhail Stepura commented on CASSANDRA-6418:


{code}
[javac] C:\Users\mishail\workspace\cassandra\src\java\org\apache\cassandra\db\compaction\CompactionManager.java:810: error: cannot find symbol
[javac] cfs.clearSnapshot(snapshotName);
[javac]                   ^
[javac]   symbol:   method clearSnapshot(String)
[javac]   location: variable cfs of type ColumnFamilyStore
{code}

For *getAllKSDirectories*:
* {{List<File> snapshotDirs = new ArrayList();}} - it's a raw data type. You 
probably meant either {{newArrayList()}} or {{new ArrayList<>()}}
* {{new File(dataDirectory + "/" + ksName)}} and {{File(dataDirectory + "/" + 
ksName + "/" + cfDir)}} - you should use 
{{org.apache.cassandra.db.Directories.join(String...)}} instead of that 
concatenation.
* I think it would be better to use one of the {{File.listFiles}} methods instead 
of {{File.list()}} 
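
A minimal, self-contained sketch of what those suggestions add up to (a hypothetical helper, not the attached patch; it sticks to plain java.io.File rather than Directories.join so it can run on its own):

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SnapshotDirsSketch
{
    // Collect <dataDirectory>/<ksName>/<cfDir> directories for one keyspace.
    static List<File> getKeyspaceCfDirectories(String dataDirectory, String ksName)
    {
        List<File> cfDirs = new ArrayList<>();                 // parameterized, not a raw type
        File ksDir = new File(dataDirectory, ksName);          // no manual "/" concatenation
        File[] children = ksDir.listFiles(File::isDirectory);  // File objects, directories only
        if (children != null)
            cfDirs.addAll(Arrays.asList(children));
        return cfDirs;
    }

    public static void main(String[] args)
    {
        for (File dir : getKeyspaceCfDirectories("/var/lib/cassandra/data", "Keyspace1"))
            System.out.println(dir);
    }
}
{code}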

 auto_snapshots are not removable via 'nodetool clearsnapshot'
 -

 Key: CASSANDRA-6418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6418
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: auto_snapshot: true
Reporter: J. Ryan Earl
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.4

 Attachments: 6418_cassandra-2.0.patch


 Snapshots of deleted CFs created via the auto_snapshot configuration 
 parameter appear to not be tracked.  The result is that 'nodetool 
 clearsnapshot <keyspace with deleted CFs>' does nothing, and short of 
 manually removing the files from the filesystem, deleted CFs remain 
 indefinitely taking up space.
 I'm not sure if this is intended, but it seems pretty counter-intuitive.  I 
 haven't found any documentation that indicates auto_snapshots would be 
 ignored by 'nodetool clearsnapshot'.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839840#comment-13839840
 ] 

Jonathan Ellis commented on CASSANDRA-6413:
---

Wouldn't that match CF or KS names or even other parts of the path?

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6413-v2.txt, CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-04 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839849#comment-13839849
 ] 

Mikhail Stepura commented on CASSANDRA-6413:


That will match, for sure. But what are the chances?
Nevertheless, I'm ok with your patch.

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6413-v2.txt, CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)