[jira] [Commented] (CASSANDRA-4131) Integrate Hive support to be in core cassandra

2013-07-04 Thread Rohit Rai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699805#comment-13699805
 ] 

Rohit Rai commented on CASSANDRA-4131:
--

Sorry for the mess there, I was just trying to port CFS and the Hive metastore 
too... but those tests don't work right now, so I've put that on hold; getting it to 
work with CQL3 Column Families is a priority for me right now, so I will come 
back to those later.

Just for the Hive handler, please look at the (cas-support-simple-hive) branch -
https://github.com/milliondreams/hive/tree/cas-support-simple-hive

All the test cases (however few there are) pass there and it is working 
perfectly with Thrift/compact-storage column families.

 Integrate Hive support to be in core cassandra
 --

 Key: CASSANDRA-4131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4131
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeremy Hanna
Assignee: Edward Capriolo
  Labels: hadoop, hive

 The standalone hive support (at https://github.com/riptano/hive) would be 
 great to have in-tree so that people don't have to go out to github to 
 download it and wonder if it's a left-for-dead external shim.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4131) Integrate Hive support to be in core cassandra

2013-07-04 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699835#comment-13699835
 ] 

Cyril Scetbon commented on CASSANDRA-4131:
--

bq. getting it to work with CQL3 Column Families is a priority for me right now
Okay, that's exactly the feature I'm waiting for :) You should find inspiration 
in [CASSANDRA-5234|https://issues.apache.org/jira/browse/CASSANDRA-5234] for 
things like paging


 Integrate Hive support to be in core cassandra
 --

 Key: CASSANDRA-4131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4131
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeremy Hanna
Assignee: Edward Capriolo
  Labels: hadoop, hive

 The standalone hive support (at https://github.com/riptano/hive) would be 
 great to have in-tree so that people don't have to go out to github to 
 download it and wonder if it's a left-for-dead external shim.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5723) the datetime comparetion is not right in cql3!

2013-07-04 Thread zhouhero (JIRA)
zhouhero created CASSANDRA-5723:
---

 Summary: the datetime comparetion is not right in cql3!
 Key: CASSANDRA-5723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5723
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: zhouhero
 Fix For: 1.2.3


- this bug be confirmed by fellow:


1.create table like fellow:

create table test2 (
id varchar,
c varchar,
create_date timestamp,
primary key(id)
);

create index idx_test2_c on test2 (c);
create index idx_test2_create_date on test2 (create_date);


2.insert data like fellow;

cqlsh:pgl> update test2 set create_date='1950-01-01', c='1' where id='111';
cqlsh:pgl> update test2 set create_date='1917-01-01', c='1' where id='111';
cqlsh:pgl> update test2 set create_date='2013-01-01', c='1' where id='111';

3.select data :
cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;

id | c | create_date
-+---+--
111 | 1 | 2012-12-31 15:00:00+0000

4. add data:
update test2 set create_date='1917-05-01', c='1' where id='111';

5.select data:
cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;

id | c | create_date
-+---+--
111 | 1 | 1917-04-30 15:00:00+0000
↑
the search result is not right!
it should be fellow:

id | c | create_date
-+---+--
111 | 1 | 2012-12-31 15:00:00+0000

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5723) the datetime compare is not right in cql3!

2013-07-04 Thread zhouhero (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouhero updated CASSANDRA-5723:


Summary: the datetime compare is not right in cql3!  (was: the datetime 
comparetion is not right in cql3!)

 the datetime compare is not right in cql3!
 --

 Key: CASSANDRA-5723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5723
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: zhouhero
 Fix For: 1.2.3


 - this bug can be confirmed by fellow:
 1.create table like fellow:
 create table test2 (
 id varchar,
 c varchar,
 create_date timestamp,
 primary key(id)
 );
 create index idx_test2_c on test2 (c);
 create index idx_test2_create_date on test2 (create_date);
 2.insert data like fellow;
 cqlsh:pgl> update test2 set create_date='1950-01-01', c='1' where id='111';
 cqlsh:pgl> update test2 set create_date='1917-01-01', c='1' where id='111';
 cqlsh:pgl> update test2 set create_date='2013-01-01', c='1' where id='111';
 3.select data :
 cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;
 id | c | create_date
 -+---+--
 111 | 1 | 2012-12-31 15:00:00+0000
 4. add data:
 update test2 set create_date='1917-05-01', c='1' where id='111';
 5.select data:
 cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;
 id | c | create_date
 -+---+--
 111 | 1 | 1917-04-30 15:00:00+0000
 ↑
 the search result is not right!
 it should be fellow:
 id | c | create_date
 -+---+--
 111 | 1 | 2012-12-31 15:00:00+0000

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5723) the datetime comparetion is not right in cql3!

2013-07-04 Thread zhouhero (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouhero updated CASSANDRA-5723:


Description: 
- this bug can be confirmed by fellow:


1.create table like fellow:

create table test2 (
id varchar,
c varchar,
create_date timestamp,
primary key(id)
);

create index idx_test2_c on test2 (c);
create index idx_test2_create_date on test2 (create_date);


2.insert data like fellow;

cqlsh:pgl> update test2 set create_date='1950-01-01', c='1' where id='111';
cqlsh:pgl> update test2 set create_date='1917-01-01', c='1' where id='111';
cqlsh:pgl> update test2 set create_date='2013-01-01', c='1' where id='111';

3.select data :
cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;

id | c | create_date
-+---+--
111 | 1 | 2012-12-31 15:00:00+0000

4. add data:
update test2 set create_date='1917-05-01', c='1' where id='111';

5.select data:
cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;

id | c | create_date
-+---+--
111 | 1 | 1917-04-30 15:00:00+0000
↑
the search result is not right!
it should be fellow:

id | c | create_date
-+---+--
111 | 1 | 2012-12-31 15:00:00+0000

  was:
- this bug be confirmed by fellow:


1.create table like fellow:

create table test2 (
id varchar,
c varchar,
create_date timestamp,
primary key(id)
);

create index idx_test2_c on test2 (c);
create index idx_test2_create_date on test2 (create_date);


2.insert data like fellow;

cqlsh:pgl> update test2 set create_date='1950-01-01', c='1' where id='111';
cqlsh:pgl> update test2 set create_date='1917-01-01', c='1' where id='111';
cqlsh:pgl> update test2 set create_date='2013-01-01', c='1' where id='111';

3.select data :
cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;

id | c | create_date
-+---+--
111 | 1 | 2012-12-31 15:00:00+0000

4. add data:
update test2 set create_date='1917-05-01', c='1' where id='111';

5.select data:
cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;

id | c | create_date
-+---+--
111 | 1 | 1917-04-30 15:00:00+0000
↑
the search result is not right!
it should be fellow:

id | c | create_date
-+---+--
111 | 1 | 2012-12-31 15:00:00+0000


 the datetime comparetion is not right in cql3!
 --

 Key: CASSANDRA-5723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5723
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: zhouhero
 Fix For: 1.2.3


 - this bug can be confirmed by fellow:
 1.create table like fellow:
 create table test2 (
 id varchar,
 c varchar,
 create_date timestamp,
 primary key(id)
 );
 create index idx_test2_c on test2 (c);
 create index idx_test2_create_date on test2 (create_date);
 2.insert data like fellow;
 cqlsh:pgl> update test2 set create_date='1950-01-01', c='1' where id='111';
 cqlsh:pgl> update test2 set create_date='1917-01-01', c='1' where id='111';
 cqlsh:pgl> update test2 set create_date='2013-01-01', c='1' where id='111';
 3.select data :
 cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;
 id | c | create_date
 -+---+--
 111 | 1 | 2012-12-31 15:00:00+0000
 4. add data:
 update test2 set create_date='1917-05-01', c='1' where id='111';
 5.select data:
 cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;
 id | c | create_date
 -+---+--
 111 | 1 | 1917-04-30 15:00:00+0000
 ↑
 the search result is not right!
 it should be fellow:
 id | c | create_date
 -+---+--
 111 | 1 | 2012-12-31 15:00:00+0000

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5723) the datetime compare not right in cql3!

2013-07-04 Thread zhouhero (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouhero updated CASSANDRA-5723:


Summary: the datetime compare not right in cql3!  (was: the datetime 
compare is not right in cql3!)

 the datetime compare not right in cql3!
 ---

 Key: CASSANDRA-5723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5723
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: zhouhero
 Fix For: 1.2.3


 - this bug can be confirmed by fellow:
 1.create table like fellow:
 create table test2 (
 id varchar,
 c varchar,
 create_date timestamp,
 primary key(id)
 );
 create index idx_test2_c on test2 (c);
 create index idx_test2_create_date on test2 (create_date);
 2.insert data like fellow;
 cqlsh:pgl> update test2 set create_date='1950-01-01', c='1' where id='111';
 cqlsh:pgl> update test2 set create_date='1917-01-01', c='1' where id='111';
 cqlsh:pgl> update test2 set create_date='2013-01-01', c='1' where id='111';
 3.select data :
 cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;
 id | c | create_date
 -+---+--
 111 | 1 | 2012-12-31 15:00:00+0000
 4. add data:
 update test2 set create_date='1917-05-01', c='1' where id='111';
 5.select data:
 cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;
 id | c | create_date
 -+---+--
 111 | 1 | 1917-04-30 15:00:00+0000
 ↑
 the search result is not right!
 it should be fellow:
 id | c | create_date
 -+---+--
 111 | 1 | 2012-12-31 15:00:00+0000

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5677) Performance improvements of RangeTombstones/IntervalTree

2013-07-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700035#comment-13700035
 ] 

Sylvain Lebresne commented on CASSANDRA-5677:
-

So first, let's remark how inefficient our current use of the IntervalTree is. 
I wrote a small benchmark test (1 node, locally, nothing fancy) that does the 
following:
* Creates the following table: CREATE TABLE test (k int, v int, PRIMARY KEY (k, v))
* Inserts N (CQL3) rows for a given (fixed) partition key (so: INSERT INTO test(k, v) VALUES (0, n)).
* Deletes those N (CQL3) rows (DELETE FROM test WHERE k=0 AND v=n). This involves inserting a range tombstone (because it's not a compact table).
* Queries all rows for that partition key (SELECT * FROM test WHERE k=0), thus getting no results. I also did the same query in reversed order to exercise that code path too.
I ran that 10 times (with a different partition key for each run) and timed all 
operations. For N=2K (so pretty small), on trunk the results on my machine are:
{noformat}
        | Insertions | Deletions  | Query      | Reversed query
 Run 0  |  3418.0ms  | 36950.6ms  | 26100.5ms  |      26147.3ms
 Run 1  |  2295.7ms  | 36073.0ms  | 28388.8ms  |      28127.0ms
 Run 2  |  1641.2ms  | 36119.4ms  | 26953.1ms  |      26177.8ms
 Run 3  |  1647.0ms  | 30383.9ms  | 28118.1ms  |      27737.7ms
 Run 4  |  1472.9ms  | 35913.1ms  | 28172.3ms  |      28046.6ms
 Run 5  |   679.8ms  | 30472.8ms  | 28197.5ms  |      27756.0ms
 Run 6  |  1417.5ms  | 30428.8ms  | 28022.0ms  |      27826.3ms
 Run 7  |   657.7ms  | 30366.9ms  | 28047.5ms  |      28081.4ms
 Run 8  |   662.8ms  | 30369.6ms  | 28123.5ms  |      27768.7ms
 Run 9  |   667.2ms  | 30459.5ms  | 32821.0ms  |      32430.0ms
 Avg    |  1456.0ms  | 32753.8ms  | 28294.4ms  |      28009.9ms
 8 last |  1105.8ms  | 31814.3ms  | 28556.9ms  |      28228.1ms
{noformat}
Even ignoring the first 2 runs (to let the JVM warm up), both deletion and query 
take about 30 seconds each! That's obviously very broken.
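
For reference, the shape of one benchmark run looks roughly like the sketch 
below (the execute helper is hypothetical and stands in for whatever client 
sends the CQL3 statements; this is not the actual test code):
{noformat}
import java.util.function.Consumer;

// Rough sketch of one benchmark run as described above: insert N rows in one
// partition, delete them (each delete adds a range tombstone), then query the
// partition forward and in reversed order, timing each phase.
public class RangeTombstoneBench
{
    static void runOnce(Consumer<String> execute, int partitionKey, int n)
    {
        long t0 = System.nanoTime();
        for (int v = 0; v < n; v++)
            execute.accept(String.format("INSERT INTO test (k, v) VALUES (%d, %d)", partitionKey, v));
        long t1 = System.nanoTime();

        for (int v = 0; v < n; v++)
            execute.accept(String.format("DELETE FROM test WHERE k = %d AND v = %d", partitionKey, v));
        long t2 = System.nanoTime();

        execute.accept(String.format("SELECT * FROM test WHERE k = %d", partitionKey));
        long t3 = System.nanoTime();

        execute.accept(String.format("SELECT * FROM test WHERE k = %d ORDER BY v DESC", partitionKey));
        long t4 = System.nanoTime();

        System.out.printf("insert=%.1fms delete=%.1fms query=%.1fms reversed=%.1fms%n",
                          (t1 - t0) / 1e6, (t2 - t1) / 1e6, (t3 - t2) / 1e6, (t4 - t3) / 1e6);
    }
}
{noformat}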

Now, Fabien's patch does fix the brokenness. After rebasing it to trunk (for 
fairness, since my tests are on trunk), and for N=10K (so 5x more than the 
previous test; the reason I've only used 2K on bare trunk is that it's too long 
with 10K :)) I get:
{noformat}
        | Insertions | Deletions  | Query      | Reversed query
 Run 0  |  3460.4ms  |  2575.7ms  |    69.7ms  |         93.7ms
 Run 1  |  1223.7ms  |  1772.9ms  |    64.3ms  |         57.4ms
 Run 2  |  1416.7ms  |   744.3ms  |    25.8ms  |         27.9ms
 Run 3  |   673.0ms  |   298.5ms  |    39.3ms  |         29.4ms
 Run 4  |   470.5ms  |   666.8ms  |    31.7ms  |         25.4ms
 Run 5  |   303.0ms  |   591.8ms  |    34.9ms  |         26.4ms
 Run 6  |   512.9ms  |   293.0ms  |    26.3ms  |         28.1ms
 Run 7  |   437.2ms  |   595.0ms  |    39.0ms  |         24.8ms
 Run 8  |   295.6ms  |   494.2ms  |    32.5ms  |         23.7ms
 Run 9  |   533.8ms  |   258.7ms  |    32.7ms  |         25.6ms
 Avg    |   932.7ms  |   829.1ms  |    39.6ms  |         36.2ms
 8 last |   580.3ms  |   492.8ms  |    32.8ms  |         26.4ms
{noformat}
So, it's sane again (the query is a lot faster than the writes because my test 
does the inserts/deletes sequentially, one at a time; I was mostly interested in 
read time anyway).  It's worth noting that it's not really that our current 
centered interval tree implementation is bad in itself, it's just that you 
can't add new intervals once it is built, which makes it ill-suited for range 
tombstones (but it's fine for our other use case of storing sstables).


However, as hinted in my previous comment, we can do better and generally 
improve our handling of range tombstones by using the following properties:
# we don't care about overlapping range tombstones. If we have, say, the following 
range tombstones: [0, 10]@3, [5, 8]@1, [8, 15]@4 (which we currently all store 
as-is), then we'd be fine just storing: [0, 8]@3, [8, 15]@4. And in fact, 
storing the 
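
To illustrate the equivalence claimed above, a small self-contained check 
(illustration only, not the patch's code): for deciding whether a column is 
deleted, only the highest tombstone timestamp covering its position matters, 
and the overlapping set and the simplified set agree on that everywhere.
{noformat}
public class RangeTombstoneEquivalence
{
    // {start, end, markedForDeleteAt}
    static final long[][] RAW        = { {0, 10, 3}, {5, 8, 1}, {8, 15, 4} };
    static final long[][] SIMPLIFIED = { {0, 8, 3}, {8, 15, 4} };

    // Highest deletion timestamp covering a given position,
    // or Long.MIN_VALUE if no range tombstone covers it.
    static long maxDeletionAt(long position, long[][] tombstones)
    {
        long max = Long.MIN_VALUE;
        for (long[] t : tombstones)
            if (t[0] <= position && position <= t[1])
                max = Math.max(max, t[2]);
        return max;
    }

    public static void main(String[] args)
    {
        // Whatever position we look at, both representations agree on whether
        // (and at which timestamp) it is deleted.
        for (long pos = -2; pos <= 17; pos++)
            if (maxDeletionAt(pos, RAW) != maxDeletionAt(pos, SIMPLIFIED))
                throw new AssertionError("mismatch at " + pos);
        System.out.println("RAW and SIMPLIFIED are equivalent for deletion checks");
    }
}
{noformat}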

[jira] [Created] (CASSANDRA-5724) Cassandra upgrade

2013-07-04 Thread Or Sher (JIRA)
Or Sher created CASSANDRA-5724:
--

 Summary: Cassandra upgrade 
 Key: CASSANDRA-5724
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5724
 Project: Cassandra
  Issue Type: Improvement
Reporter: Or Sher




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5724) Timeouts for slice/rangeslice queries while some nodes versions are lower than 1.2 and some higher.

2013-07-04 Thread Or Sher (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Or Sher updated CASSANDRA-5724:
---

  Component/s: Core
  Description: 
When doing a rolling upgrade from 1.0.* or 1.1.* to 1.2.*, some slice or range 
slice queries executed against a 1.2.* node fail due to a timeout exception:

[default@orTestKS] list orTestCF;
Using default limit of 100
Using default column limit of 100
null
TimedOutException()
at 
org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
at 
org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1489)
at 
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
at 
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:210)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:337)


It seems this issue is because of the new parameter in 1.2.*: 
internode_compression, which is set to 'all' by default.

It seems that setting this parameter to 'none' solves the problem.
I think the question is whether Cassandra should somehow support nodes with 
different configurations for this parameter.

Affects Version/s: 1.2.0
  Summary: Timeouts for slice/rangeslice queries while some nodes 
versions are lower than 1.2 and some higher.  (was: Cassandra upgrade )

 Timeouts for slice/rangeslice queries while some nodes versions are lower 
 than 1.2 and some higher.
 ---

 Key: CASSANDRA-5724
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5724
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Or Sher

 When doing a rolling upgrade from 1.0.* or 1.1.* to 1.2.*, some slice or range 
 slice queries executed against a 1.2.* node fail due to a timeout exception:
 [default@orTestKS] list orTestCF;
 Using default limit of 100
 Using default column limit of 100
 null
 TimedOutException()
   at 
 org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
   at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1489)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:210)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:337)
 It seems this issue is because of the new parameter in 1.2.*: 
 internode_compression, which is set to 'all' by default.
 It seems that setting this parameter to 'none' solves the problem.
 I think the question is whether Cassandra should somehow support nodes with 
 different configurations for this parameter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5725) Silently failing messages in case of schema not fully propagated

2013-07-04 Thread Sergio Bossa (JIRA)
Sergio Bossa created CASSANDRA-5725:
---

 Summary: Silently failing messages in case of schema not fully 
propagated
 Key: CASSANDRA-5725
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5725
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Sergio Bossa


When a new keyspace and/or column family is created on a multi-node cluster 
(at least three nodes), and a mutation is then executed on the new column family, 
the operation sometimes silently fails by timing out.

I tracked this down to the schema not being fully propagated to all nodes. 
Here's what happens:
1) Node 1 receives the create keyspace/column family request.
2) The same node receives a mutation request at CL.QUORUM and sends to other 
nodes too.
3) Upon receiving the mutation request, other nodes try to deserialize it and 
fail in doing so if the schema is not fully propagated, i.e. because they don't 
find the mutated column family.
4) The connection between node 1 and the failed node is dropped, and the 
request on the former hangs until timing out.

Here is the underlying exception, I had to tweak several log levels to get it: 
{noformat}
INFO 13:11:39,441 IOException reading from socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find 
cfId=a31c7604-0e40-393b-82d7-ba3d910ad50a
at 
org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:184)
at 
org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:94)
at 
org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:397)
at 
org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:407)
at 
org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:367)
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:207)
at 
org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:139)
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
{noformat}

Finally, there's probably a correlated failure happening during repairs of 
newly created/mutated column family, causing the repair process to hang forever 
as follows:
{noformat}
AntiEntropySessions:1 daemon prio=5 tid=7fe981148000 nid=0x11abea000 in 
Object.wait() [11abe9000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
at java.lang.Object.wait(Object.java:485)
at 
org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
- locked 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
at 
org.apache.cassandra.service.AntiEntropyService$RepairSession.runMayThrow(AntiEntropyService.java:695)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:680)

http-8983-1 daemon prio=5 tid=7fe97d24d000 nid=0x11a5c8000 in Object.wait() 
[11a5c6000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
at java.lang.Object.wait(Object.java:485)
at 
org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
- locked 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
at 
org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2442)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
org.apache.cassandra.service.StorageService.forceTableRepairRange(StorageService.java:2409)
at 
org.apache.cassandra.service.StorageService.forceTableRepair(StorageService.java:2387)
at 
com.datastax.bdp.cassandra.index.solr.SolrCoreResourceManager.repairResources(SolrCoreResourceManager.java:693)
at 

git commit: Fix testing CQL2 key aliases

2013-07-04 Thread slebresne
Updated Branches:
  refs/heads/trunk 67ccdabfe -> b7e49b3ab


Fix testing CQL2 key aliases


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b7e49b3a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b7e49b3a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b7e49b3a

Branch: refs/heads/trunk
Commit: b7e49b3ab16ad601df76b57ba47a6c685f28578d
Parents: 67ccdab
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Thu Jul 4 15:32:41 2013 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Thu Jul 4 15:32:41 2013 +0200

--
 src/java/org/apache/cassandra/cql/QueryProcessor.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7e49b3a/src/java/org/apache/cassandra/cql/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql/QueryProcessor.java
index 9e437b1..e68aa7f 100644
--- a/src/java/org/apache/cassandra/cql/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql/QueryProcessor.java
@@ -270,7 +270,7 @@ public class QueryProcessor
     public static void validateKeyAlias(CFMetaData cfm, String key) throws InvalidRequestException
     {
         assert key.toUpperCase().equals(key); // should always be uppercased by caller
-        String realKeyAlias = cfm.getCQL2KeyName();
+        String realKeyAlias = cfm.getCQL2KeyName().toUpperCase();
         if (!realKeyAlias.equals(key))
             throw new InvalidRequestException(String.format("Expected key '%s' to be present in WHERE clause for '%s'", realKeyAlias, cfm.cfName));
     }



[jira] [Commented] (CASSANDRA-5702) ALTER RENAME is broken in trunk

2013-07-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700069#comment-13700069
 ] 

Sylvain Lebresne commented on CASSANDRA-5702:
-

I had forgotten a toUpperCase() call when checking the CQL2 key alias. I've 
committed the trivial fix as b7e49b3.

 ALTER RENAME is broken in trunk
 ---

 Key: CASSANDRA-5702
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5702
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 5702.txt


 CASSANDRA-5125 has broken {{ALTER RENAME}} when the column is a default alias 
 (for thrift column families where the PK columns haven't been renamed yet).
 The problem is basically that while we assign default aliases to PK columns 
 when they don't have one, we currently fake those default aliases and do 
 not persist them. Concretely, CFDefinition is aware of them, but CFMetaData 
 is not, which breaks renaming post CASSANDRA-5125.
 We could fix rename punctually, but there is another related problem: for the 
 same reason, if you try to create an index on a column that is a non-renamed 
 default alias, this doesn't work, with the arguably confusing message "No 
 column definition found for column X". Here again, we could fix it 
 punctually, but it starts to sound like we need a more general fix.
 So I suggest we stop faking those default aliases, and instead just 
 create real aliases (that are known to CFMetaData and persisted in the 
 schema) when there is none. After all, from a user point of view, why should 
 a default column name be special? And on top of fixing the issues above, 
 this also:
 # fixes CASSANDRA-5489 in a somewhat simpler way
 # makes it easier for clients reading the schema CFs. They won't have to infer 
 the default aliases anymore.
 The only theoretical downside is that we lose the information of whether a given 
 CQL3 column name is one assigned by default or one set up by the user, 
 but given the user can rename those column names anyway, I'm not sure this 
 matters in any way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5702) ALTER RENAME is broken in trunk

2013-07-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-5702.
-

Resolution: Fixed

 ALTER RENAME is broken in trunk
 ---

 Key: CASSANDRA-5702
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5702
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 5702.txt


 CASSANDRA-5125 has broken {{ALTER RENAME}} when the column is a default alias 
 (for thrift column families where the PK columns haven't been renamed yet).
 The problem is basically that while we assign default aliases to PK columns 
 when they don't have one, we currently fake those default aliases and do 
 not persist them. Concretely, CFDefinition is aware of them, but CFMetaData 
 is not, which breaks renaming post CASSANDRA-5125.
 We could fix rename punctually, but there is another related problem: for the 
 same reason, if you try to create an index on a column that is a non-renamed 
 default alias, this doesn't work, with the arguably confusing message "No 
 column definition found for column X". Here again, we could fix it 
 punctually, but it starts to sound like we need a more general fix.
 So I suggest we stop faking those default aliases, and instead just 
 create real aliases (that are known to CFMetaData and persisted in the 
 schema) when there is none. After all, from a user point of view, why should 
 a default column name be special? And on top of fixing the issues above, 
 this also:
 # fixes CASSANDRA-5489 in a somewhat simpler way
 # makes it easier for clients reading the schema CFs. They won't have to infer 
 the default aliases anymore.
 The only theoretical downside is that we lose the information of whether a given 
 CQL3 column name is one assigned by default or one set up by the user, 
 but given the user can rename those column names anyway, I'm not sure this 
 matters in any way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5667) Change timestamps used in CAS ballot proposals to be more resilient to clock skew

2013-07-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700073#comment-13700073
 ] 

Sylvain Lebresne commented on CASSANDRA-5667:
-

lgtm, +1

 Change timestamps used in CAS ballot proposals to be more resilient to clock 
 skew
 -

 Key: CASSANDRA-5667
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5667
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0 beta 1
 Environment: n/a
Reporter: Nick Puz
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0 beta 1

 Attachments: 5667.txt


 The current time is used to generate the timeuuid used for CAS ballot 
 proposals, with the logic that if a newer proposal exists then the current one 
 needs to complete that and re-propose. The problem is that if a machine has 
 clock skew and drifts into the future, it will propose with a large timestamp 
 (which will get accepted), but then subsequent proposals with lower (but 
 correct) timestamps will not be able to proceed. This will prevent CAS write 
 operations and also reads at serializable consistency level. 
 The workaround is to initially propose with the current time (current behavior) 
 but, if the proposal fails due to a larger existing one, re-propose (after 
 completing the existing one if necessary) with the max of (currentTime, 
 mostRecent+1, proposed+1).
 Since small drift is normal between different nodes in the same datacenter, 
 this can happen even if NTP is working properly and a write hits one node and 
 a subsequent serialized read hits another. In the case of NTP config issues 
 (or OS bugs with time, esp. around DST) the unavailability window could be much 
 larger.  
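
A minimal sketch of the timestamp selection in the workaround described above 
(hypothetical helper name, time units left abstract; not the attached patch):
{noformat}
// Start from the current time, but if a proposal failed because a larger
// ballot timestamp already exists, re-propose with a value strictly greater
// than both the most recently seen ballot and our own failed proposal.
static long nextBallotTimestamp(long currentTime, long mostRecentSeen, long lastProposed)
{
    return Math.max(currentTime, Math.max(mostRecentSeen + 1, lastProposed + 1));
}
{noformat}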

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5677) Performance improvements of RangeTombstones/IntervalTree

2013-07-04 Thread Fabien Rousseau (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700093#comment-13700093
 ] 

Fabien Rousseau commented on CASSANDRA-5677:


I had a quick look at the patch (I'll take more time to review it next week), 
and being faster is really great. Keeping only the latest range tombstone (in 
fact, in our use case, we often overwrite range tombstones) was something I 
also had in mind but kept for a later optimization: I wrongly assumed that 
they were kept for a real reason (like repair, for example).

I definitely think your approach is better and the performance numbers confirm it.
I really think that this patch should be available for 1.2 (either as a patch 
to apply, or directly in 1.2.X).
If in 1.2.X, an option could be added to the cassandra.yaml file to switch 
implementations (I just quickly checked and the on-disk format seems 
compatible...).
In 1.2.X: make the current implementation the default to avoid introducing too 
many changes (but users having performance trouble could still switch after 
doing some tests. Also note that it should be possible to log something if more 
than X range tombstones are read, advising to switch 
implementations...).
In 2.0: make the RangeTombstoneList the default.
(These are just raw ideas...)

I can try to rebase your patch for 1.2 next week if you're interested...

 Performance improvements of RangeTombstones/IntervalTree
 

 Key: CASSANDRA-5677
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5677
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Fabien Rousseau
Priority: Minor
 Attachments: 5677-new-IntervalTree-implementation.patch


 Using range tombstones massively leads to bad response times (i.e. 100-500 
 range tombstones per row).
 After investigation, it seems that the culprit is how the DeletionInfo is 
 merged. Each time a RangeTombstone is added into the DeletionInfo, the whole 
 IntervalTree is rebuilt (thus, if you have 100 tombstones in one row, then 
 100 instances of IntervalTree are created, the first one having one interval, 
 the second one 2 intervals, ... the 100th one: 100 intervals...)
 It seems that once the IntervalTree is built, it is not possible to add a new 
 Interval. The idea is to replace the IntervalTree implementation with another 
 one which supports inserting intervals.
 Attached is a proposed patch which:
  - renames the IntervalTree implementation to IntervalTreeCentered (the 
 renaming is inspired by: http://en.wikipedia.org/wiki/Interval_tree)
  - adds a new implementation IntervalTreeAvl (which is described here: 
 http://en.wikipedia.org/wiki/Interval_tree#Augmented_tree and here: 
 http://en.wikipedia.org/wiki/AVL_tree )
  - adds a new interface IIntervalTree to abstract the implementation
  - adds a new configuration option (interval_tree_provider) which allows 
 choosing between the two implementations (defaults to the previous 
 IntervalTreeCentered)
  - updates the IntervalTreeTest unit tests to test both implementations
  - creates a mini benchmark between the two implementations (tree creation, 
 point lookup, interval lookup)
  - creates a mini benchmark between the two implementations when merging 
 DeletionInfo (which shows a big performance improvement when using 500 
 tombstones for a row)
 This patch applies to the 1.2 branch...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (CASSANDRA-5619) CAS UPDATE for a lost race: save round trip by returning column values

2013-07-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reopened CASSANDRA-5619:
-


I think I'm having a problem getting that issue right. Now it's CQL3 that is 
kind of broken. More precisely, if you do:
{noformat}
UPDATE foo SET v=3 WHERE k=0 IF v=2
{noformat}
and there is no row at all for k=0, then CQL3 currently returns an empty result 
set. The intention was that it should return a result with {{v=null}} in it, but:
# it's not the case, so I'd need to fix it
# I realized that this was not totally ideal, because returning {{v=null}} would 
somewhat suggest that the row exists but v is null, while in fact the row doesn't exist.

So I'm not sure what's the best course of action here. Do we consider that it's ok 
not being able to distinguish between "the row exists but the value of the 
column in the condition is null" and "the row doesn't exist" in that specific 
case (in which case we still need to fix it to return null as said above), or 
do we add back a result boolean column to tell us if the statement applied?

 CAS UPDATE for a lost race: save round trip by returning column values
 --

 Key: CASSANDRA-5619
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5619
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0 beta 1
Reporter: Blair Zajac
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 5619_thrift_fixup.txt, 5619.txt


 Looking at the new CAS CQL3 support examples [1], if one lost a race for an 
 UPDATE, to save a round trip to get the current values to decide if you need 
 to perform your work, could the columns that were used in the IF clause also 
 be returned to the caller?  Maybe the column values from the SET part 
 could also be returned.
 I don't know if this is generally useful though.
 In the case of creating a new user account with a given username which is the 
 partition key, if one lost the race to another person creating an account 
 with the same username, it doesn't matter to the loser what the column values 
 are, just that they lost.
 I'm new to Cassandra, so maybe there are other use cases, such as doing an 
 incremental amount of work on a row.  In pure Java projects I've done while 
 loops around AtomicReference#compareAndSet() until the work was done on 
 the referenced object, to handle multiple threads each making forward progress 
 in updating the referenced object.
 [1] https://github.com/riptano/cassandra-dtest/blob/master/cql_tests.py#L3044

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5715) CAS on 'primary key only' table

2013-07-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5715:


Attachment: 0002-Support-updating-the-PK-only.txt
0001-Conditions-on-INSERT.txt

bq. We could special case it then as UPDATE test SET PRIMARY KEY WHERE k=0

Wfm, attached a trivial patch for that.

Also attaching a patch to allow conditions on INSERT. I'll note that the patch 
allows all the same conditions that UPDATE supports, meaning that you can write:
{noformat}
INSERT INTO test(c1, c2, c3) VALUES(0, 1, 2) IF c2 = 4
{noformat}
which, arguably, is weird from a SQL point of view, but I guess it's not 
weirder than
{noformat}
UPDATE test SET c2=1, c3=2 WHERE c1=0 IF NOT EXISTS
{noformat}


 CAS on 'primary key only' table
 ---

 Key: CASSANDRA-5715
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor
 Attachments: 0001-Conditions-on-INSERT.txt, 
 0002-Support-updating-the-PK-only.txt


 Given a table with only a primary key, like
 {noformat}
 CREATE TABLE test (k int PRIMARY KEY)
 {noformat}
 there is currently no way to CAS a row in that table into existence because:
 # INSERT doesn't currently support IF
 # UPDATE has no way to update such a table
 So we should probably allow IF conditions on INSERT statements.
 In addition (or alternatively), we could work on allowing UPDATE to update 
 such a table. One motivation for that could be to make UPDATE always be more 
 general than INSERT. That is, currently there is a bunch of operations that 
 INSERT cannot do (counter increments, collection appends), but that primary 
 key table case is, afaik, the only case where you *need* to use INSERT. 
 However, because CQL forces segregation of PK values into the WHERE clause and 
 not the SET one, the only syntax that I can see working would be:
 {noformat}
 UPDATE WHERE k=0;
 {noformat}
 which maybe is too ugly to allow?
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5723) the datetime compare not right in cql3!

2013-07-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700164#comment-13700164
 ] 

Sylvain Lebresne commented on CASSANDRA-5723:
-

bq. it should be fellow

No, it should be empty, because you're overwriting the same row all the time, 
so after the update in step 4, the row '111' should contain 
{{create_date='1917-05-01'}} and you could rightfully expect not to get that 
row back with your query.

The reason it's actually returned is that, for some reason, DateType.compare() 
(the comparator used for the timestamp CQL3 type) uses an unsigned comparison, 
and since 1917 is before the unix epoch, its timestamp is negative and 
wrongfully sorts after any post-epoch date. This is *not* a CQL3-specific bug in 
particular.

The simple fix would be to change the comparison to be signed, but there are 
obviously backward compatibility concerns (since DateType has done that for 
years). In any case, in the meantime, avoid pre-epoch dates.
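
For illustration, a minimal self-contained sketch (not Cassandra's code) of why 
an unsigned byte-by-byte comparison mis-orders pre-epoch timestamps: a negative 
millisecond value serialized as a big-endian long has its sign bit set, so its 
leading byte compares greater than that of any non-negative value.
{noformat}
import java.nio.ByteBuffer;

public class UnsignedTimestampCompare
{
    // Compare two big-endian byte arrays lexicographically, treating bytes as
    // unsigned (roughly what an unsigned comparator does).
    static int compareUnsigned(byte[] a, byte[] b)
    {
        for (int i = 0; i < a.length; i++)
        {
            int x = a[i] & 0xFF, y = b[i] & 0xFF;
            if (x != y)
                return Integer.compare(x, y);
        }
        return 0;
    }

    static byte[] serialize(long millis)
    {
        return ByteBuffer.allocate(8).putLong(millis).array();
    }

    public static void main(String[] args)
    {
        long preEpoch  = -1672531200000L; // roughly 1917, negative millis
        long postEpoch =  1356998400000L; // roughly 2013-01-01, positive millis

        // Signed comparison: 1917 sorts before 2013, as expected.
        System.out.println(Long.compare(preEpoch, postEpoch));                              // negative
        // Unsigned byte comparison: 1917 sorts *after* 2013, which is the bug above.
        System.out.println(compareUnsigned(serialize(preEpoch), serialize(postEpoch)));     // positive
    }
}
{noformat}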

 the datetime compare not right in cql3!
 ---

 Key: CASSANDRA-5723
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5723
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: zhouhero
 Fix For: 1.2.3


 - this bug can be confirmed by fellow:
 1.create table like fellow:
 create table test2 (
 id varchar,
 c varchar,
 create_date timestamp,
 primary key(id)
 );
 create index idx_test2_c on test2 (c);
 create index idx_test2_create_date on test2 (create_date);
 2.insert data like fellow;
 cqlsh:pgl> update test2 set create_date='1950-01-01', c='1' where id='111';
 cqlsh:pgl> update test2 set create_date='1917-01-01', c='1' where id='111';
 cqlsh:pgl> update test2 set create_date='2013-01-01', c='1' where id='111';
 3.select data :
 cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;
 id | c | create_date
 -+---+--
 111 | 1 | 2012-12-31 15:00:00+0000
 4. add data:
 update test2 set create_date='1917-05-01', c='1' where id='111';
 5.select data:
 cqlsh:pgl> select * from test2 where c='1' and create_date > '2011-01-01 12:00:01' ALLOW FILTERING ;
 id | c | create_date
 -+---+--
 111 | 1 | 1917-04-30 15:00:00+0000
 ↑
 the search result is not right!
 it should be fellow:
 id | c | create_date
 -+---+--
 111 | 1 | 2012-12-31 15:00:00+0000

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5725) Silently failing messages in case of schema not fully propagated

2013-07-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700177#comment-13700177
 ] 

Jonathan Ellis commented on CASSANDRA-5725:
---

This is working as designed.  What do you think should happen instead?

 Silently failing messages in case of schema not fully propagated
 

 Key: CASSANDRA-5725
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5725
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Sergio Bossa

 When a new keyspace and/or column family is created on a multi-node cluster 
 (at least three nodes), and a mutation is then executed on the new column family, 
 the operation sometimes silently fails by timing out.
 I tracked this down to the schema not being fully propagated to all nodes. 
 Here's what happens:
 1) Node 1 receives the create keyspace/column family request.
 2) The same node receives a mutation request at CL.QUORUM and sends to other 
 nodes too.
 3) Upon receiving the mutation request, other nodes try to deserialize it and 
 fail in doing so if the schema is not fully propagated, i.e. because they 
 don't find the mutated column family.
 4) The connection between node 1 and the failed node is dropped, and the 
 request on the former hangs until timing out.
 Here is the underlying exception, I had to tweak several log levels to get 
 it: 
 {noformat}
 INFO 13:11:39,441 IOException reading from socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find 
 cfId=a31c7604-0e40-393b-82d7-ba3d910ad50a
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:184)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:94)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:397)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:407)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:367)
   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:207)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:139)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
 {noformat}
 Finally, there's probably a correlated failure happening during repairs of 
 newly created/mutated column family, causing the repair process to hang 
 forever as follows:
 {noformat}
 AntiEntropySessions:1 daemon prio=5 tid=7fe981148000 nid=0x11abea000 in 
 Object.wait() [11abe9000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.AntiEntropyService$RepairSession.runMayThrow(AntiEntropyService.java:695)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:680)
 http-8983-1 daemon prio=5 tid=7fe97d24d000 nid=0x11a5c8000 in Object.wait() 
 [11a5c6000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2442)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 

[jira] [Commented] (CASSANDRA-5677) Performance improvements of RangeTombstones/IntervalTree

2013-07-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700181#comment-13700181
 ] 

Jonathan Ellis commented on CASSANDRA-5677:
---

Adding switches doesn't really reduce the risk, it just adds complexity.

 Performance improvements of RangeTombstones/IntervalTree
 

 Key: CASSANDRA-5677
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5677
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Fabien Rousseau
Priority: Minor
 Attachments: 5677-new-IntervalTree-implementation.patch


 Using range tombstones massively leads to bad response times (i.e. 100-500 
 range tombstones per row).
 After investigation, it seems that the culprit is how the DeletionInfo is 
 merged. Each time a RangeTombstone is added into the DeletionInfo, the whole 
 IntervalTree is rebuilt (thus, if you have 100 tombstones in one row, then 
 100 instances of IntervalTree are created, the first one having one interval, 
 the second one 2 intervals, ... the 100th one: 100 intervals...)
 It seems that once the IntervalTree is built, it is not possible to add a new 
 Interval. The idea is to replace the IntervalTree implementation with another 
 one which supports inserting intervals.
 Attached is a proposed patch which:
  - renames the IntervalTree implementation to IntervalTreeCentered (the 
 renaming is inspired by: http://en.wikipedia.org/wiki/Interval_tree)
  - adds a new implementation IntervalTreeAvl (which is described here: 
 http://en.wikipedia.org/wiki/Interval_tree#Augmented_tree and here: 
 http://en.wikipedia.org/wiki/AVL_tree )
  - adds a new interface IIntervalTree to abstract the implementation
  - adds a new configuration option (interval_tree_provider) which allows 
 choosing between the two implementations (defaults to the previous 
 IntervalTreeCentered)
  - updates the IntervalTreeTest unit tests to test both implementations
  - creates a mini benchmark between the two implementations (tree creation, 
 point lookup, interval lookup)
  - creates a mini benchmark between the two implementations when merging 
 DeletionInfo (which shows a big performance improvement when using 500 
 tombstones for a row)
 This patch applies to the 1.2 branch...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5725) Silently failing messages in case of schema not fully propagated

2013-07-04 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13700187#comment-13700187
 ] 

Sergio Bossa commented on CASSANDRA-5725:
-

Well, in an ideal world, given C* has the notion of a schema, mutations should be 
validated against the schema of the coordinator node and associated with that schema 
version, which should be unique and monotonic (we have the former, not the 
latter): this way, replica nodes could tell if they're missing a schema 
update and request it (which would solve this bug), as well as recognize if a 
partition is ongoing and react accordingly.
By the way, this probably translates into using vector clocks for schema updates, 
and I understand C* has not been designed this way, so let's forget about the 
ideal world.

A more pragmatic solution may be to implement a consistency level for schema 
updates too: right now we only wait for the schema to be applied on the local 
node, while supporting all consistency levels would allow subsequent updates to 
succeed under the same CL specification: i.e., applying a schema update at 
CL.QUORUM would allow subsequent updates at the same CL to succeed too.

Finally, a trivial one may just be to make the schema problem explicit with a 
specific exception.

Certainly, in my opinion, masking a schema problem with a timeout exception is 
pretty confusing, and may lead to several hours spent in debugging/testing 
or (if the user isn't smart enough to do that) to increasing the timeouts, which is 
a bad solution to the wrong problem.

Unless I'm missing something in the current design/implementation, which may 
well be the case :)

 Silently failing messages in case of schema not fully propagated
 

 Key: CASSANDRA-5725
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5725
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Sergio Bossa

 When a new keyspace and/or column family is created on a multi-node cluster 
 (at least three nodes), and a mutation is then executed on the new column family, 
 the operation sometimes silently fails by timing out.
 I tracked this down to the schema not being fully propagated to all nodes. 
 Here's what happens:
 1) Node 1 receives the create keyspace/column family request.
 2) The same node receives a mutation request at CL.QUORUM and sends to other 
 nodes too.
 3) Upon receiving the mutation request, other nodes try to deserialize it and 
 fail in doing so if the schema is not fully propagated, i.e. because they 
 don't find the mutated column family.
 4) The connection between node 1 and the failed node is dropped, and the 
 request on the former hangs until timing out.
 Here is the underlying exception, I had to tweak several log levels to get 
 it: 
 {noformat}
 INFO 13:11:39,441 IOException reading from socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find 
 cfId=a31c7604-0e40-393b-82d7-ba3d910ad50a
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:184)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:94)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:397)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:407)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:367)
   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:207)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:139)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
 {noformat}
 Finally, there's probably a correlated failure happening during repairs of 
 newly created/mutated column family, causing the repair process to hang 
 forever as follows:
 {noformat}
 AntiEntropySessions:1 daemon prio=5 tid=7fe981148000 nid=0x11abea000 in 
 Object.wait() [11abe9000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.AntiEntropyService$RepairSession.runMayThrow(AntiEntropyService.java:695)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 

[jira] [Commented] (CASSANDRA-5725) Silently failing messages in case of schema not fully propagated

2013-07-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700213#comment-13700213
 ] 

Jonathan Ellis commented on CASSANDRA-5725:
---

bq. Finally, a trivial one may just be to make the schema problem explicit with 
a specific exception.

This is not trivial since replicas only ack writes on success.

Here's how it's supposed to work: you perform your schema change, then you 
check for schema agreement before starting to write to the new table.
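
For illustration, here is a minimal sketch of such a pre-write schema-agreement 
check from a Thrift client. It assumes an already-connected Cassandra.Client; 
the retry count, sleep interval, and the handling of the "UNREACHABLE" key are 
my assumptions, not project recommendations, so check them against your client 
library before relying on them.

{code}
import java.util.List;
import java.util.Map;

import org.apache.cassandra.thrift.Cassandra;

public class SchemaAgreementCheck
{
    // Poll describe_schema_versions() until all reachable nodes report a single
    // schema version, or give up after maxAttempts polls.
    public static boolean waitForSchemaAgreement(Cassandra.Client client, int maxAttempts, long sleepMs)
            throws Exception
    {
        for (int i = 0; i < maxAttempts; i++)
        {
            Map<String, List<String>> versions = client.describe_schema_versions();
            versions.remove("UNREACHABLE"); // ignore nodes that are currently down
            if (versions.size() == 1)
                return true;                // every reachable node agrees on the schema
            Thread.sleep(sleepMs);
        }
        return false;
    }
}
{code}

Only once this returns true would the client start writing to the new table.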

 Silently failing messages in case of schema not fully propagated
 

 Key: CASSANDRA-5725
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5725
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Sergio Bossa

 When a new keyspace and/or column family is created on a multi-node cluster 
 (at least three nodes), and a mutation is then executed on that new column family, 
 the operation sometimes silently fails by timing out.
 I tracked this down to the schema not being fully propagated to all nodes. 
 Here's what happens:
 1) Node 1 receives the create keyspace/column family request.
 2) The same node receives a mutation request at CL.QUORUM and sends to other 
 nodes too.
 3) Upon receiving the mutation request, other nodes try to deserialize it and 
 fail in doing so if the schema is not fully propagated, i.e. because they 
 don't find the mutated column family.
 4) The connection between node 1 and the failed node is dropped, and the 
 request on the former hangs until timing out.
 Here is the underlying exception, I had to tweak several log levels to get 
 it: 
 {noformat}
 INFO 13:11:39,441 IOException reading from socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find 
 cfId=a31c7604-0e40-393b-82d7-ba3d910ad50a
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:184)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:94)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:397)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:407)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:367)
   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:207)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:139)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
 {noformat}
 Finally, there's probably a correlated failure happening during repairs of 
 newly created/mutated column family, causing the repair process to hang 
 forever as follows:
 {noformat}
 AntiEntropySessions:1 daemon prio=5 tid=7fe981148000 nid=0x11abea000 in 
 Object.wait() [11abe9000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.AntiEntropyService$RepairSession.runMayThrow(AntiEntropyService.java:695)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:680)
 http-8983-1 daemon prio=5 tid=7fe97d24d000 nid=0x11a5c8000 in Object.wait() 
 [11a5c6000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2442)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 

git commit: use max(current time from system clock, inProgress + 1) as CAS ballot patch by jbellis; reviewed by slebresne for CASSANDRA-5667

2013-07-04 Thread jbellis
Updated Branches:
  refs/heads/trunk b7e49b3ab -> 8e003d842


use max(current time from system clock, inProgress + 1) as CAS ballot
patch by jbellis; reviewed by slebresne for CASSANDRA-5667


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e003d84
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e003d84
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e003d84

Branch: refs/heads/trunk
Commit: 8e003d842619bfce3585761684e7ba4114be89db
Parents: b7e49b3
Author: Jonathan Ellis jbel...@apache.org
Authored: Sun Jun 30 23:25:09 2013 -0700
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Jul 4 10:02:05 2013 -0700

--
 CHANGES.txt |   2 +-
 .../apache/cassandra/cql/QueryProcessor.java|   2 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |  10 +-
 .../apache/cassandra/service/StorageProxy.java  | 122 +--
 .../cassandra/service/paxos/PaxosState.java |   2 +-
 .../org/apache/cassandra/utils/UUIDGen.java |  11 ++
 6 files changed, 79 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e003d84/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 281a0aa..74f1753 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -9,7 +9,7 @@
  * Removed compatibility with pre-1.2.5 sstables and network messages
(CASSANDRA-5511)
  * removed PBSPredictor (CASSANDRA-5455)
- * CAS support (CASSANDRA-5062, 5441, 5442, 5443, 5619)
+ * CAS support (CASSANDRA-5062, 5441, 5442, 5443, 5619, 5667)
  * Leveled compaction performs size-tiered compactions in L0 
(CASSANDRA-5371, 5439)
  * Add yaml network topology snitch for mixed ec2/other envs (CASSANDRA-5339)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e003d84/src/java/org/apache/cassandra/cql/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql/QueryProcessor.java
index e68aa7f..8e63021 100644
--- a/src/java/org/apache/cassandra/cql/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql/QueryProcessor.java
@@ -72,7 +72,7 @@ public class QueryProcessor
     public static final String DEFAULT_KEY_NAME = CFMetaData.DEFAULT_KEY_ALIAS.toUpperCase();
 
     private static List<org.apache.cassandra.db.Row> getSlice(CFMetaData metadata, SelectStatement select, List<ByteBuffer> variables, long now)
-    throws InvalidRequestException, ReadTimeoutException, UnavailableException, IsBootstrappingException, WriteTimeoutException
+    throws InvalidRequestException, ReadTimeoutException, UnavailableException, IsBootstrappingException
     {
         List<ReadCommand> commands = new ArrayList<ReadCommand>();
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e003d84/src/java/org/apache/cassandra/db/SystemKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java 
b/src/java/org/apache/cassandra/db/SystemKeyspace.java
index d518468..e686f16 100644
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@ -797,15 +797,17 @@ public class SystemKeyspace
 return Math.max(3 * 3600, metadata.getGcGraceSeconds());
 }
 
-    public static void savePaxosCommit(Commit commit, boolean eraseInProgressProposal)
+    public static void savePaxosCommit(Commit commit, UUID inProgressBallot)
     {
-        String preserveCql = "UPDATE %s USING TIMESTAMP %d AND TTL %d SET most_recent_commit_at = %s, most_recent_commit = 0x%s WHERE row_key = 0x%s AND cf_id = %s";
+        String preserveCql = "UPDATE %s USING TIMESTAMP %d AND TTL %d SET in_progress_ballot = %s, most_recent_commit_at = %s, most_recent_commit = 0x%s WHERE row_key = 0x%s AND cf_id = %s";
         // identical except adds proposal = null
-        String eraseCql = "UPDATE %s USING TIMESTAMP %d AND TTL %d SET proposal = null, most_recent_commit_at = %s, most_recent_commit = 0x%s WHERE row_key = 0x%s AND cf_id = %s";
-        processInternal(String.format(eraseInProgressProposal ? eraseCql : preserveCql,
+        String eraseCql = "UPDATE %s USING TIMESTAMP %d AND TTL %d SET proposal = null, in_progress_ballot = %s, most_recent_commit_at = %s, most_recent_commit = 0x%s WHERE row_key = 0x%s AND cf_id = %s";
+        boolean proposalAfterCommit = inProgressBallot.timestamp() > commit.ballot.timestamp();
+        processInternal(String.format(proposalAfterCommit ? preserveCql : eraseCql,
                                       PAXOS_CF,
                                       UUIDGen.microsTimestamp(commit.ballot),
  

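In essence, the change applies the ballot-selection rule from the commit 
message. The following standalone sketch isolates that rule with hypothetical 
names (it is not the project's actual UUIDGen API, just an illustration):

{code}
public class BallotSelectionSketch
{
    // Hypothetical helper: pick the ballot timestamp (in microseconds) for a new
    // CAS proposal as max(wall-clock now, latest in-progress ballot + 1), so a
    // proposer whose clock lags a previously seen ballot can still make progress.
    static long chooseBallotMicros(long nowMicros, long inProgressMicros)
    {
        return Math.max(nowMicros, inProgressMicros + 1);
    }

    public static void main(String[] args)
    {
        long now = 1000L;          // this node's clock
        long inProgress = 5000L;   // ballot from a proposer whose clock ran ahead
        System.out.println(chooseBallotMicros(now, inProgress)); // prints 5001
    }
}
{code}

The point is that a node with a slow clock generates a strictly larger ballot 
than the one it saw in progress, instead of stalling behind it.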
[jira] [Resolved] (CASSANDRA-5667) Change timestamps used in CAS ballot proposals to be more resilient to clock skew

2013-07-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5667.
---

Resolution: Fixed

committed

 Change timestamps used in CAS ballot proposals to be more resilient to clock 
 skew
 -

 Key: CASSANDRA-5667
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5667
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0 beta 1
 Environment: n/a
Reporter: Nick Puz
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0 beta 1

 Attachments: 5667.txt


 The current time is used to generate the timeuuid used for CAS ballots 
 proposals with the logic that if a newer proposal exists then the current one 
 needs to complete that and re-propose. The problem is that if a machine has 
 clock skew and drifts into the future it will propose with a large timestamp 
 (which will get accepted) but then subsequent proposals with lower (but 
 correct) timestamps will not be able to proceed. This will prevent CAS write 
 operations and also reads at serializable consistency level. 
 The workaround is to initially propose with the current time (current behavior), 
 but if the proposal fails due to a larger existing one, re-propose (after 
 completing the existing proposal if necessary) with the max of (currentTime, 
 mostRecent+1, proposed+1).
 Since small drift is normal between different nodes in the same datacenter, 
 this can happen even if NTP is working properly and a write hits one node and 
 a subsequent serialized read hits another. In the case of NTP config issues 
 (or OS bugs around time, especially DST) the unavailability window could be 
 much larger.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5724) Timeouts for slice/rangeslice queries while some nodes versions are lower than 1.2 and some higher.

2013-07-04 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5724:
--

Assignee: Ryan McGuire

Can we reproduce? There is already code to special-case this and never use 
compression when speaking to a 1.1 node.
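
In spirit, that special-casing is a version gate on the outbound connection. A 
simplified, hypothetical sketch of such a check (the constants and method are 
stand-ins, not the actual OutboundTcpConnection code):

{code}
public class CompressionGateSketch
{
    // Hypothetical constants standing in for messaging protocol versions.
    static final int VERSION_11 = 4;
    static final int VERSION_12 = 6;

    // Only compress internode traffic when the local internode_compression
    // setting allows it and the peer speaks a version that supports it.
    static boolean shouldCompress(boolean internodeCompressionEnabled, int peerVersion)
    {
        return internodeCompressionEnabled && peerVersion >= VERSION_12;
    }
}
{code}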

 Timeouts for slice/rangeslice queries while some nodes versions are lower 
 than 1.2 and some higher.
 ---

 Key: CASSANDRA-5724
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5724
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Or Sher
Assignee: Ryan McGuire

 When doing a rolling upgrade from 1.0.* or 1.1.* to 1.2.*, some slice or range 
 slice queries executed against a 1.2.* node fail due to a timeout exception:
 [default@orTestKS] list orTestCF;
 Using default limit of 100
 Using default column limit of 100
 null
 TimedOutException()
   at 
 org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
   at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1489)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:210)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:337)
 It seems this issue is caused by the new parameter in 1.2.*, 
 internode_compression, which is set to 'all' by default.
 Setting this parameter to 'none' seems to solve the problem.
 I think the question is whether Cassandra should somehow support nodes with 
 different configurations for this parameter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5725) Silently failing messages in case of schema not fully propagated

2013-07-04 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700221#comment-13700221
 ] 

Sergio Bossa commented on CASSANDRA-5725:
-

bq. Here's how it's supposed to work: you perform your schema change, then you 
check for schema agreement before starting to write to the new table.

Sure, you can do that, but it doesn't look like a great solution to me :)

By the way, if any change to fix this is too big at the moment, or really not 
worth it, feel free to close this as won't fix; we'll live with it.

 Silently failing messages in case of schema not fully propagated
 

 Key: CASSANDRA-5725
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5725
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Sergio Bossa

 When a new keyspace and/or column family is created on a multi-node cluster 
 (at least three nodes), and a mutation is then executed on that new column family, 
 the operation sometimes silently fails by timing out.
 I tracked this down to the schema not being fully propagated to all nodes. 
 Here's what happens:
 1) Node 1 receives the create keyspace/column family request.
 2) The same node receives a mutation request at CL.QUORUM and sends to other 
 nodes too.
 3) Upon receiving the mutation request, other nodes try to deserialize it and 
 fail in doing so if the schema is not fully propagated, i.e. because they 
 don't find the mutated column family.
 4) The connection between node 1 and the failed node is dropped, and the 
 request on the former hangs until timing out.
 Here is the underlying exception, I had to tweak several log levels to get 
 it: 
 {noformat}
 INFO 13:11:39,441 IOException reading from socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find 
 cfId=a31c7604-0e40-393b-82d7-ba3d910ad50a
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:184)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:94)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:397)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:407)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:367)
   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:207)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:139)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
 {noformat}
 Finally, there's probably a correlated failure happening during repairs of 
 newly created/mutated column family, causing the repair process to hang 
 forever as follows:
 {noformat}
 AntiEntropySessions:1 daemon prio=5 tid=7fe981148000 nid=0x11abea000 in 
 Object.wait() [11abe9000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.AntiEntropyService$RepairSession.runMayThrow(AntiEntropyService.java:695)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:680)
 http-8983-1 daemon prio=5 tid=7fe97d24d000 nid=0x11a5c8000 in Object.wait() 
 [11a5c6000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2442)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 

[jira] [Commented] (CASSANDRA-5619) CAS UPDATE for a lost race: save round trip by returning column values

2013-07-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700225#comment-13700225
 ] 

Jonathan Ellis commented on CASSANDRA-5619:
---

How does adding a boolean help with this example? Whether the row does not 
exist or v=null, we'd return cas-failed either way, but we still need to 
distinguish what the existing value is.

 CAS UPDATE for a lost race: save round trip by returning column values
 --

 Key: CASSANDRA-5619
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5619
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0 beta 1
Reporter: Blair Zajac
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 5619_thrift_fixup.txt, 5619.txt


 Looking at the new CAS CQL3 support examples [1], if one lost a race for an 
 UPDATE, could the columns that were used in the IF clause also be returned to 
 the caller, to save a round trip to fetch the current values and decide whether 
 you still need to perform your work?  Maybe the column values from the SET part 
 could also be returned.
 I don't know if this is generally useful though.
 In the case of creating a new user account with a given username which is the 
 partition key, if one lost the race to another person creating an account 
 with the same username, it doesn't matter to the loser what the column values 
 are, just that they lost.
 I'm new to Cassandra, so maybe there are other use cases, such as doing an 
 incremental amount of work on a row.  In pure Java projects I've written while 
 loops around AtomicReference#compareAndSet() until the work was done on the 
 referenced object, to let multiple threads each make forward progress in 
 updating the referenced object.
 [1] https://github.com/riptano/cassandra-dtest/blob/master/cql_tests.py#L3044
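
As a side note for readers unfamiliar with the pattern the reporter mentions, 
here is a minimal, generic Java sketch of a compare-and-set retry loop around 
AtomicReference; the counter and increment are illustrative only and unrelated 
to Cassandra's CAS implementation.

{code}
import java.util.concurrent.atomic.AtomicReference;

public class CasRetryLoop
{
    public static void main(String[] args)
    {
        AtomicReference<Integer> ref = new AtomicReference<>(0);

        // Optimistic update: read the current value, compute the new one,
        // and retry if another thread won the race in between.
        while (true)
        {
            Integer current = ref.get();
            Integer updated = current + 1;
            if (ref.compareAndSet(current, updated))
                break; // we won the race; the update is now visible to all threads
        }

        System.out.println(ref.get()); // prints 1
    }
}
{code}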

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5715) CAS on 'primary key only' table

2013-07-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700240#comment-13700240
 ] 

Jonathan Ellis edited comment on CASSANDRA-5715 at 7/4/13 5:26 PM:
---

I guess we could restrict it to UPDATE only gets IF [value], and INSERT only 
gets IF NOT EXISTS...

  was (Author: jbellis):
I guess we could restrict it to UPDATE only gets IF {value}, and INSERT 
only gets IF NOT EXISTS...
  
 CAS on 'primary key only' table
 ---

 Key: CASSANDRA-5715
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0

 Attachments: 0001-Conditions-on-INSERT.txt, 
 0002-Support-updating-the-PK-only.txt


 Given a table with only a primary key, like
 {noformat}
 CREATE TABLE test (k int PRIMARY KEY)
 {noformat}
 there is currently no way to CAS a row in that table into existence because:
 # INSERT doesn't currently support IF
 # UPDATE has no way to update such a table
 So we should probably allow IF conditions on INSERT statements.
 In addition (or alternatively), we could work on allowing UPDATE to update 
 such a table. One motivation for that could be to make UPDATE always be more 
 general than INSERT. That is, currently there are a bunch of operations that 
 INSERT cannot do (counter increments, collection appends), but that 
 primary-key-only table case is, afaik, the only case where you *need* to use 
 INSERT. However, because CQL forces the PK values into the WHERE clause and 
 not the SET one, the only syntax that I can see working would be:
 {noformat}
 UPDATE WHERE k=0;
 {noformat}
 which maybe is too ugly to allow?
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5715) CAS on 'primary key only' table

2013-07-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700240#comment-13700240
 ] 

Jonathan Ellis commented on CASSANDRA-5715:
---

I guess we could restrict it to UPDATE only gets IF {value}, and INSERT only 
gets IF NOT EXISTS...

 CAS on 'primary key only' table
 ---

 Key: CASSANDRA-5715
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0

 Attachments: 0001-Conditions-on-INSERT.txt, 
 0002-Support-updating-the-PK-only.txt


 Given a table with only a primary key, like
 {noformat}
 CREATE TABLE test (k int PRIMARY KEY)
 {noformat}
 there is currently no way to CAS a row in that table into existence because:
 # INSERT doesn't currently support IF
 # UPDATE has no way to update such a table
 So we should probably allow IF conditions on INSERT statements.
 In addition (or alternatively), we could work on allowing UPDATE to update 
 such a table. One motivation for that could be to make UPDATE always be more 
 general than INSERT. That is, currently there are a bunch of operations that 
 INSERT cannot do (counter increments, collection appends), but that 
 primary-key-only table case is, afaik, the only case where you *need* to use 
 INSERT. However, because CQL forces the PK values into the WHERE clause and 
 not the SET one, the only syntax that I can see working would be:
 {noformat}
 UPDATE WHERE k=0;
 {noformat}
 which maybe is too ugly to allow?
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5725) Silently failing messages in case of schema not fully propagated

2013-07-04 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700246#comment-13700246
 ] 

Jonathan Ellis commented on CASSANDRA-5725:
---

IMO the fix here is to special case UnknownColumnFamilyException so that it 
gets logged at INFO or WARN instead of being swallowed by the default 
IOException handler, which it currently subclasses.
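
A rough sketch of the kind of special-casing being suggested (illustrative 
only, not the eventual patch; the handler structure, the nested stand-in types, 
and the log wording are assumptions):

{code}
import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class IncomingMessageHandlerSketch
{
    private static final Logger logger = LoggerFactory.getLogger(IncomingMessageHandlerSketch.class);

    // Hypothetical stand-in for the real exception, which subclasses IOException.
    static class UnknownColumnFamilyException extends IOException
    {
        UnknownColumnFamilyException(String message) { super(message); }
    }

    // Hypothetical stand-in for the message-deserialization step.
    interface MessageBody
    {
        void deserializeAndProcess() throws IOException;
    }

    void handle(MessageBody body)
    {
        try
        {
            body.deserializeAndProcess();
        }
        catch (UnknownColumnFamilyException e)
        {
            // Surface schema-propagation problems explicitly instead of letting
            // them disappear into the generic socket-error path.
            logger.warn("Dropping message for unknown column family (schema not yet propagated?): {}", e.getMessage());
        }
        catch (IOException e)
        {
            logger.info("IOException reading from socket; closing", e);
        }
    }
}
{code}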

 Silently failing messages in case of schema not fully propagated
 

 Key: CASSANDRA-5725
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5725
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Sergio Bossa

 When a new keyspace and/or column family is created on a multi-node cluster 
 (at least three nodes), and a mutation is then executed on that new column family, 
 the operation sometimes silently fails by timing out.
 I tracked this down to the schema not being fully propagated to all nodes. 
 Here's what happens:
 1) Node 1 receives the create keyspace/column family request.
 2) The same node receives a mutation request at CL.QUORUM and sends to other 
 nodes too.
 3) Upon receiving the mutation request, other nodes try to deserialize it and 
 fail in doing so if the schema is not fully propagated, i.e. because they 
 don't find the mutated column family.
 4) The connection between node 1 and the failed node is dropped, and the 
 request on the former hangs until timing out.
 Here is the underlying exception, I had to tweak several log levels to get 
 it: 
 {noformat}
 INFO 13:11:39,441 IOException reading from socket; closing
 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find 
 cfId=a31c7604-0e40-393b-82d7-ba3d910ad50a
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:184)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:94)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:397)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:407)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:367)
   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:94)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:207)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:139)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
 {noformat}
 Finally, there's probably a correlated failure happening during repairs of 
 newly created/mutated column family, causing the repair process to hang 
 forever as follows:
 {noformat}
 AntiEntropySessions:1 daemon prio=5 tid=7fe981148000 nid=0x11abea000 in 
 Object.wait() [11abe9000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c6200840 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.AntiEntropyService$RepairSession.runMayThrow(AntiEntropyService.java:695)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:680)
 http-8983-1 daemon prio=5 tid=7fe97d24d000 nid=0x11a5c8000 in Object.wait() 
 [11a5c6000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
   at java.lang.Object.wait(Object.java:485)
   at 
 org.apache.cassandra.utils.SimpleCondition.await(SimpleCondition.java:34)
   - locked 7c620db58 (a org.apache.cassandra.utils.SimpleCondition)
   at 
 org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2442)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at 

[jira] [Commented] (CASSANDRA-4131) Integrate Hive support to be in core cassandra

2013-07-04 Thread Rohit Rai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700294#comment-13700294
 ] 

Rohit Rai commented on CASSANDRA-4131:
--

Actually, the Hive support internally uses the Cassandra Hadoop InputFormat... and 
thankfully we now have CqlPagingInputFormat support in 1.2.6.

So I have got the basic CQL3 column family support (reading) in, and it is 
working. I haven't done extensive testing and need to write some test cases... 
But I could run it with CQL column families with simple as well as composite 
primary keys. The code is here if you want to give it a try:
https://github.com/milliondreams/hive/tree/cas-support-cql
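
For anyone who wants to experiment with the branch, wiring a Hadoop job to read 
a CQL3 table via CqlPagingInputFormat generally looks something like the sketch 
below. The keyspace, table, contact point, and partitioner are placeholders, 
and the exact class and helper names should be checked against the 1.2.6 
javadoc rather than taken from here.

{code}
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlPagingInputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CqlInputJobSketch
{
    public static void main(String[] args) throws Exception
    {
        Job job = new Job(new Configuration(), "cql3-read-sketch");
        job.setInputFormatClass(CqlPagingInputFormat.class);

        Configuration conf = job.getConfiguration();
        // Placeholder keyspace, table, contact point and partitioner -- adjust for your cluster.
        ConfigHelper.setInputColumnFamily(conf, "my_keyspace", "my_table");
        ConfigHelper.setInputInitialAddress(conf, "127.0.0.1");
        ConfigHelper.setInputRpcPort(conf, "9160");
        ConfigHelper.setInputPartitioner(conf, "org.apache.cassandra.dht.Murmur3Partitioner");

        // Mapper/reducer/output setup omitted; then:
        // job.waitForCompletion(true);
    }
}
{code}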



 Integrate Hive support to be in core cassandra
 --

 Key: CASSANDRA-4131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4131
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeremy Hanna
Assignee: Edward Capriolo
  Labels: hadoop, hive

 The standalone hive support (at https://github.com/riptano/hive) would be 
 great to have in-tree so that people don't have to go out to github to 
 download it and wonder if it's a left-for-dead external shim.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5697) cqlsh doesn't allow semicolons in BATCH statements

2013-07-04 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5697:
-

Attachment: 5697.txt

 cqlsh doesn't allow semicolons in BATCH statements
 --

 Key: CASSANDRA-5697
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5697
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.0
 Environment: Mac OSX, cqlsh 3.0.2
Reporter: Russell Alexander Spitzer
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cqlsh
 Attachments: 5697.txt


 The documentation for BATCH statements declares that semicolons are required 
 between update operations. Currently, including them results in an error: 
 'expecting K_APPLY'. To match the design specification, semicolons should be 
 allowed, or at least optional. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5534) Writing wide row causes high CPU usage after compaction

2013-07-04 Thread Alex Zarutin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Zarutin updated CASSANDRA-5534:


Tester: alexzar  (was: enigmacurry)

 Writing wide row causes high CPU usage after compaction
 ---

 Key: CASSANDRA-5534
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5534
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 2.0 beta 1
Reporter: Ryan McGuire
Assignee: Sylvain Lebresne
 Fix For: 2.0

 Attachments: wide_row_stress.trunk.log.txt.gz


 Introduced in commit -e74c13ff08663d306dcc5cdc99c07e9e6c12ca21- (see below) 
 there is a significant slowdown when creating a wide row with 
 cassandra-stress:
 Testing with the prior (good) commit, I used this to write a single wide row, 
 which completed rather quickly:
 {code}
 $ ccm create -v git:60f09f0121e0801851b9ab017eddf7e326fa05fb wide-row
 Fetching Cassandra updates...
 Cloning Cassandra (from local cache)
 Checking out requested branch (60f09f0121e0801851b9ab017eddf7e326fa05fb)
 Compiling Cassandra 60f09f0121e0801851b9ab017eddf7e326fa05fb ...
 Current cluster is now: wide-row
 $ ccm populate -n 1
 $ ccm start
 $ time ccm node1 stress -c 1 -S 1000 -n 1
 Created keyspaces. Sleeping 1s for propagation.
 total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
 1,0,0,273.3,273.3,273.3,0
 END
 real  0m7.106s
 user  0m1.710s
 sys   0m0.120s
 {code}
 Using the bugged commit (e74c13ff08663d306dcc5cdc99c07e9e6c12ca21) I get a 
 significant slowdown:
 {code}
 02:42 PM:~$ ccm create -v git:e74c13ff08663d306dcc5cdc99c07e9e6c12ca21 
 wide-row
 Fetching Cassandra updates...
 Current cluster is now: wide-row
 02:42 PM:~$ ccm populate -n 1
 02:42 PM:~$ ccm start
 02:42 PM:~$ time ccm node1 stress -c 1 -S 1000 -n 1
 Created keyspaces. Sleeping 1s for propagation.
 total,interval_op_rate,interval_key_rate,latency,95th,99th,elapsed_time
 1,0,0,423.2,423.2,423.2,0
 Total operation time  : 00:00:00
 END
 real  4m16.394s
 user  0m2.230s
 sys   0m0.137s
 {code}
 Interestingly, the commit in question just says it's a merge from 
 cassandra-1.2, but I do not see this same slowdown using that branch; this 
 only occurs in trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5534) Writing wide row causes high CPU usage after compaction

2013-07-04 Thread Alex Zarutin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13700424#comment-13700424
 ] 

Alex Zarutin commented on CASSANDRA-5534:
-

latest test on cassandra-1.2:

$ ccm create -v git:cassandra-1.2 test-1.2.6
Fetching Cassandra updates...
Cloning Cassandra (from local cache)
Checking out requested branch (cassandra-1.2)
Compiling Cassandra cassandra-1.2 ...
Current cluster is now: test-1.2.6

$ ccm populate -n 1
$ ccm start

$ time ccm node1 stress -c 1 -S 1000 -n 1
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
1,0,0,233.6,233.6,233.6,0
END

real0m6.539s
user0m1.714s
sys 0m0.137s

latest test on trunk:

$ ccm create -v git:trunk test-trunk
Fetching Cassandra updates...
Cloning Cassandra (from local cache)
Checking out requested branch (trunk)
Compiling Cassandra trunk ...
Current cluster is now: test-trunk

$ ccm populate -n 1
$ ccm start

$ time ccm node1 stress -c 1 -S 1000 -n 1
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency,95th,99th,elapsed_time
1,0,0,343.1,343.1,343.1,0


Total operation time  : 00:00:00
END

real0m7.333s
user0m1.945s
sys 0m0.136s


 Writing wide row causes high CPU usage after compaction
 ---

 Key: CASSANDRA-5534
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5534
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 2.0 beta 1
Reporter: Ryan McGuire
Assignee: Sylvain Lebresne
 Fix For: 2.0

 Attachments: wide_row_stress.trunk.log.txt.gz


 Introduced in commit -e74c13ff08663d306dcc5cdc99c07e9e6c12ca21- (see below) 
 there is a significant slowdown when creating a wide row with 
 cassandra-stress:
 Testing with the prior (good) commit, I used this to write a single wide row, 
 which completed rather quickly:
 {code}
 $ ccm create -v git:60f09f0121e0801851b9ab017eddf7e326fa05fb wide-row
 Fetching Cassandra updates...
 Cloning Cassandra (from local cache)
 Checking out requested branch (60f09f0121e0801851b9ab017eddf7e326fa05fb)
 Compiling Cassandra 60f09f0121e0801851b9ab017eddf7e326fa05fb ...
 Current cluster is now: wide-row
 $ ccm populate -n 1
 $ ccm start
 $ time ccm node1 stress -c 1 -S 1000 -n 1
 Created keyspaces. Sleeping 1s for propagation.
 total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
 1,0,0,273.3,273.3,273.3,0
 END
 real  0m7.106s
 user  0m1.710s
 sys   0m0.120s
 {code}
 Using the bugged commit (e74c13ff08663d306dcc5cdc99c07e9e6c12ca21) I get a 
 significant slowdown:
 {code}
 02:42 PM:~$ ccm create -v git:e74c13ff08663d306dcc5cdc99c07e9e6c12ca21 
 wide-row
 Fetching Cassandra updates...
 Current cluster is now: wide-row
 02:42 PM:~$ ccm populate -n 1
 02:42 PM:~$ ccm start
 02:42 PM:~$ time ccm node1 stress -c 1 -S 1000 -n 1
 Created keyspaces. Sleeping 1s for propagation.
 total,interval_op_rate,interval_key_rate,latency,95th,99th,elapsed_time
 1,0,0,423.2,423.2,423.2,0
 Total operation time  : 00:00:00
 END
 real  4m16.394s
 user  0m2.230s
 sys   0m0.137s
 {code}
 Interestingly, the commit in question just says it's a merge from 
 cassandra-1.2, but I do not see this same slowdown using that branch; this 
 only occurs in trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira