[jira] [Created] (CASSANDRA-2686) Distributed per row locks

2011-05-23 Thread JIRA
Distributed per row locks
-

 Key: CASSANDRA-2686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2686
 Project: Cassandra
  Issue Type: Wish
  Components: Contrib
Affects Versions: 0.7.4
 Environment: any
Reporter: Luís Ferreira
 Fix For: 0.7.4


Instead of using a centralized locking strategy like Cages with ZooKeeper, I
would like to have it in a decentralized way, even if that carries some
limitations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-2686) Distributed per row locks

2011-05-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13037965#comment-13037965
 ] 

Luís Ferreira commented on CASSANDRA-2686:
--

Is there any way to do this? Can a consensus be achieved between the nodes
responsible for a key? Maybe by creating a new kind of column with write-once
properties, and by using something like hinted handoff of locks if a node is
down at the time of locking.

I am going to try to implement this, but it would be really good to get some
ideas on how to do it.
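
One way to picture the write-once/quorum idea (all class and method names
below are hypothetical stand-ins, not Cassandra internals): each replica keeps
a write-once lock register per row key, and the lock counts as acquired only
if a majority of the replicas for that key grant it.
{code}
// Hypothetical sketch: a row lock is held only if a majority of the replicas
// responsible for the key grant it. ReplicaStub and QuorumRowLock are
// illustrative, not Cassandra classes.
import java.util.List;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class ReplicaStub
{
    // one write-once lock register per row key: the first writer wins
    private final ConcurrentHashMap<String, UUID> locks =
            new ConcurrentHashMap<String, UUID>();

    boolean grantLock(String rowKey, UUID owner)
    {
        return locks.putIfAbsent(rowKey, owner) == null;
    }

    void releaseLock(String rowKey, UUID owner)
    {
        locks.remove(rowKey, owner); // removes only if still held by owner
    }
}

class QuorumRowLock
{
    static boolean tryLock(List<ReplicaStub> replicas, String rowKey, UUID owner)
    {
        int granted = 0;
        for (ReplicaStub r : replicas)
            if (r.grantLock(rowKey, owner))
                granted++;
        if (granted > replicas.size() / 2)
            return true; // a majority agreed on the owner
        for (ReplicaStub r : replicas)
            r.releaseLock(rowKey, owner); // back off so others can retry
        return false;
    }
}
{code}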

 Distributed per row locks
 -

 Key: CASSANDRA-2686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2686
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Affects Versions: 0.7.4
 Environment: any
Reporter: Luís Ferreira
  Labels: api-addition, features

 Instead of using a centralized locking strategy like Cages with ZooKeeper, I
 would like to have it in a decentralized way, even if that carries some
 limitations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-2686) Distributed per row locks

2011-05-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luís Ferreira updated CASSANDRA-2686:
-

Affects Version/s: (was: 0.7.4)

 Distributed per row locks
 -

 Key: CASSANDRA-2686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2686
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: any
Reporter: Luís Ferreira
  Labels: api-addition, features

 Instead of using a centralized locking strategy like Cages with ZooKeeper, I
 would like to have it in a decentralized way, even if that carries some
 limitations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-2686) Distributed per row locks

2011-05-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13038071#comment-13038071
 ] 

Luís Ferreira commented on CASSANDRA-2686:
--

Yes, but this uses a centralized or semi-centralized approach, in which
ZooKeeper controls who has the lock. I would like all the (up) nodes to reach
an agreement on who has the lock. I don't know if this is possible, and if so,
at what price...

 Distributed per row locks
 -

 Key: CASSANDRA-2686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2686
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: any
Reporter: Luís Ferreira
  Labels: api-addition, features

 Instead of using a centralized locking strategy like Cages with ZooKeeper, I
 would like to have it in a decentralized way, even if that carries some
 limitations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-2686) Distributed per row locks

2011-05-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13038090#comment-13038090
 ] 

Luís Ferreira commented on CASSANDRA-2686:
--

Maybe I misunderstood, but to get locks with ZK, every node has to make its
presence known to ZK and ask ZK for a certain lock (which I think is what
Cages does). Can't this be a bottleneck?

I know consensus can't be achieved with one failing node, hence the need for
something like ZK. Still, isn't there a way to do this using something like
hinted handoff?

Maybe I haven't explained my idea correctly. I'd like to have locks, but
maintain the general structure of a Cassandra cluster and change the kind of
messages the nodes send as little as possible.

 Distributed per row locks
 -

 Key: CASSANDRA-2686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2686
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: any
Reporter: Luís Ferreira
  Labels: api-addition, features

 Instead of using a centralized locking strategy like Cages with ZooKeeper, I
 would like to have it in a decentralized way, even if that carries some
 limitations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-2686) Distributed per row locks

2011-05-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13038093#comment-13038093
 ] 

Luís Ferreira commented on CASSANDRA-2686:
--

You're probably right, and the only way to do this is by using some kind of
protocol like ZAB or Paxos. Still, if other ideas come up, that would be
great. Or even an explanation of why that's the only way...

 Distributed per row locks
 -

 Key: CASSANDRA-2686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2686
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: any
Reporter: Luís Ferreira
  Labels: api-addition, features

 Instead of using a centralized locking strategy like Cages with ZooKeeper, I
 would like to have it in a decentralized way, even if that carries some
 limitations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-2749) fine-grained control over data directories

2011-06-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046045#comment-13046045
 ] 

Héctor Izquierdo commented on CASSANDRA-2749:
-

What about making it configurable in a separate file, like the network
topology? Could that work as a first approximation?
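
For instance, something along these lines (file name and syntax are purely
hypothetical, by analogy with cassandra-topology.properties):
{code}
# hypothetical sstable-placement.properties: pin column families to a data
# directory; anything not listed falls back to the default
Keyspace1.HotCF=/mnt/ssd/cassandra/data
Keyspace1.ArchiveCF=/mnt/hdd/cassandra/data
default=/var/lib/cassandra/data
{code}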

 fine-grained control over data directories
 --

 Key: CASSANDRA-2749
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2749
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor

 Currently Cassandra supports multiple data directories, but there is no way
 to control which sstables are placed where. Particularly for systems with
 mixed SSDs and rotational disks, it would be nice to pin frequently accessed
 columnfamilies to the SSDs.
 PostgreSQL does this with tablespaces
 (http://www.postgresql.org/docs/9.0/static/manage-ag-tablespaces.html), but we
 should probably avoid using that name because of its confusing similarity to
 keyspaces.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-2634) .NET driver for CQL

2011-06-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13052708#comment-13052708
 ] 

Michal Augustýn commented on CASSANDRA-2634:


I'm very sorry - I'm now very busy at work and so I don't have time for this. I 
could probably implement that in late summer.

 .NET driver for CQL
 ---

 Key: CASSANDRA-2634
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2634
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Affects Versions: 0.8 beta 1
Reporter: Michal Augustýn
Priority: Minor

 The goal is to create an ADO.NET driver for Cassandra, using CQL. So we have to
 implement the
 [IDbConnection|http://msdn.microsoft.com/en-us/library/system.data.idbconnection.aspx]
 interface and all related interfaces (i.e.
 [IDbCommand|http://msdn.microsoft.com/en-us/library/system.data.idbcommand.aspx]).
 We must ensure that the connection is pooled.
 The implementation will probably be similar to CASSANDRA-1710.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2809) In the Cli, update column family cf with comparator; create Column metadata

2011-06-22 Thread JIRA
In the Cli, update column family cf with comparator; create Column metadata
-

 Key: CASSANDRA-2809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2809
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.8.1
 Environment: Ubuntu 10.10, 32bit
java version 1.6.0_24
installed from Debian packages of Brisk-beta2
Reporter: Silvère Lestang


Using cassandra-cli, I can't update the comparator of a column family to the
type I want, and when I did it with BytesType, column metadata appeared for
each of my existing columns.
Steps to reproduce:
{code}
[default@unknown] create keyspace Test
with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
and strategy_options = [{replication_factor:1}];

[default@unknown] use Test;
Authenticated to keyspace: Test

[default@Test] create column family test;

[default@Test] describe keyspace;
...
ColumnFamily: test
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.BytesType
  Row cache size / save period in seconds: 0.0/0
  Key cache size / save period in seconds: 20.0/14400
  Memtable thresholds: 0.571875/122/1440 (millions of ops/MB/minutes)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: false
  Built indexes: []
...

[default@Test] update column family test with comparator = 'LongType';
comparators do not match.
{code}
Why?? The CF is empty.
{code}
[default@Test] update column family test with comparator = 'BytesType';
f8e4dcb0-9cca-11e0--d0583497e7ff
Waiting for schema agreement...
... schemas agree across the cluster

[default@Test] describe keyspace;
...
ColumnFamily: test
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.BytesType
  Row cache size / save period in seconds: 0.0/0
  Key cache size / save period in seconds: 20.0/14400
  Memtable thresholds: 0.571875/122/1440 (millions of ops/MB/minutes)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: false
  Built indexes: []
...

[default@Test] set test[ascii('row1')][long(1)]=integer(35);
set test[ascii('row1')][long(2)]=integer(36);
set test[ascii('row1')][long(3)]=integer(38);
set test[ascii('row2')][long(1)]=integer(45);
set test[ascii('row2')][long(2)]=integer(42);
set test[ascii('row2')][long(3)]=integer(33);

[default@Test] list test;
Using default limit of 100
---
RowKey: 726f7731
=> (column=0001, value=35, timestamp=1308744931122000)
=> (column=0002, value=36, timestamp=1308744931124000)
=> (column=0003, value=38, timestamp=1308744931125000)
---
RowKey: 726f7732
=> (column=0001, value=45, timestamp=1308744931127000)
=> (column=0002, value=42, timestamp=1308744931128000)
=> (column=0003, value=33, timestamp=1308744932722000)

2 Rows Returned.

[default@Test] update column family test with comparator = 'LongType';
comparators do not match.
{code}
Same question as before: my columns contain only longs, so why can't I?

{code}
[default@Test] update column family test with comparator = 'BytesType';

[default@Test] describe keyspace;  
Keyspace: Test:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
Options: [replication_factor:1]
  Column Families:
ColumnFamily: test
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.BytesType
  Row cache size / save period in seconds: 0.0/0
  Key cache size / save period in seconds: 20.0/14400
  Memtable thresholds: 0.571875/122/1440 (millions of ops/MB/minutes)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: false
  Built indexes: []
  Column Metadata:
Column Name:  (0001)
  Validation Class: org.apache.cassandra.db.marshal.IntegerType
Column Name:  (0003)
  Validation Class: org.apache.cassandra.db.marshal.IntegerType
Column Name:  (0002)
  Validation Class: org.apache.cassandra.db.marshal.IntegerType
{code}
Column Metadata appears from nowhere. I don't think that's expected.


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Created] (CASSANDRA-2810) RuntimeException in Pig when using dump command on column name

2011-06-22 Thread JIRA
RuntimeException in Pig when using dump command on column name


 Key: CASSANDRA-2810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2810
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.1
 Environment: Ubuntu 10.10, 32 bits
java version 1.6.0_24
Brisk beta-2 installed from Debian packages
Reporter: Silvère Lestang


This bug was previously reported on the [Brisk bug
tracker|https://datastax.jira.com/browse/BRISK-232].

In cassandra-cli:
{code}
[default@unknown] create keyspace Test
with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
and strategy_options = [{replication_factor:1}];

[default@unknown] use Test;
Authenticated to keyspace: Test

[default@Test] create column family test;

[default@Test] set test[ascii('row1')][long(1)]=integer(35);
set test[ascii('row1')][long(2)]=integer(36);
set test[ascii('row1')][long(3)]=integer(38);
set test[ascii('row2')][long(1)]=integer(45);
set test[ascii('row2')][long(2)]=integer(42);
set test[ascii('row2')][long(3)]=integer(33);

[default@Test] list test;
Using default limit of 100
---
RowKey: 726f7731
=> (column=0001, value=35, timestamp=1308744931122000)
=> (column=0002, value=36, timestamp=1308744931124000)
=> (column=0003, value=38, timestamp=1308744931125000)
---
RowKey: 726f7732
=> (column=0001, value=45, timestamp=1308744931127000)
=> (column=0002, value=42, timestamp=1308744931128000)
=> (column=0003, value=33, timestamp=1308744932722000)

2 Rows Returned.

[default@Test] describe keyspace;
Keyspace: Test:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:1]
  Column Families:
ColumnFamily: test
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.BytesType
  Row cache size / save period in seconds: 0.0/0
  Key cache size / save period in seconds: 20.0/14400
  Memtable thresholds: 0.571875/122/1440 (millions of ops/MB/minutes)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: false
  Built indexes: []
{code}
In Pig command line:
{code}
grunt> test = LOAD 'cassandra://Test/test' USING CassandraStorage() AS
(rowkey:chararray, columns: bag {T: (name:long, value:int)});

grunt> value_test = foreach test generate rowkey, columns.name, columns.value;

grunt> dump value_test;
{code}
In /var/log/cassandra/system.log, I get this exception several times:
{code}
INFO [IPC Server handler 3 on 8012] 2011-06-22 15:03:28,533 TaskInProgress.java 
(line 551) Error from attempt_201106210955_0051_m_00_3: 
java.lang.RuntimeException: Unexpected data type -1 found in stream.
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:478)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:541)
at org.apache.pig.data.BinInterSedes.writeBag(BinInterSedes.java:522)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:361)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:541)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:357)
at 
org.apache.pig.impl.io.InterRecordWriter.write(InterRecordWriter.java:73)
at org.apache.pig.impl.io.InterStorage.putNext(InterStorage.java:87)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:138)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:97)
at 
org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:638)
at 
org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.runPipeline(PigMapBase.java:239)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.map(PigMapBase.java:232)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.map(PigMapBase.java:53)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
at java.security.AccessController.doPrivileged(Native Method

[jira] [Commented] (CASSANDRA-2686) Distributed per row locks

2011-06-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13053296#comment-13053296
 ] 

Luís Ferreira commented on CASSANDRA-2686:
--

I've taken your advice and used Cages for the read and write locks. On top of
that I built a transaction system for Cassandra. As soon as I have some
performance test results I'll post them here, along with the code, if anyone
is interested.

It basically implements a write-ahead log, taking advantage of the atomicity
of per-row updates and of idempotent updates. It also has a pre-processing
mechanism for transactions that do not know a priori the columns they will use
(when using indexes, for example).
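
For reference, the per-row locking pattern with Cages looks roughly like this
(a minimal sketch assuming Cages' ZkSessionManager/ZkWriteLock package and
acquire()/release() signatures; the lock path layout is just one possible
convention):
{code}
// Minimal sketch of per-row locking with Cages: serialize writers of a single
// row behind a ZooKeeper write lock. API and path are assumptions, not the
// poster's actual code.
import org.wyki.zookeeper.cages.ZkSessionManager;
import org.wyki.zookeeper.cages.ZkWriteLock;

public class RowLockExample
{
    public static void main(String[] args) throws Exception
    {
        ZkSessionManager.initializeInstance("zkhost:2181");
        ZkWriteLock lock = new ZkWriteLock("/Test/test/row1");
        lock.acquire();
        try
        {
            // append the write-ahead log entry, then apply the idempotent row update
        }
        finally
        {
            lock.release();
        }
    }
}
{code}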

 Distributed per row locks
 -

 Key: CASSANDRA-2686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2686
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: any
Reporter: Luís Ferreira
  Labels: api-addition, features

 Instead of using a centralized locking strategy like Cages with ZooKeeper, I
 would like to have it in a decentralized way, even if that carries some
 limitations.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2831) Creating or updating CF key_validation_class with the CLI doesn't work

2011-06-27 Thread JIRA
Creating or updating CF key_validation_class with the CLI doesn't work
---

 Key: CASSANDRA-2831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.2
 Environment: Ubuntu 10.10, 32 bits
java version 1.6.0_24
Brisk beta-2 installed from Debian packages
Reporter: Silvère Lestang


In the command line:
{code}
create column family test with key_validation_class = 'AsciiType' and 
comparator = 'LongType' and default_validation_class = 'IntegerType';
describe keyspace;
Keyspace: Test:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:1]
  Column Families:
ColumnFamily: test
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.LongType
  Row cache size / save period in seconds: 0.0/0
  Key cache size / save period in seconds: 20.0/14400
  Memtable thresholds: 0.571875/122/1440 (millions of ops/MB/minutes)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: false
  Built indexes: []
{code}
The Default column value validator is BytesType instead of IntegerType. Also
tested with other types and with the update column family command; the same
problem occurs.

{code}
[default@Test] update column family test with default_validation_class = 
'LongType';
51a37430-a0bb-11e0--ef8993101fdf
Waiting for schema agreement...
... schemas agree across the cluster
[default@Test] describe keyspace;   

Keyspace: Test:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:1]
  Column Families:
ColumnFamily: test
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.LongType
  Row cache size / save period in seconds: 0.0/0
  Key cache size / save period in seconds: 20.0/14400
  Memtable thresholds: 0.571875/122/1440 (millions of ops/MB/minutes)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: false
  Built indexes: []
{code}

Btw, there is a typo in file
src/resources/org/apache/cassandra/cli/CliHelp.yaml line 642:
key_valiation_class -> key_validation_class.
Very annoying for people like me who stupidly copy/paste the help.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2810) RuntimeException in Pig when using dump command on column name

2011-06-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13055521#comment-13055521
 ] 

Silvère Lestang commented on CASSANDRA-2810:


I tried again after applying [^2810.txt] and the patch from bug
[CASSANDRA-2777], and the bug is still there.
With the patch, you need to replace
{code}
test = LOAD 'cassandra://Test/test' USING CassandraStorage() AS
(rowkey:chararray, columns: bag {T: (name:long, value:int)});
{code}
with
{code}
test = LOAD 'cassandra://Test/test' USING CassandraStorage() AS ();
{code}
because CassandraStorage takes care of the schema.

I tried:
{code}
grunt> describe test;
test: {key: chararray,columns: {(name: long,value: int)}}
{code}
so we can see that the patch from bug 2777 works correctly (I also tested with
different types for value).
But when I dump test, I still get the same exception.

 RuntimeException in Pig when using dump command on column name
 

 Key: CASSANDRA-2810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2810
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.1
 Environment: Ubuntu 10.10, 32 bits
 java version 1.6.0_24
 Brisk beta-2 installed from Debian packages
Reporter: Silvère Lestang
Assignee: Brandon Williams
 Attachments: 2810.txt


 This bug was previously reported on the [Brisk bug
 tracker|https://datastax.jira.com/browse/BRISK-232].
 In cassandra-cli:
 {code}
 [default@unknown] create keyspace Test
 with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
 and strategy_options = [{replication_factor:1}];
 [default@unknown] use Test;
 Authenticated to keyspace: Test
 [default@Test] create column family test;
 [default@Test] set test[ascii('row1')][long(1)]=integer(35);
 set test[ascii('row1')][long(2)]=integer(36);
 set test[ascii('row1')][long(3)]=integer(38);
 set test[ascii('row2')][long(1)]=integer(45);
 set test[ascii('row2')][long(2)]=integer(42);
 set test[ascii('row2')][long(3)]=integer(33);
 [default@Test] list test;
 Using default limit of 100
 ---
 RowKey: 726f7731
 => (column=0001, value=35, timestamp=1308744931122000)
 => (column=0002, value=36, timestamp=1308744931124000)
 => (column=0003, value=38, timestamp=1308744931125000)
 ---
 RowKey: 726f7732
 => (column=0001, value=45, timestamp=1308744931127000)
 => (column=0002, value=42, timestamp=1308744931128000)
 => (column=0003, value=33, timestamp=1308744932722000)
 2 Rows Returned.
 [default@Test] describe keyspace;
 Keyspace: Test:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:1]
   Column Families:
 ColumnFamily: test
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: 
 org.apache.cassandra.db.marshal.BytesType
   Columns sorted by: org.apache.cassandra.db.marshal.BytesType
   Row cache size / save period in seconds: 0.0/0
   Key cache size / save period in seconds: 20.0/14400
   Memtable thresholds: 0.571875/122/1440 (millions of ops/MB/minutes)
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 1.0
   Replicate on write: false
   Built indexes: []
 {code}
 In Pig command line:
 {code}
 grunt> test = LOAD 'cassandra://Test/test' USING CassandraStorage() AS
 (rowkey:chararray, columns: bag {T: (name:long, value:int)});
 grunt> value_test = foreach test generate rowkey, columns.name, columns.value;
 grunt> dump value_test;
 {code}
 In /var/log/cassandra/system.log, I get this exception several times:
 {code}
 INFO [IPC Server handler 3 on 8012] 2011-06-22 15:03:28,533 
 TaskInProgress.java (line 551) Error from 
 attempt_201106210955_0051_m_00_3: java.lang.RuntimeException: Unexpected 
 data type -1 found in stream.
   at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:478)
   at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:541)
   at org.apache.pig.data.BinInterSedes.writeBag(BinInterSedes.java:522)
   at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:361)
   at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:541)
   at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:357)
   at 
 org.apache.pig.impl.io.InterRecordWriter.write(InterRecordWriter.java:73)
   at org.apache.pig.impl.io.InterStorage.putNext(InterStorage.java:87)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:138

[jira] [Commented] (CASSANDRA-2810) RuntimeException in Pig when using dump command on column name

2011-06-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13055585#comment-13055585
 ] 

Silvère Lestang commented on CASSANDRA-2810:


After more testing (with both patches), patch [^2810.txt] doesn't seem to
solve the bug.
Here is a new test case:
Create a _Test_ keyspace and a _test_ column family with key_validation_class =
'AsciiType' and comparator = 'LongType' and default_validation_class =
'IntegerType' (don't use the cli because of [#CASSANDRA-2831]).
Insert some data:
{code}
set test[ascii('row1')][long(1)]=integer(35);
set test[ascii('row1')][long(2)]=integer(36);
set test[ascii('row1')][long(3)]=integer(38);
set test[ascii('row2')][long(1)]=integer(45);
set test[ascii('row2')][long(2)]=integer(42);
set test[ascii('row2')][long(3)]=integer(33);
{code}

In Pig cli:
{code}
test = LOAD 'cassandra://Test/test' USING CassandraStorage() AS ();
dump test;
{code}
The same exception as before is raised:
{code}
 INFO [IPC Server handler 4 on 8012] 2011-06-27 16:40:28,562 
TaskInProgress.java (line 551) Error from attempt_201106271436_0012_m_00_1: 
java.lang.RuntimeException: Unexpected data type -1 found in stream.
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:478)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:541)
at org.apache.pig.data.BinInterSedes.writeBag(BinInterSedes.java:522)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:361)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:541)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:357)
at 
org.apache.pig.impl.io.InterRecordWriter.write(InterRecordWriter.java:73)
at org.apache.pig.impl.io.InterStorage.putNext(InterStorage.java:87)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:138)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:97)
at 
org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:638)
at 
org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.map(PigMapBase.java:224)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.map(PigMapBase.java:53)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.Child.main(Child.java:253)

{code}

 RuntimeException in Pig when using dump command on column name
 

 Key: CASSANDRA-2810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2810
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.1
 Environment: Ubuntu 10.10, 32 bits
 java version 1.6.0_24
 Brisk beta-2 installed from Debian packages
Reporter: Silvère Lestang
Assignee: Brandon Williams
 Attachments: 2810.txt


 This bug was previously reported on the [Brisk bug
 tracker|https://datastax.jira.com/browse/BRISK-232].
 In cassandra-cli:
 {code}
 [default@unknown] create keyspace Test
 with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
 and strategy_options = [{replication_factor:1}];
 [default@unknown] use Test;
 Authenticated to keyspace: Test
 [default@Test] create column family test;
 [default@Test] set test[ascii('row1')][long(1)]=integer(35);
 set test[ascii('row1')][long(2)]=integer(36);
 set test[ascii('row1')][long(3)]=integer(38);
 set test[ascii('row2')][long(1)]=integer(45);
 set test[ascii('row2')][long(2)]=integer(42);
 set test[ascii('row2')][long(3)]=integer(33);
 [default@Test] list test;
 Using default limit of 100
 ---
 RowKey: 726f7731
 => (column=0001, value=35, timestamp=1308744931122000)
 => (column=0002, value=36, timestamp=1308744931124000)
 => (column=0003, value=38, timestamp=1308744931125000)
 ---
 RowKey: 726f7732
 => (column=0001, value=45, timestamp=1308744931127000)
 => (column=0002, value=42, timestamp

[jira] [Commented] (CASSANDRA-2475) Prepared statements

2011-06-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13057082#comment-13057082
 ] 

Michal Augustýn commented on CASSANDRA-2475:


It would be great if there were an overload like this, in order to eliminate
one client-server roundtrip:
{noformat}CqlResult execute_cql_query(1:binary query, 2:list<binary>
parameters, 3:Compression compression);{noformat}
In many applications there are only a few queries (hundreds at most?), so I
think the _handle_ could be cached server-side (we could limit the cache size
via configuration).
And do you/we plan to support named parameters?
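
The server-side handle cache could be as simple as a bounded, access-ordered
LinkedHashMap; a sketch of the idea (class and field names hypothetical, not
actual Cassandra code):
{code}
// Hypothetical bounded LRU cache for prepared-statement handles, sketching
// the "cache server-side, cap via configuration" idea above. Maps a handle
// id to the stored query text.
import java.util.LinkedHashMap;
import java.util.Map;

class PreparedStatementCache extends LinkedHashMap<Integer, String>
{
    private final int maxEntries;

    PreparedStatementCache(int maxEntries)
    {
        super(16, 0.75f, true); // access order, so iteration starts at the LRU entry
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest)
    {
        return size() > maxEntries; // evict the least recently used handle
    }
}
{code}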

 Prepared statements
 ---

 Key: CASSANDRA-2475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2475
 Project: Cassandra
  Issue Type: Sub-task
  Components: API, Core
Reporter: Eric Evans
  Labels: cql
 Fix For: 1.0




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2768) AntiEntropyService excluding nodes that are on version 0.7 or sooner

2011-06-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13058061#comment-13058061
 ] 

Héctor Izquierdo commented on CASSANDRA-2768:
-

I'm on 0.8.1, upgrading from 0.7.6-2, and I have stumbled upon this bug. I
can't run repair on a node whose disk broke.

INFO [manual-repair-02182a20-5659-4aa0-aab9-2fff430f8a71] 2011-06-30 
20:29:51,487 AntiEntropyService.java (line 179) Excluding /10.20.13.80 from 
repair because it is on version 0.7 or sooner. You should consider updating 
this node before running repair again.
 INFO [manual-repair-6b50f51a-f689-4825-bcb9-bebf68664117] 2011-06-30 
20:29:51,487 AntiEntropyService.java (line 179) Excluding /10.20.13.76 from 
repair because it is on version 0.7 or sooner. You should consider updating 
this node before running repair again.
 INFO [manual-repair-6b50f51a-f689-4825-bcb9-bebf68664117] 2011-06-30 
20:29:51,487 AntiEntropyService.java (line 179) Excluding /10.20.13.77 from 
repair because it is on version 0.7 or sooner. You should consider updating 
this node before running repair again.
 INFO [manual-repair-02182a20-5659-4aa0-aab9-2fff430f8a71] 2011-06-30 
20:29:51,487 AntiEntropyService.java (line 179) Excluding /10.20.13.76 from 
repair because it is on version 0.7 or sooner. You should consider updating 
this node before running repair again.
 INFO [manual-repair-c46dd589-ed22-4b7a-809c-d97c094d2354] 2011-06-30 
20:29:51,487 AntiEntropyService.java (line 179) Excluding /10.20.13.80 from 
repair because it is on version 0.7 or sooner. You should consider updating 
this node before running repair again.
 INFO [manual-repair-c46dd589-ed22-4b7a-809c-d97c094d2354] 2011-06-30 
20:29:51,488 AntiEntropyService.java (line 179) Excluding /10.20.13.79 from 
repair because it is on version 0.7 or sooner. You should consider updating 
this node before running repair again.
 INFO [manual-repair-02182a20-5659-4aa0-aab9-2fff430f8a71] 2011-06-30 
20:29:51,488 AntiEntropyService.java (line 782) No neighbors to repair with for 
sbs on 
(141784319550391026443072753096570088105,170141183460469231731687303715884105727]:
 manual-repair-02182a20-5659-4aa0-aab9-2fff430f8a71 completed.
 INFO [manual-repair-6b50f51a-f689-4825-bcb9-bebf68664117] 2011-06-30 
20:29:51,487 AntiEntropyService.java (line 782) No neighbors to repair with for 
sbs on 
(170141183460469231731687303715884105727,28356863910078205288614550619314017621]:
 manual-repair-6b50f51a-f689-4825-bcb9-bebf68664117 completed.
 INFO [manual-repair-c46dd589-ed22-4b7a-809c-d97c094d2354] 2011-06-30 
20:29:51,488 AntiEntropyService.java (line 782) No neighbors to repair with for 
sbs on 
(113427455640312821154458202477256070484,141784319550391026443072753096570088105]:
 manual-repair-c46dd589-ed22-4b7a-809c-d97c094d2354 completed.


All nodes are on 0.8.1

 AntiEntropyService excluding nodes that are on version 0.7 or sooner
 

 Key: CASSANDRA-2768
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2768
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
 Environment: 4 node environment -- 
 Originally 0.7.6-2 with a Keyspace defined with RF=3
 Upgraded all nodes ( 1 at a time ) to version 0.8.0:  For each node, the node 
 was shut down, new version was turned on, using the existing data files / 
 directories and a nodetool repair was run.  
Reporter: Sasha Dolgy

 When I run nodetool repair on any of the nodes, the 
 /var/log/cassandra/system.log reports errors similar to:
 INFO [manual-repair-1c6b33bc-ef14-4ec8-94f6-f1464ec8bdec] 2011-06-13 
 21:28:39,877 AntiEntropyService.java (line 177) Excluding /10.128.34.18 from 
 repair because it is on version 0.7 or sooner. You should consider updating 
 this node before running repair again.
 ERROR [manual-repair-1c6b33bc-ef14-4ec8-94f6-f1464ec8bdec] 2011-06-13 
 21:28:39,877 AbstractCassandraDaemon.java (line 113) Fatal exception in 
 thread Thread[manual-repair-1c6b33bc-ef14-4ec8-94f6-f1464ec8bdec,5,RMI 
 Runtime]
 java.util.ConcurrentModificationException
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
   at java.util.HashMap$KeyIterator.next(HashMap.java:828)
   at 
 org.apache.cassandra.service.AntiEntropyService.getNeighbors(AntiEntropyService.java:173)
   at 
 org.apache.cassandra.service.AntiEntropyService$RepairSession.run(AntiEntropyService.java:776)
 The INFO message and subsequent ERROR message are logged for 2 nodes .. I 
 suspect that this is because RF=3.  
 nodetool ring shows that all nodes are up.  
 Client connections (read / write) are not having issues..  
 nodetool version on all nodes shows that each node is 0.8.0
 At suggestion of some contributors, I have restarted each node and tried to 
 run a nodetool repair

[jira] [Created] (CASSANDRA-2846) Changing replication_factor using update keyspace not working

2011-07-01 Thread JIRA
Changing replication_factor using update keyspace not working
---

 Key: CASSANDRA-2846
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2846
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.1
 Environment: A clean 0.8.1 install using the default configuration
Reporter: Jonas Borgström


Unless I've misunderstood the new way to do this in 0.8, I think update
keyspace is broken:

{code}
[default@unknown] create keyspace Test with placement_strategy = 
'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
[{replication_factor:1}];
37f70d40-a3e9-11e0--242d50cf1fbf
Waiting for schema agreement...
... schemas agree across the cluster
[default@unknown] describe keyspace Test;
Keyspace: Test:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:1]
  Column Families:
[default@unknown] update keyspace Test with placement_strategy = 
'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
[{replication_factor:2}];
489fe220-a3e9-11e0--242d50cf1fbf
Waiting for schema agreement...
... schemas agree across the cluster
[default@unknown] describe keyspace Test;   

Keyspace: Test:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:1]
  Column Families:
{code}

Isn't the second describe keyspace supposed to say replication_factor:2?

Relevant bits from system.log:
{code}
Migration.java (line 116) Applying migration
489fe220-a3e9-11e0--242d50cf1fbf Update keyspace Test<rep
strategy:SimpleStrategy{}durable_writes: true> to Test<rep
strategy:SimpleStrategy{}durable_writes: true>
UpdateKeyspace.java (line 74) Keyspace updated. Please perform any manual 
operations
{code}


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2846) Changing replication_factor using update keyspace not working

2011-07-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13059373#comment-13059373
 ] 

Jonas Borgström commented on CASSANDRA-2846:


Jonathan, thanks for your fast response. Your patch works for me.

 Changing replication_factor using update keyspace not working
 ---

 Key: CASSANDRA-2846
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2846
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.1
 Environment: A clean 0.8.1 install using the default configuration
Reporter: Jonas Borgström
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.8.2

 Attachments: 2846.txt


 Unless I've misunderstood the new way to do this in 0.8, I think update
 keyspace is broken:
 {code}
 [default@unknown] create keyspace Test with placement_strategy = 
 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
 [{replication_factor:1}];
 37f70d40-a3e9-11e0--242d50cf1fbf
 Waiting for schema agreement...
 ... schemas agree across the cluster
 [default@unknown] describe keyspace Test;
 Keyspace: Test:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:1]
   Column Families:
 [default@unknown] update keyspace Test with placement_strategy = 
 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
 [{replication_factor:2}];
 489fe220-a3e9-11e0--242d50cf1fbf
 Waiting for schema agreement...
 ... schemas agree across the cluster
 [default@unknown] describe keyspace Test; 
   
 Keyspace: Test:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:1]
   Column Families:
 {code}
 Isn't the second describe keyspace supposed to say
 replication_factor:2?
 Relevant bits from system.log:
 {code}
 Migration.java (line 116) Applying migration
 489fe220-a3e9-11e0--242d50cf1fbf Update keyspace Test<rep
 strategy:SimpleStrategy{}durable_writes: true> to Test<rep
 strategy:SimpleStrategy{}durable_writes: true>
 UpdateKeyspace.java (line 74) Keyspace updated. Please perform any manual 
 operations
 {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2863) NPE when writing SSTable generated via repair

2011-07-06 Thread JIRA
NPE when writing SSTable generated via repair
-

 Key: CASSANDRA-2863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2863
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.1
Reporter: Héctor Izquierdo


An NPE is generated during repair when closing an sstable generated via
SSTable build. It doesn't always happen. The node had been scrubbed and
compacted before calling repair.

 INFO [CompactionExecutor:2] 2011-07-06 11:11:32,640 SSTableReader.java (line 
158) Opening /d2/cassandra/data/sbs/walf-g-730
ERROR [CompactionExecutor:2] 2011-07-06 11:11:34,327 
AbstractCassandraDaemon.java (line 113) Fatal exception in thread 
Thread[CompactionExecutor:2,1,main] 
java.lang.NullPointerException
at 
org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.close(SSTableWriter.java:382)
at 
org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.index(SSTableWriter.java:370)
at 
org.apache.cassandra.io.sstable.SSTableWriter$Builder.build(SSTableWriter.java:315)
at 
org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1103)
at 
org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1094)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2865) During repair mark a node as being repaired, so no reads go to that node

2011-07-06 Thread JIRA
During repair mark a node as being repaired, so no reads go to that node
---

 Key: CASSANDRA-2865
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2865
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.8.1
Reporter: Héctor Izquierdo


If a disk breaks and you lose a node's data, when you bring it up again to do
the repair it will serve reads, and if clients are using CL.ONE they will get
bad data. Would it be possible to signal somehow that the node should not be
trusted and that reads should go to any other replica?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2865) During repair mark a node as being repaired, so no reads go to that node

2011-07-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060654#comment-13060654
 ] 

Héctor Izquierdo commented on CASSANDRA-2865:
-

You are absolutely right. But wouldn't it still be useful when you are
repairing a node and it can take several hours to complete? Why not treat it
like bootstrap?

 During repair mark a node as being repaired, so no reads go to that node
 ---

 Key: CASSANDRA-2865
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2865
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.8.1
Reporter: Héctor Izquierdo

 If a disk breaks and you lose a node's data, when you bring it up again to do
 the repair it will serve reads, and if clients are using CL.ONE they will
 get bad data. Would it be possible to signal somehow that the node should not
 be trusted and that reads should go to any other replica?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2873) Typo in src/java/org/apache/cassandra/cli/CliClient

2011-07-08 Thread JIRA
Typo in src/java/org/apache/cassandra/cli/CliClient  
-

 Key: CASSANDRA-2873
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2873
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.8.1
 Environment: ubuntu linux 10.4
Reporter: Michał Bartoszewski


I have read your documentation about the syntax for creating a column family
and the parameters I can pass.
According to the documentation I can use the parameter:

 - keys_cache_save_period: Duration in seconds after which Cassandra should
  save the keys cache. Caches are saved to saved_caches_directory as
  specified in conf/Cassandra.yaml. Default is 14400 or 4 hours.

but then I received the error: No enum const class
org.apache.cassandra.cli.CliClient$ColumnFamilyArgument.KEYS_CACHE_SAVE_PERIOD


In the class mentioned in the title we have:

protected enum ColumnFamilyArgument
{
    COLUMN_TYPE,
    COMPARATOR,
    SUBCOMPARATOR,
    COMMENT,
    ROWS_CACHED,
    ROW_CACHE_SAVE_PERIOD,
    KEYS_CACHED,
    KEY_CACHE_SAVE_PERIOD,    <-- TYPO!
    READ_REPAIR_CHANCE,
    GC_GRACE,
    COLUMN_METADATA,
    MEMTABLE_OPERATIONS,
    MEMTABLE_THROUGHPUT,
    MEMTABLE_FLUSH_AFTER,
    DEFAULT_VALIDATION_CLASS,
    MIN_COMPACTION_THRESHOLD,
    MAX_COMPACTION_THRESHOLD,
    REPLICATE_ON_WRITE,
    ROW_CACHE_PROVIDER,
    KEY_VALIDATION_CLASS
}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2863) NPE when writing SSTable generated via repair

2011-07-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062191#comment-13062191
 ] 

Héctor Izquierdo commented on CASSANDRA-2863:
-

I don't know if it's the same one, but I got another one during repair on
another node:

ERROR [Thread-1710] 2011-07-08 21:21:00,514 AbstractCassandraDaemon.java (line 
113) Fatal exception in thread Thread[Thread-1710,5,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at 
org.apache.cassandra.streaming.StreamInSession.closeIfFinished(StreamInSession.java:154)
at 
org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:63)
at 
org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:162)
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:95)
Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
org.apache.cassandra.streaming.StreamInSession.closeIfFinished(StreamInSession.java:138)
... 3 more
Caused by: java.lang.NullPointerException
at 
org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.close(SSTableWriter.java:382)
at 
org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.index(SSTableWriter.java:370)
at 
org.apache.cassandra.io.sstable.SSTableWriter$Builder.build(SSTableWriter.java:315)
at 
org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1103)
at 
org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1094)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)


 NPE when writing SSTable generated via repair
 -

 Key: CASSANDRA-2863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2863
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.1
Reporter: Héctor Izquierdo
Assignee: Sylvain Lebresne
 Fix For: 0.8.2


 An NPE is generated during repair when closing an sstable generated via
 SSTable build. It doesn't always happen. The node had been scrubbed and
 compacted before calling repair.
  INFO [CompactionExecutor:2] 2011-07-06 11:11:32,640 SSTableReader.java (line 
 158) Opening /d2/cassandra/data/sbs/walf-g-730
 ERROR [CompactionExecutor:2] 2011-07-06 11:11:34,327 
 AbstractCassandraDaemon.java (line 113) Fatal exception in thread 
 Thread[CompactionExecutor:2,1,main] 
 java.lang.NullPointerException
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.close(SSTableWriter.java:382)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.index(SSTableWriter.java:370)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$Builder.build(SSTableWriter.java:315)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1103)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1094)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-2875) Increase index_interval and reopen sstables on low heap situations

2011-07-09 Thread JIRA
Increase index_interval and reopen sstables on low heap situations
--

 Key: CASSANDRA-2875
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2875
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.8.1
Reporter: Héctor Izquierdo


One of the reasons that can cause an OOM is the key indexes. Of course you
can tune index_interval, but that's after your node has crashed. Events like
repair can cause much bigger memory pressure than expected during normal
operation. As part of the measures taken when the heap is almost full, it
would be good if the key indexes could be shrunk. I don't know how the indexes
are stored in memory, but I guess it would be possible to remove entries
without rereading all sstables.
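
If the in-memory sample is just a sorted list of index entries, shrinking it
in place is cheap; an illustrative sketch (generic Java, not Cassandra's
actual IndexSummary code):
{code}
// Keep every other entry of an in-memory index sample, doubling the effective
// index_interval without touching the sstables on disk. Purely illustrative.
import java.util.ArrayList;
import java.util.List;

final class IndexSampleUtil
{
    static <T> List<T> halve(List<T> sample)
    {
        List<T> shrunk = new ArrayList<T>(sample.size() / 2 + 1);
        for (int i = 0; i < sample.size(); i += 2)
            shrunk.add(sample.get(i)); // entries stay sorted, just sparser
        return shrunk;
    }
}
{code}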

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2863) NPE when writing SSTable generated via repair

2011-07-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13068956#comment-13068956
 ] 

Héctor Izquierdo commented on CASSANDRA-2863:
-

I have a patch from 2818 (2818-v4) applied, if that's of any help. The patch 
only touches messaging classes though.

 NPE when writing SSTable generated via repair
 -

 Key: CASSANDRA-2863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2863
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.1
Reporter: Héctor Izquierdo
Assignee: Sylvain Lebresne
 Fix For: 0.8.2


 An NPE is generated during repair when closing an sstable generated via
 SSTable build. It doesn't always happen. The node had been scrubbed and
 compacted before calling repair.
  INFO [CompactionExecutor:2] 2011-07-06 11:11:32,640 SSTableReader.java (line 
 158) Opening /d2/cassandra/data/sbs/walf-g-730
 ERROR [CompactionExecutor:2] 2011-07-06 11:11:34,327 
 AbstractCassandraDaemon.java (line 113) Fatal exception in thread 
 Thread[CompactionExecutor:2,1,main] 
 java.lang.NullPointerException
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.close(SSTableWriter.java:382)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.index(SSTableWriter.java:370)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$Builder.build(SSTableWriter.java:315)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1103)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1094)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (CASSANDRA-2863) NPE when writing SSTable generated via repair

2011-07-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13068956#comment-13068956
 ] 

Héctor Izquierdo edited comment on CASSANDRA-2863 at 7/21/11 12:36 PM:
---

I have a patch from #2818 (2818-v4) applied, if that's of any help. The patch 
only touches messaging classes though.

  was (Author: hector.izquierdo):
I have a patch from 2818 (2818-v4) applied, if that's of any help. The 
patch only touches messaging classes though.
  
 NPE when writing SSTable generated via repair
 -

 Key: CASSANDRA-2863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2863
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.1
Reporter: Héctor Izquierdo
Assignee: Sylvain Lebresne
 Fix For: 0.8.2


 An NPE is generated during repair when closing an sstable generated via
 SSTable build. It doesn't always happen. The node had been scrubbed and
 compacted before calling repair.
  INFO [CompactionExecutor:2] 2011-07-06 11:11:32,640 SSTableReader.java (line 
 158) Opening /d2/cassandra/data/sbs/walf-g-730
 ERROR [CompactionExecutor:2] 2011-07-06 11:11:34,327 
 AbstractCassandraDaemon.java (line 113) Fatal exception in thread 
 Thread[CompactionExecutor:2,1,main] 
 java.lang.NullPointerException
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.close(SSTableWriter.java:382)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$RowIndexer.index(SSTableWriter.java:370)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter$Builder.build(SSTableWriter.java:315)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1103)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:1094)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2973) fatal errors after nodetool cleanup

2011-08-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13073500#comment-13073500
 ] 

Michał Kowalczuk commented on CASSANDRA-2973:
-

Hello, Wojciech is my colleague. This cluster was upgraded from 0.7. I'm not 
sure which CF was getting read errors, though.

And as far as I know, Wojciech will be back on 15th August, not October.

 fatal errors after nodetool cleanup
 

 Key: CASSANDRA-2973
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2973
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Wojciech Meler
Assignee: Sylvain Lebresne

 after adding nodes to the cluster and running cleanup, I get scary exceptions 
 in the log:
 2011-07-30 00:00:05:506 CEST ERROR 
 [ReadStage:2335][org.apache.cassandra.service.AbstractCassandraDaemon] Fatal 
 exception in thread Thread[ReadStage:2335,5,main]
 java.io.IOError: java.io.IOException: mmap segment underflow; remaining is 
 4394 but 60165 requested
 at 
 org.apache.cassandra.db.columniterator.IndexedSliceReader.init(IndexedSliceReader.java:80)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:91)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:67)
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:66)
 at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1292)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1189)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1146)
 at org.apache.cassandra.db.Table.getRow(Table.java:385)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:61)
 at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:69)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
  Caused by: java.io.IOException: mmap segment underflow; remaining is 4394 
 but 60165 requested
 at 
 org.apache.cassandra.io.util.MappedFileDataInput.readBytes(MappedFileDataInput.java:117)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:389)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:368)
 at 
 org.apache.cassandra.io.sstable.IndexHelper$IndexInfo.deserialize(IndexHelper.java:194)
 at 
 org.apache.cassandra.io.sstable.IndexHelper.deserializeIndex(IndexHelper.java:83)
 at 
 org.apache.cassandra.db.columniterator.IndexedSliceReader.init(IndexedSliceReader.java:73)
 ... 14 more
 The exceptions disappeared after running scrub.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2991) Add a 'load new sstables' JMX/nodetool command

2011-08-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13084930#comment-13084930
 ] 

Sébastien Giroux commented on CASSANDRA-2991:
-

I wouldn't use an underscore in the name, as the other commands don't have any :)

 Add a 'load new sstables' JMX/nodetool command
 --

 Key: CASSANDRA-2991
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2991
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Brandon Williams
Assignee: Pavel Yaskevich
 Fix For: 0.8.5

 Attachments: CASSANDRA-2991.patch


 Sometimes people have to create a new cluster to get around a problem and 
 need to copy sstables around.  It would be convenient to be able to trigger 
 this from nodetool or JMX instead of doing a restart of the node.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3066) Creating a keyspace SYSTEM create issue

2011-08-21 Thread JIRA
Creating a keyspace SYSTEM create issue
---

 Key: CASSANDRA-3066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3066
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.4
 Environment: Windows
Reporter: Sébastien Giroux
Priority: Minor
 Fix For: 0.8.5


It's possible to create a keyspace SYSTEM but impossible to do anything with it 
afterwards.

I know naming a keyspace SYSTEM is probably not a good idea, but I was testing 
something on a test cluster and found this bug. Steps to reproduce:

connect localhost/9160;
create keyspace SYSTEM;
use SYSTEM;
create column family test
with comparator = UTF8Type and subcomparator = UTF8Type
and default_validation_class = UTF8Type
and column_metadata = [{column_name: title, validation_class: UTF8Type},
{column_name: publisher, validation_class: UTF8Type}];

And you get:

system keyspace is not user-modifiable

This happens even though the SYSTEM keyspace has been created and is a 
different keyspace from system.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3066) Creating a keyspace SYSTEM cause issue

2011-08-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sébastien Giroux updated CASSANDRA-3066:


Summary: Creating a keyspace SYSTEM cause issue  (was: Creating a keyspace 
SYSTEM create issue)

 Creating a keyspace SYSTEM cause issue
 --

 Key: CASSANDRA-3066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3066
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.4
 Environment: Windows
Reporter: Sébastien Giroux
Priority: Minor
 Fix For: 0.8.5


 It's possible to create a keyspace SYSTEM but impossible to do anything with 
 it afterwards.
 I know naming a keyspace SYSTEM is probably not a good idea, but I was testing 
 something on a test cluster and found this bug. Steps to reproduce:
 connect localhost/9160;
 create keyspace SYSTEM;
 use SYSTEM;
 create column family test
 with comparator = UTF8Type and subcomparator = UTF8Type
 and default_validation_class = UTF8Type
 and column_metadata = [{column_name: title, validation_class: UTF8Type},
 {column_name: publisher, validation_class: UTF8Type}];
 And you get:
 system keyspace is not user-modifiable
 This happens even though the SYSTEM keyspace has been created and is a 
 different keyspace from system.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3066) Creating a keyspace SYSTEM cause issue

2011-08-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sébastien Giroux updated CASSANDRA-3066:


Attachment: CASSANDRA-3066-0.8-v1.patch

 Creating a keyspace SYSTEM cause issue
 --

 Key: CASSANDRA-3066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3066
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.4
 Environment: Windows
Reporter: Sébastien Giroux
Priority: Minor
 Fix For: 0.8.5

 Attachments: CASSANDRA-3066-0.8-v1.patch


 It's possible to create a keyspace SYSTEM but impossible to do anything with 
 it afterwards.
 I know naming a keyspace SYSTEM is probably not a good idea, but I was testing 
 something on a test cluster and found this bug. Steps to reproduce:
 connect localhost/9160;
 create keyspace SYSTEM;
 use SYSTEM;
 create column family test
 with comparator = UTF8Type and subcomparator = UTF8Type
 and default_validation_class = UTF8Type
 and column_metadata = [{column_name: title, validation_class: UTF8Type},
 {column_name: publisher, validation_class: UTF8Type}];
 And you get:
 system keyspace is not user-modifiable
 This happens even though the SYSTEM keyspace has been created and is a 
 different keyspace from system.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3066) Creating a keyspace SYSTEM cause issue

2011-08-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13088449#comment-13088449
 ] 

Sébastien Giroux commented on CASSANDRA-3066:
-

If what we want is that the keyspace name is case-sensitive and system != 
SYSTEM, patch attached! Otherwise, I have no clue :)

 Creating a keyspace SYSTEM cause issue
 --

 Key: CASSANDRA-3066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3066
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.4
 Environment: Windows
Reporter: Sébastien Giroux
Priority: Minor
 Fix For: 0.8.5

 Attachments: CASSANDRA-3066-0.8-v1.patch


 It's possible to create a keyspace SYSTEM but impossible to do anything with 
 it afterwards.
 I know naming a keyspace SYSTEM is probably not a good idea, but I was testing 
 something on a test cluster and found this bug. Steps to reproduce:
 connect localhost/9160;
 create keyspace SYSTEM;
 use SYSTEM;
 create column family test
 with comparator = UTF8Type and subcomparator = UTF8Type
 and default_validation_class = UTF8Type
 and column_metadata = [{column_name: title, validation_class: UTF8Type},
 {column_name: publisher, validation_class: UTF8Type}];
 And you get:
 system keyspace is not user-modifiable
 This happens even though the SYSTEM keyspace has been created and is a 
 different keyspace from system.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (CASSANDRA-3066) Creating a keyspace SYSTEM cause issue

2011-08-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13088449#comment-13088449
 ] 

Sébastien Giroux edited comment on CASSANDRA-3066 at 8/21/11 8:46 PM:
--

If what we want is that the keyspace name is case-sensitive and system != 
SYSTEM, patch attached! Otherwise, I have no clue :)

  was (Author: wajam):
If what we want is that the keyspace name is case-sensitive and system != 
SYSTEM, patch attached! Otherwise, I have no clue :)
  
 Creating a keyspace SYSTEM cause issue
 --

 Key: CASSANDRA-3066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3066
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.4
 Environment: Windows
Reporter: Sébastien Giroux
Priority: Minor
 Fix For: 0.8.5

 Attachments: CASSANDRA-3066-0.8-v1.patch


 It's possible to create a keyspace SYSTEM but impossible to do anything with 
 it afterwards.
 I know naming a keyspace SYSTEM is probably not a good idea, but I was testing 
 something on a test cluster and found this bug. Steps to reproduce:
 connect localhost/9160;
 create keyspace SYSTEM;
 use SYSTEM;
 create column family test
 with comparator = UTF8Type and subcomparator = UTF8Type
 and default_validation_class = UTF8Type
 and column_metadata = [{column_name: title, validation_class: UTF8Type},
 {column_name: publisher, validation_class: UTF8Type}];
 And you get:
 system keyspace is not user-modifiable
 This happens even though the SYSTEM keyspace has been created and is a 
 different keyspace from system.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (CASSANDRA-3066) Creating a keyspace SYSTEM cause issue

2011-08-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13088449#comment-13088449
 ] 

Sébastien Giroux edited comment on CASSANDRA-3066 at 8/21/11 8:47 PM:
--

If what we want is that the keyspace name is case-sensitive and system != 
SYSTEM, patch attached! Otherwise, I have no clue :)

  was (Author: wajam):
If what we want is that the keyspace name is case-sensitive and system != 
SYSTEM, patch attached! Otherwise, I have no clue :)
  
 Creating a keyspace SYSTEM cause issue
 --

 Key: CASSANDRA-3066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3066
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.4
 Environment: Windows
Reporter: Sébastien Giroux
Priority: Minor
 Fix For: 0.8.5

 Attachments: CASSANDRA-3066-0.8-v1.patch


 It's possible to create a keyspace SYSTEM but impossible to do anything with 
 it afterwards.
 I know naming a keyspace SYSTEM is probably not a good idea, but I was testing 
 something on a test cluster and found this bug. Steps to reproduce:
 connect localhost/9160;
 create keyspace SYSTEM;
 use SYSTEM;
 create column family test
 with comparator = UTF8Type and subcomparator = UTF8Type
 and default_validation_class = UTF8Type
 and column_metadata = [{column_name: title, validation_class: UTF8Type},
 {column_name: publisher, validation_class: UTF8Type}];
 And you get:
 system keyspace is not user-modifiable
 This happens even though the SYSTEM keyspace has been created and is a 
 different keyspace from system.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3098) Weird/hex Column Name: formatting with describe keyspaces

2011-08-29 Thread JIRA
Weird/hex Column Name:  formatting with describe keyspaces
--

 Key: CASSANDRA-3098
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3098
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.4
 Environment: Cassandra 0.8.4 (and today's cassandra-0.8 branch)
java version 1.6.0_20
OpenJDK Runtime Environment (IcedTea6 1.9.9) (fedora-54.1.9.9.fc14-x86_64)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)

Reporter: Jonas Borgström


Displaying a newly created column family with column_metadata displays some 
kind of hex representation of the column names instead of something more human 
readable:

{code}
[default@test] create column family Foo3 with column_metadata = [{ column_name: 
mycolumn, validation_class: UTF8Type }, { column_name: mycolumn2, 
validation_class: UTF8Type }];
4f266c60-d236-11e0--242d50cf1fbf
Waiting for schema agreement...
... schemas agree across the cluster
[default@test] describe keyspace;   


Keyspace: test:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
Options: [datacenter1:1]
  Column Families:
ColumnFamily: Foo3
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: org.apache.cassandra.db.marshal.BytesType
  Row cache size / save period in seconds: 0.0/0
  Key cache size / save period in seconds: 20.0/14400
  Memtable thresholds: 0.2953125/1440/63 (millions of ops/minutes/MB)
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: true
  Built indexes: []
  Column Metadata:
Column Name: fffcf2   <--- I expected this to say 'mycolumn' or 
'mycolumn2'
  Validation Class: org.apache.cassandra.db.marshal.UTF8Type
Column Name:  <--- I expected this to say 'mycolumn' or 
'mycolumn2'
  Validation Class: org.apache.cassandra.db.marshal.UTF8Type
{code}
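
For context, the hex shown is just the raw bytes of the column name, 
presumably rendered that way because the comparator defaults to BytesType. A 
minimal self-contained sketch of decoding such a dump back into a readable 
name (the hex literal is an illustrative example for 'mycolumn', not one of 
the garbled values above):

{code}
// Minimal sketch: decode a hex dump of a column name back to UTF-8.
import java.nio.charset.StandardCharsets;

public class HexColumnName
{
    public static void main(String[] args)
    {
        // Illustrative value: the raw UTF-8 bytes of "mycolumn" in hex.
        String hex = "6d79636f6c756d6e";
        byte[] raw = new byte[hex.length() / 2];
        for (int i = 0; i < raw.length; i++)
            raw[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        System.out.println(new String(raw, StandardCharsets.UTF_8)); // prints: mycolumn
    }
}
{code}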

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13095967#comment-13095967
 ] 

Jérémy Sevellec commented on CASSANDRA-2961:


What do you think about this:
- create an expireTimeEndpointMap in the Gossiper, which stores endpoints as 
keys and expireTimes as values.

- in SS, when a state changes:
   - if the STATUS is REMOVED_TOKEN or STATUS_LEFT, extract the expireTime from 
the string at the end of the VV and call the Gossiper to add the 
endpoint/expireTime pair to the expireTimeEndpointMap.
   - for all other STATUS values, call the Gossiper to remove the endpoint from 
the expireTimeEndpointMap if it is present.

- in the Gossiper, when doing the status check for each endpoint, check whether 
there is an expireTime in the expireTimeEndpointMap for that endpoint; if so, 
use it, and if not, fall back to aVeryLongTime. Then test and evict the 
endpoint if necessary.

Does that make sense to you?

(I describe a lot... sorry, but I would like to be sure I understand all 
aspects of the problem...)
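
To make the proposal concrete, here is a minimal compile-able sketch of the 
shape it could take; all names here (GossiperSketch, expireTimeEndpointMap, 
addExpireTimeForEndpoint, shouldEvict) are illustrative assumptions, not the 
actual patch:

{code}
// Illustrative sketch only -- names and signatures are assumptions,
// not the actual Cassandra Gossiper API.
import java.net.InetAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class GossiperSketch
{
    // Fallback used when no explicit expire time is known: 3 days, in ms.
    static final long A_VERY_LONG_TIME = 3L * 24 * 60 * 60 * 1000;

    // endpoint -> absolute timestamp (ms) after which its dead state may be evicted
    private final Map<InetAddress, Long> expireTimeEndpointMap = new ConcurrentHashMap<>();

    // SS calls this for REMOVED_TOKEN / STATUS_LEFT, with the expireTime
    // extracted from the end of the VersionedValue string.
    void addExpireTimeForEndpoint(InetAddress endpoint, long expireTime)
    {
        expireTimeEndpointMap.put(endpoint, expireTime);
    }

    // SS calls this for any other status change, so an endpoint that comes
    // back is no longer scheduled for eviction.
    void removeExpireTimeForEndpoint(InetAddress endpoint)
    {
        expireTimeEndpointMap.remove(endpoint);
    }

    // During the periodic status check: evict once the expire time has
    // passed; endpoints without a recorded expireTime fall back to the old
    // aVeryLongTime behavior.
    boolean shouldEvict(InetAddress endpoint, long deadSince, long now)
    {
        long expireTime = expireTimeEndpointMap.getOrDefault(endpoint, deadSince + A_VERY_LONG_TIME);
        return now > expireTime;
    }
}
{code}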



 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams
 Fix For: 1.0


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096117#comment-13096117
 ] 

Jérémy Sevellec commented on CASSANDRA-2961:


I agree with you that REMOVED_TOKEN and STATUS_LEFT are the only states we need 
to worry about expiring.

But if SS:
 - first receives a change for an endpoint with the status REMOVED_TOKEN or 
STATUS_LEFT
 - and then receives another change for this same endpoint with any other 
status
then we have to delete the expireTime, because otherwise the gossiper will 
remove this endpoint once the expireTime is exceeded, and it must not. No?


 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams
 Fix For: 1.0


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-2961:
---

Comment: was deleted

(was: here is the patch)

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
  Labels: patch
 Fix For: 1.0


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-2961:
---

Attachment: trunk-2961.patch

Here is the patch

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
  Labels: patch
 Fix For: 1.0

 Attachments: trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-2961:
---

Comment: was deleted

(was: Here is the patch)

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
  Labels: patch
 Fix For: 1.0

 Attachments: trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096789#comment-13096789
 ] 

Jérémy Sevellec commented on CASSANDRA-2961:


In addition, I have included a test library (Hamcrest) to simplify writing 
assertions in JUnit.

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
  Labels: patch
 Fix For: 1.0

 Attachments: trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13099052#comment-13099052
 ] 

Jérémy Sevellec commented on CASSANDRA-2961:


OK. I thought it was a good idea, but I realize it makes the patch difficult to 
inspect. I will publish a new version of the patch as soon as possible.

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
  Labels: patch
 Fix For: 1.0

 Attachments: trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13099067#comment-13099067
 ] 

Jérémy Sevellec commented on CASSANDRA-2961:


Yes, that's true, and it is my code style configuration :-(

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
  Labels: patch
 Fix For: 1.0

 Attachments: trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-2961:
---

Attachment: trunk-2961-v2.patch

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
  Labels: patch
 Fix For: 1.0

 Attachments: trunk-2961-v2.patch, trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13100089#comment-13100089
 ] 

Jérémy Sevellec commented on CASSANDRA-2961:



OK, I like it. A few things :-) :
- hamcrest: it's true that in my case I only use Hamcrest's "is" in asserts, 
but there are a lot of other matchers that help make assertions more readable. 
It was meant as a help for later, but I can remove it if you want; tell me 
which you prefer.
- VersionedValue.getExpireTime: it's true. Should I put it in the Gossiper? In 
a utility class?
- addExpireTimeIfFound: OK, I'll put a single call in excise, but I'll keep the 
method to isolate the logic, if that's OK with you.
- DEBUG logs: oh, there were some (for my tests), but I removed them before 
creating the patch... I'll add them again.


 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.0

 Attachments: trunk-2961-v2.patch, trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13100123#comment-13100123
 ] 

Jérémy Sevellec commented on CASSANDRA-2961:


I found why I made two calls to addExpireTimeIfFound in SS instead of calling 
it from excise:
There are three calls to excise in SS: handleStateLeft, handleStateRemoving 
and... removeToken.
In removeToken, we don't have the pieces of the VV which contain the 
expireTime, so we can't extract one.

So there are three possibilities:
- modify excise to take a pieces parameter, pass null in the removeToken call, 
and handle the case where pieces is null. I find this solution not so pretty, 
but it would work.
- refactor and create two excise method signatures, one with a pieces parameter 
and one without (see the sketch below).
- keep it as it is.

It's up to you. Tell me which you prefer (or suggest another option).
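
For the second option, a minimal standalone sketch of what the two signatures 
might look like; the class name, the map, and the way the expireTime is parsed 
from the pieces are assumptions for illustration, not the actual 
StorageService code:

{code}
// Illustrative sketch only: a hypothetical shape for two excise() overloads.
import java.net.InetAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class StorageServiceSketch
{
    private final Map<InetAddress, Long> expireTimeEndpointMap = new ConcurrentHashMap<>();

    // handleStateLeft / handleStateRemoving path: the expireTime is encoded
    // at the end of the VersionedValue pieces, so record it before excising.
    void excise(String token, InetAddress endpoint, String[] pieces)
    {
        long expireTime = Long.parseLong(pieces[pieces.length - 1]);
        expireTimeEndpointMap.put(endpoint, expireTime);
        excise(token, endpoint);
    }

    // removeToken path: no pieces available, hence no expireTime to record.
    void excise(String token, InetAddress endpoint)
    {
        // remove the endpoint's token and state from the ring (details elided)
    }
}
{code}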

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.0

 Attachments: trunk-2961-v2.patch, trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-2961:
---

Attachment: trunk-2961-v3.patch

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.1

 Attachments: trunk-2961-v2.patch, trunk-2961-v3.patch, 
 trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13100637#comment-13100637
 ] 

Jérémy Sevellec commented on CASSANDRA-2961:


You can find a new version of the patch:
- without the hamcrest dependency
- the computation of the expireTime moved into the Gossiper and called from the 
VV constructor
- SS modified to be more readable

I hope it's OK :-)

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.1

 Attachments: trunk-2961-v2.patch, trunk-2961-v3.patch, 
 trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13100637#comment-13100637
 ] 

Jérémy Sevellec edited comment on CASSANDRA-2961 at 9/8/11 8:21 PM:


You can find a new version of the patch:
- without the hamcrest dependency
- the computation of the expireTime moved into the Gossiper and called from the 
VV constructor
- SS modified to be more readable
- some logging added

I hope it's OK :-)

  was (Author: jsevellec):
You can find a new version of the patch:
- without the hamcrest dependency
- the computation of the expireTime moved into the Gossiper and called from the 
VV constructor
- SS modified to be more readable

I hope it's OK :-)
  
 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.1

 Attachments: trunk-2961-v2.patch, trunk-2961-v3.patch, 
 trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-2961:
---

Attachment: (was: trunk-2961-v3.patch)

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.1

 Attachments: trunk-2961-v2.patch, trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-2961:
---

Attachment: trunk-2961-v3.patch

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.1

 Attachments: trunk-2961-v2.patch, trunk-2961-v3.patch, 
 trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3198) debian packaging installation problem when installatin for the first time

2011-09-13 Thread JIRA
debian packaging installation problem when installatin for the first time
-

 Key: CASSANDRA-3198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3198
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Affects Versions: 0.8.5
Reporter: Jérémy Sevellec


When installing Cassandra through the debian packaging for the first time, 
there is a permission problem when starting Cassandra.

Normally, the postinst script changes the owner of /var/log/cassandra and 
/var/lib/cassandra from root to the cassandra user.

There is a problem with the test which verifies whether the owner of these 
directories needs to be changed or not.

On a new install, the $2 parameter (the previously configured version, which 
dpkg only passes on upgrades) is not set, so the test is false and the owner is 
not changed.

(Simply, I think replacing && with || might work.)


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3198) debian packaging installation problem when installing for the first time

2011-09-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-3198:
---

Summary: debian packaging installation problem when installing for the 
first time  (was: debian packaging installation problem when installatin for 
the first time)

 debian packaging installation problem when installing for the first time
 

 Key: CASSANDRA-3198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3198
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Affects Versions: 0.8.5
Reporter: Jérémy Sevellec

 When installing Cassandra through the debian packaging for the first time, 
 there is a permission problem when starting Cassandra.
 Normally, the postinst script changes the owner of /var/log/cassandra and 
 /var/lib/cassandra from root to the cassandra user.
 There is a problem with the test which verifies whether the owner of these 
 directories needs to be changed or not.
 On a new install, the $2 parameter (the previously configured version, which 
 dpkg only passes on upgrades) is not set, so the test is false and the owner 
 is not changed.
 (Simply, I think replacing && with || might work.)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3198) debian packaging installation problem when installing for the first time

2011-09-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-3198:
---

Attachment: trunk-3198-v1.patch

here is the patch

 debian packaging installation problem when installing for the first time
 

 Key: CASSANDRA-3198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3198
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Affects Versions: 0.8.5
Reporter: Jérémy Sevellec
Assignee: Eric Evans
 Attachments: trunk-3198-v1.patch


 When installing Cassandra through the debian packaging for the first time, 
 there is a permission problem when starting Cassandra.
 Normally, the postinst script changes the owner of /var/log/cassandra and 
 /var/lib/cassandra from root to the cassandra user.
 There is a problem with the test which verifies whether the owner of these 
 directories needs to be changed or not.
 On a new install, the $2 parameter (the previously configured version, which 
 dpkg only passes on upgrades) is not set, so the test is false and the owner 
 is not changed.
 (Simply, I think replacing && with || might work.)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3198) debian packaging installation problem when installing for the first time

2011-09-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13104791#comment-13104791
 ] 

Jérémy Sevellec commented on CASSANDRA-3198:


I agree with that too

 debian packaging installation problem when installing for the first time
 

 Key: CASSANDRA-3198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3198
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Affects Versions: 0.8.5
Reporter: Jérémy Sevellec
Assignee: Jérémy Sevellec
 Fix For: 0.8.6

 Attachments: debian-postinst-fixperms.patch, trunk-3198-v1.patch


 When installing Cassandra through the debian packaging for the first time, 
 there is a permission problem when starting Cassandra.
 Normally, the postinst script changes the owner of /var/log/cassandra and 
 /var/lib/cassandra from root to the cassandra user.
 There is a problem with the test which verifies whether the owner of these 
 directories needs to be changed or not.
 On a new install, the $2 parameter (the previously configured version, which 
 dpkg only passes on upgrades) is not set, so the test is false and the owner 
 is not changed.
 (Simply, I think replacing && with || might work.)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13105455#comment-13105455
 ] 

Jérémy Sevellec edited comment on CASSANDRA-2961 at 9/15/11 4:09 PM:
-

Here is a new version of the patch integrating your comments

  was (Author: jsevellec):
Here is a new version of the path integrating your comments
  
 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Brandon Williams
Assignee: Jérémy Sevellec
Priority: Minor
 Fix For: 1.0.1

 Attachments: trunk-2961-v2.patch, trunk-2961-v3.patch, 
 trunk-2961-v4.patch, trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time

2011-09-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Sevellec updated CASSANDRA-2961:
---

Attachment: trunk-2961-v4.patch

Here is a new version of the path integrating your comments

 Expire dead gossip states based on time
 ---

 Key: CASSANDRA-2961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Brandon Williams
Assignee: Jérémy Sevellec
Priority: Minor
 Fix For: 1.0.1

 Attachments: trunk-2961-v2.patch, trunk-2961-v3.patch, 
 trunk-2961-v4.patch, trunk-2961.patch


 Currently dead states are held until aVeryLongTime, 3 days.  The problem is 
 that if a node reboots within this period, it begins a new 3 days and will 
 repopulate the ring with the dead state.  While mostly harmless, perpetuating 
 the state forever is at least wasting a small amount of bandwidth.  Instead, 
 we can expire states based on a ttl, which will require that the cluster be 
 loosely time synced; within the quarantine period of 60s.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-07-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4049:
--

Attachment: pluggable_custom_components.patch

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.3

 Attachments: pluggable_custom_components.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.
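
To illustrate the idea, a small standalone sketch; this is not Cassandra's 
actual Component class, and the enum constants and fromSuffix helper are 
assumptions for illustration only:

{code}
// Standalone sketch, not Cassandra's actual Component class.
class ComponentSketch
{
    enum Type
    {
        DATA("Data.db"),
        PRIMARY_INDEX("Index.db"),
        TOMBSTONES("Tombstones.db"),     // hypothetical custom component
        REMOVED_KEYS("RemovedKeys.db");  // hypothetical custom component

        final String suffix;

        Type(String suffix)
        {
            this.suffix = suffix;
        }

        // Map a file suffix back to a known component, so files such as
        // ...-Tombstones.db no longer trigger "unrecognized file" warnings
        // and can be cleaned up together with the rest of the sstable.
        static Type fromSuffix(String suffix)
        {
            for (Type t : values())
                if (t.suffix.equals(suffix))
                    return t;
            throw new IllegalArgumentException("Unknown sstable component: " + suffix);
        }
    }

    public static void main(String[] args)
    {
        System.out.println(Type.fromSuffix("Tombstones.db")); // prints TOMBSTONES
    }
}
{code}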

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-07-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4049:
--

Attachment: (was: compaction_strategy_cleanup.patch)

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.3

 Attachments: pluggable_custom_components.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-07-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4049:
--

Attachment: (was: component_patch.diff)

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.3

 Attachments: pluggable_custom_components.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4448) CQL3: allow to define a per-cf default consistency level

2012-07-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13419037#comment-13419037
 ] 

Michaël Figuière commented on CASSANDRA-4448:
-

{quote}If you're writing application code, you should not be writing raw CQL; 
you should be using a higher-level, idiomatic API{quote}

Hibernate lets users execute queries using either its own SQL-ish query 
language (HQL) or its QueryBuilder API (Criteria). Actually, as far as I can 
observe, most heavyweight business applications that rely on Hibernate to 
execute many different kinds of queries mostly use HQL, as a proper String 
query often ends up being more readable and easier to maintain than a chain of 
methods. Different cases might lead to different habits, but we should still 
consider the CQL language a major API for applications.

{quote}There can be more than one such code, if only because a lot of people 
need to access their DB from multiple languages.{quote}

I think this is an important point: this feature allows for a central 
enforcement point for the CL for applications that rely on the default, thus 
easing the headache of changing the common CL.
Furthermore, I guess some users will wish to decouple their application from 
the CL configuration to allow for some behavior or performance tuning over 
time. Typically, I can imagine that a DBA who wants to trade some consistency 
for performance as a graceful degradation strategy will be happy to just have 
to push an {{ALTER}} command.
Without it, I guess many developers would follow the 
_parameterize-it-just-in-case_ strategy, and this would lead to some additional 
properties in their {{.properties}} files, in the case of Java apps.

 CQL3: allow to define a per-cf default consistency level
 

 Key: CASSANDRA-4448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4448
 Project: Cassandra
  Issue Type: New Feature
Reporter: Sylvain Lebresne
  Labels: cql3
 Fix For: 1.2


 One of the goals of CQL3 is that client libraries should not have to parse 
 queries to provide a good experience. In particular, that means such clients 
 (that don't want to parse queries) won't be able to allow the user to define 
 a specific default read/write consistency level per-CF, forcing the user to 
 specify the consistency level with every query, which is not very user 
 friendly.
 This ticket suggests the addition of a per-CF default read/write consistency 
 level. Typically the syntax would be:
 {noformat}
 CREATE TABLE foo (...)
 WITH DEFAULT_READ_CONSISTENCY = QUORUM
  AND DEFAULT_WRITE_CONSISTENCY = QUORUM
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4454) Add a notice on cqlsh startup about CQL2/3 switches

2012-07-20 Thread JIRA
Michaël Figuière created CASSANDRA-4454:
---

 Summary: Add a notice on cqlsh startup about CQL2/3 switches
 Key: CASSANDRA-4454
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4454
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.1.2
Reporter: Michaël Figuière


Several developers I've talked with did not immediately notice the {{-3}} 
switch needed to run in CQL3 mode. Without it, cqlsh can appear buggy in the 
way it handles CQL3.
I guess it would be worth adding a notice at startup about this important 
detail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4454) Add a notice on cqlsh startup about CQL2/3 switches

2012-07-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13419641#comment-13419641
 ] 

Michaël Figuière commented on CASSANDRA-4454:
-

Indeed, the way it's currently mentioned is enough for experienced users. On 
the other hand, newcomers might not even be familiar with the fact that we're 
in the middle of a CQL grammar switch; they run cqlsh with default settings, 
copy-paste an example of CQL3 DDL with a composite column from a web page, and 
end up with something like this:

{noformat}
cqlsh:mykeyspace> CREATE TABLE timeline (
  ...  user_id varchar,
  ...  tweet_id uuid,
  ...  author varchar,
  ...  body varchar,
  ...  PRIMARY KEY (user_id, tweet_id));
Bad Request: line 6:40 mismatched input ')' expecting EOF
{noformat}

This is an example of the confusing errors you can get when pushing CQL3 
commands. The fact that several people had difficulties with this situation 
tends to show that a message like {{Consider using -3 switch to enable CQL3}} 
would be useful, either at startup or when a CQL2/3 grammar mismatch occurs. 
As the former is trivial to implement, I was suggesting it. 

 Add a notice on cqlsh startup about CQL2/3 switches
 ---

 Key: CASSANDRA-4454
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4454
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.1.0
Reporter: Michaël Figuière
Assignee: paul cannon
Priority: Minor
  Labels: cqlsh
 Fix For: 1.1.3


 Several developers I've talked with did not immediately notice the {{-3}} 
 switch needed to run in CQL3 mode. Without it, cqlsh can appear buggy in the 
 way it handles CQL3.
 I guess it would be worth adding a notice at startup about this important 
 detail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (CASSANDRA-4398) Incorrect english for cassandra-cli help

2012-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommy Tynjä reassigned CASSANDRA-4398:
--

Assignee: Tommy Tynjä

 Incorrect english for cassandra-cli help
 

 Key: CASSANDRA-4398
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4398
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Aurelien Derouineau
Assignee: Tommy Tynjä
Priority: Trivial

 Some of the help provided for the CLI is not written correctly.
 For example:
 {{describe   Describe a keyspace and it's column families or 
 column family in current keyspace.}}
 {{drop column family  Remove a column family and it's data.}}
 {{drop keyspace   Remove a keyspace and it's data.}}
 Here all the *it's* should be *its*.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4451) add thousand separator when display size

2012-07-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423933#comment-13423933
 ] 

Tommy Tynjä commented on CASSANDRA-4451:


This can easily be achieved with java.text.MessageFormat, which also has Locale 
support if desired. I applied this to the MeteredFlusher: 
https://github.com/tommysdk/cassandra/commit/c6ab6e7d0d8a205f89e5c96741449c49130cf078.
 The approach can obviously be applied to other classes as well.
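
For reference, here is a minimal, self-contained sketch of the idea (an 
illustration of the approach rather than the linked commit; the message text 
mirrors the MeteredFlusher example below):

{code}
import java.text.MessageFormat;

public class ThousandSeparatorExample
{
    public static void main(String[] args)
    {
        long estimatedBytes = 1632406241L;
        // {0,number} formats with the default locale's grouping separator,
        // so 1632406241 renders as 1,632,406,241 under an English locale.
        String message = MessageFormat.format(
            "flushing high-traffic column family (estimated {0,number} bytes)",
            estimatedBytes);
        System.out.println(message);
    }
}
{code}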

 add thousand separator when display size
 

 Key: CASSANDRA-4451
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4451
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.2
Reporter: feng qu
Priority: Minor

 I would like to see sizes displayed with a thousands separator in the system log as 
 well as in nodetool compactionstats/cfstats/netstats/tpstats etc. 
 Here is an example from system.log:
  INFO [OptionalTasks:1] 2012-07-19 10:02:21,137 MeteredFlusher.java (line 62) 
 flushing high-traffic column family CFS(Keyspace='mobilelogks', 
 ColumnFamily='UserNotificationLog') (estimated 1632406241 bytes)
 1,632,406,241 is easier to read than 1632406241. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4481) Commitlog not replayed after restart - data lost

2012-08-02 Thread JIRA
Ivo Meißner created CASSANDRA-4481:
--

 Summary: Commitlog not replayed after restart - data lost
 Key: CASSANDRA-4481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Single node cluster on 64Bit CentOS
Reporter: Ivo Meißner
Priority: Critical
 Fix For: 1.1.3


When data is written to the commitlog and I restart the machine, all committed 
data that has not been flushed to disk is lost. 

In the startup logs it says that it replays the commitlog successfully, but the 
data is not available afterwards. 

When I open the commitlog file in an editor I can see the added data, but after 
the restart it cannot be fetched from cassandra. 

{code}
 INFO 09:59:45,362 Replaying 
/var/myproject/cassandra/commitlog/CommitLog-83203377067.log
 INFO 09:59:45,476 Finished reading 
/var/myproject/cassandra/commitlog/CommitLog-83203377067.log
 INFO 09:59:45,476 Log replay complete, 0 replayed mutations

{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4481) Commitlog not replayed after restart - data lost

2012-08-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13427889#comment-13427889
 ] 

Ivo Meißner commented on CASSANDRA-4481:


I can reproduce the bug as follows:

1. I insert data with my client into a column family.
2. When I select the data afterwards with a cassandra client, the data is 
returned.
{code}
get comment['1|3488da80-1dd5-11b2-aff8-030772c33eed'];
=> (super_column=34942d20-1dd5-11b2-bfef-3f53095dd669,
 (column=added, value=1343979036, timestamp=1343979036707674)
 (column=id, value=34942d20-1dd5-11b2-bfef-3f53095dd669, 
timestamp=1343979036707674)
 (column=itemId, value=3488da801dd511b2aff8030772c33eed, 
timestamp=1343979036707674)
 (column=text, value=Comment, timestamp=1343979036707674)
 (column=typeId, value=1, timestamp=1343979036707674)
 (column=userId, value=4ab5fcb6753a8021ae02, timestamp=1343979036707674))
Returned 1 results.
Elapsed time: 6 msec(s).
{code}
3. Then I restart the machine
4. When I start cassandra again, I get the following output 
{code}
 INFO 09:33:56,857 Log replay complete, 0 replayed mutations
{code}
5. I select the exact same row and get no results, so the data I inserted 
before is gone.
{code}
get comment['1|3488da80-1dd5-11b2-aff8-030772c33eed'];
Returned 0 results.
Elapsed time: 120 msec(s).
{code}

I tried to reproduce it with a newly created keyspace and column family and 
haven't been able to yet. In the other keyspace I can reproduce it 
consistently, and it happens on all column families. 
Any suggestions on what I can try to narrow it down?

 Commitlog not replayed after restart - data lost
 

 Key: CASSANDRA-4481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Single node cluster on 64Bit CentOS
Reporter: Ivo Meißner
Priority: Critical
 Fix For: 1.1.3


 When data is written to the commitlog and I restart the machine, all committed 
 data that has not been flushed to disk is lost. 
 In the startup logs it says that it replays the commitlog successfully, but 
 the data is not available afterwards. 
 When I open the commitlog file in an editor I can see the added data, but 
 after the restart it cannot be fetched from cassandra. 
 {code}
  INFO 09:59:45,362 Replaying 
 /var/myproject/cassandra/commitlog/CommitLog-83203377067.log
  INFO 09:59:45,476 Finished reading 
 /var/myproject/cassandra/commitlog/CommitLog-83203377067.log
  INFO 09:59:45,476 Log replay complete, 0 replayed mutations
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4481) Commitlog not replayed after restart - data lost

2012-08-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivo Meißner updated CASSANDRA-4481:
---

Fix Version/s: (was: 1.1.3)
   1.1.4

 Commitlog not replayed after restart - data lost
 

 Key: CASSANDRA-4481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Single node cluster on 64Bit CentOS
Reporter: Ivo Meißner
Priority: Critical
 Fix For: 1.1.4


 When data is written to the commitlog and I restart the machine, all committed 
 data that has not been flushed to disk is lost. 
 In the startup logs it says that it replays the commitlog successfully, but 
 the data is not available afterwards. 
 When I open the commitlog file in an editor I can see the added data, but 
 after the restart it cannot be fetched from cassandra. 
 {code}
  INFO 09:59:45,362 Replaying 
 /var/myproject/cassandra/commitlog/CommitLog-83203377067.log
  INFO 09:59:45,476 Finished reading 
 /var/myproject/cassandra/commitlog/CommitLog-83203377067.log
  INFO 09:59:45,476 Log replay complete, 0 replayed mutations
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4481) Commitlog not replayed after restart - data lost

2012-08-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13427970#comment-13427970
 ] 

Ivo Meißner commented on CASSANDRA-4481:


I am still trying to narrow it down. I have created a new keyspace 
(testkeyspace) with the same configuration and structure. 
When I only use the testkeyspace, the error does not occur; everything in the 
commitlog is available after reboot: 

The following works as expected: 
1. Insert dataA in testkeyspace
2. Reboot - 1 replayed mutations
3. Get dataA returns data as expected

The following does not work:
1. Insert dataA in testkeyspace
2. Get dataA from testkeyspace - returns data as expected
3. Insert dataB in brokenkeyspace
4. Get dataB from brokenkeyspace - returns data as expected
5. Reboot - 0 replayed mutations
6. Get dataA from testkeyspace - NO DATA
7. Get dataB from brokenkeyspace - NO DATA

So it seems to have something to do with the broken keyspace. I don't know 
yet how to get a keyspace into that state, so any input on how I can figure 
it out or what I could try would be appreciated.

I have changed the Fix-Version to 1.1.4. 

 Commitlog not replayed after restart - data lost
 

 Key: CASSANDRA-4481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Single node cluster on 64Bit CentOS
Reporter: Ivo Meißner
Priority: Critical
 Fix For: 1.1.4


 When data is written to the commitlog and I restart the machine, all committed 
 data that has not been flushed to disk is lost. 
 In the startup logs it says that it replays the commitlog successfully, but 
 the data is not available afterwards. 
 When I open the commitlog file in an editor I can see the added data, but 
 after the restart it cannot be fetched from cassandra. 
 {code}
  INFO 09:59:45,362 Replaying 
 /var/myproject/cassandra/commitlog/CommitLog-83203377067.log
  INFO 09:59:45,476 Finished reading 
 /var/myproject/cassandra/commitlog/CommitLog-83203377067.log
  INFO 09:59:45,476 Log replay complete, 0 replayed mutations
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4540) nodetool clearsnapshot broken: gives java.io.IOException when trying to delete snapshot folder

2012-08-14 Thread JIRA
Christopher Lörken created CASSANDRA-4540:
-

 Summary: nodetool clearsnapshot broken: gives java.io.IOException 
when trying to delete snapshot folder
 Key: CASSANDRA-4540
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4540
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.2
 Environment: Debian 6
Reporter: Christopher Lörken
Priority: Minor


nodetool clearsnapshot fails to delete snapshot directories and exits 
prematurely, causing the exception at the bottom.
The actual snapshot files _within_ the directory are correctly deleted, but the 
folder itself is not.

I've chmodded all files and folders in the snapshots directory to 777 and run 
the command as root to rule out file permissions as a cause. I also restarted 
cassandra, which has no effect on the command.


---
root@server:/var/lib/cassandra/data/MyKeyspace/MyCf/snapshots# nodetool 
clearsnapshot MyKeyspace
Requested snapshot for: MyKeyspace
Exception in thread "main" java.io.IOException: Failed to delete 
/var/lib/cassandra/data/MyKeyspace/MyCf/snapshots/1344875270796
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:54)
at 
org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:220)
at 
org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:216)
at 
org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:371)
at 
org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:1560)
at org.apache.cassandra.db.Table.clearSnapshot(Table.java:268)
at 
org.apache.cassandra.service.StorageService.clearSnapshot(StorageService.java:1866)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
Source)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
Source)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
at com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown 
Source)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
at javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown 
Source)
at javax.management.remote.rmi.RMIConnectionImpl.access$200(Unknown 
Source)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown 
Source)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown 
Source)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
at sun.rmi.transport.Transport$1.run(Unknown Source)
at sun.rmi.transport.Transport$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown 
Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown 
Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4541) Replace Throttle with Guava's RateLimiter

2012-08-14 Thread JIRA
Michaël Figuière created CASSANDRA-4541:
---

 Summary: Replace Throttle with Guava's RateLimiter
 Key: CASSANDRA-4541
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4541
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.3
Reporter: Michaël Figuière


Guava 13 introduced {{RateLimiter}} [1], which should be a good replacement for 
{{o.a.c.utils.Throttle}}, which is used in Compaction and Streaming as a 
throughput limiter.

[1] 
[http://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/util/concurrent/RateLimiter.java]
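
For illustration, a minimal sketch of what a throughput limiter built on 
{{RateLimiter}} could look like; the class below and its one-permit-per-byte 
convention are assumptions for the example, not the actual replacement code:

{code}
import com.google.common.util.concurrent.RateLimiter;

public class ThroughputLimiterSketch
{
    public static void main(String[] args)
    {
        // Treat one permit as one byte and cap throughput at ~16 MB/s.
        RateLimiter limiter = RateLimiter.create(16 * 1024 * 1024);

        byte[] chunk = new byte[64 * 1024];
        for (int i = 0; i < 8; i++)
        {
            limiter.acquire(chunk.length); // blocks until enough permits accumulate
            // ... write or stream the chunk here ...
        }
    }
}
{code}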

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4541) Replace Throttle with Guava's RateLimiter

2012-08-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michaël Figuière updated CASSANDRA-4541:


Priority: Minor  (was: Major)

 Replace Throttle with Guava's RateLimiter
 -

 Key: CASSANDRA-4541
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4541
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.3
Reporter: Michaël Figuière
Priority: Minor

 Guava 13 introduced {{RateLimiter}} [1], which should be a good replacement 
 for {{o.a.c.utils.Throttle}}, which is used in Compaction and Streaming as a 
 throughput limiter.
 [1] 
 [http://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/util/concurrent/RateLimiter.java]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4481) Commitlog not replayed after restart - data lost

2012-08-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13436346#comment-13436346
 ] 

Ivo Meißner commented on CASSANDRA-4481:


I have also created the broken keyspace with a version prior to 1.1.2 (I'm 
pretty sure it was 1.1.1), so maybe there is a commitlog incompatibility... 
I also ran into some schema-change issues with that keyspace; maybe I 
destroyed the keyspace structure. 
But it would be nice to get some kind of error message if something goes wrong 
with the commitlogs. Everything else seems to work with the keyspace. You 
really don't notice until you wonder where the data is...

 Commitlog not replayed after restart - data lost
 

 Key: CASSANDRA-4481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Single node cluster on 64Bit CentOS
Reporter: Ivo Meißner
Priority: Critical

 When data is written to the commitlog and I restart the machine, all committed 
 data that has not been flushed to disk is lost. 
 In the startup logs it says that it replays the commitlog successfully, but 
 the data is not available afterwards. 
 When I open the commitlog file in an editor I can see the added data, but 
 after the restart it cannot be fetched from cassandra. 
 {code}
  INFO 09:59:45,362 Replaying 
 /var/myproject/cassandra/commitlog/CommitLog-83203377067.log
  INFO 09:59:45,476 Finished reading 
 /var/myproject/cassandra/commitlog/CommitLog-83203377067.log
  INFO 09:59:45,476 Log replay complete, 0 replayed mutations
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4558) Configurable transport in CF RecordReader / RecordWriter

2012-08-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4558:
--

Attachment: configurable_transport.patch

Patch enabling the transport factory class to be set in the hadoop job 
configuration. Added new properties cassandra.input.transport.factory.class and 
cassandra.output.transport.factory.class. TFramedTransportFactory is used by 
default if the properties are not set, so the old behaviour is preserved.
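
As a usage sketch, a job could select a custom factory like this (the property 
names are the ones added by the patch; com.example.SSLTransportFactory is a 
hypothetical placeholder class):

{code}
import org.apache.hadoop.conf.Configuration;

public class TransportFactoryConfigExample
{
    public static void main(String[] args)
    {
        Configuration conf = new Configuration();
        // Select a custom transport factory for reading and writing.
        conf.set("cassandra.input.transport.factory.class", "com.example.SSLTransportFactory");
        conf.set("cassandra.output.transport.factory.class", "com.example.SSLTransportFactory");
        // Leaving both properties unset falls back to TFramedTransportFactory,
        // preserving the old behaviour.
    }
}
{code}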

 Configurable transport in CF RecordReader / RecordWriter
 

 Key: CASSANDRA-4558
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4558
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Piotr Kołaczkowski
 Attachments: configurable_transport.patch


 Currently RecordReaders and RecordWriters use hardcoded TFramedTransport. In 
 order to use other transports, e.g. SSL transport, allow for setting custom 
 transport class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (CASSANDRA-4558) Configurable transport in CF RecordReader / RecordWriter

2012-08-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13437753#comment-13437753
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-4558 at 8/20/12 8:11 PM:


I attach a patch enabling the transport factory class to be set in the hadoop 
job configuration. It adds new properties cassandra.input.transport.factory.class 
and cassandra.output.transport.factory.class. TFramedTransportFactory is used 
by default if the properties are not set, so the old behaviour is preserved. The 
modified code has been tested using the PIG demo in DSE.

  was (Author: pkolaczk):
Patch enabling the transport factory class to be set in the hadoop job 
configuration. Added new properties cassandra.input.transport.factory.class and 
cassandra.output.transport.factory.class. TFramedTransportFactory is used by 
default if the properties are not set, so the old behaviour is preserved.
  
 Configurable transport in CF RecordReader / RecordWriter
 

 Key: CASSANDRA-4558
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4558
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Piotr Kołaczkowski
 Attachments: configurable_transport.patch


 Currently RecordReaders and RecordWriters use hardcoded TFramedTransport. In 
 order to use other transports, e.g. SSL transport, allow for setting custom 
 transport class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (CASSANDRA-4558) Configurable transport in CF RecordReader / RecordWriter

2012-08-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13437753#comment-13437753
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-4558 at 8/20/12 8:12 PM:


I attach a patch enabling the transport factory class to be set in the hadoop 
job configuration. It adds new properties cassandra.input.transport.factory.class 
and cassandra.output.transport.factory.class. TFramedTransportFactory is used 
by default if the properties are not set, so the old behaviour is preserved. The 
modified code has been tested using the PIG demo in DSE.

Patch generated against 1.1 branch (1.1.4), intended for 1.1.

  was (Author: pkolaczk):
I attach a patch enabling the transport factory class to be set in the hadoop 
job configuration. It adds new properties cassandra.input.transport.factory.class 
and cassandra.output.transport.factory.class. TFramedTransportFactory is used 
by default if the properties are not set, so the old behaviour is preserved. The 
modified code has been tested using the PIG demo in DSE.
  
 Configurable transport in CF RecordReader / RecordWriter
 

 Key: CASSANDRA-4558
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4558
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Piotr Kołaczkowski
 Attachments: configurable_transport.patch


 Currently RecordReaders and RecordWriters use hardcoded TFramedTransport. In 
 order to use other transports, e.g. SSL transport, allow for setting custom 
 transport class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (CASSANDRA-4558) Configurable transport in CF RecordReader / RecordWriter

2012-08-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13437753#comment-13437753
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-4558 at 8/21/12 12:20 AM:
-

I attach a patch enabling the transport factory class to be set in the hadoop 
job configuration. It adds new properties cassandra.input.transport.factory.class 
and cassandra.output.transport.factory.class. TFramedTransportFactory is used 
by default if the properties are not set, so the old behaviour is preserved. The 
modified code has been tested using the PIG demo in DSE.

Patch generated against 1.1 branch (1.1.4).

  was (Author: pkolaczk):
I attach a patch enabling the transport factory class to be set in the hadoop 
job configuration. It adds new properties cassandra.input.transport.factory.class 
and cassandra.output.transport.factory.class. TFramedTransportFactory is used 
by default if the properties are not set, so the old behaviour is preserved. The 
modified code has been tested using the PIG demo in DSE.

Patch generated against 1.1 branch (1.1.4), intended for 1.1.
  
 Configurable transport in CF RecordReader / RecordWriter
 

 Key: CASSANDRA-4558
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4558
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Piotr Kołaczkowski
 Fix For: 1.1.5

 Attachments: configurable_transport.patch


 Currently RecordReaders and RecordWriters use hardcoded TFramedTransport. In 
 order to use other transports, e.g. SSL transport, allow for setting custom 
 transport class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4245) Provide a UTF8Type (case insensitive) comparator

2012-08-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438694#comment-13438694
 ] 

André Cruz commented on CASSANDRA-4245:
---

I'm also interested in a UTF-8 comparator that orders columns alphabetically. 
In fact, I was expecting this to be the default behaviour in Cassandra until it 
bit me. For example, with 3 columns: André, Zeus and Ándré.

I was expecting:
André
Ándré
Zeus

The result was:
André
Zeus
Ándré

This is what's being discussed in this issue, right?
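
A small sketch of the two orderings being discussed: Java's natural string 
order compares raw char values (similar to a bytewise UTF8Type comparison), 
while {{java.text.Collator}} yields the alphabetic order expected above. This 
is only an illustration, not Cassandra code:

{code}
import java.text.Collator;
import java.util.Arrays;
import java.util.Locale;

public class OrderingExample
{
    public static void main(String[] args)
    {
        String[] names = { "André", "Zeus", "Ándré" };

        String[] byCharValue = names.clone();
        Arrays.sort(byCharValue); // André, Zeus, Ándré ('Á' sorts after 'Z')
        System.out.println(Arrays.toString(byCharValue));

        String[] alphabetic = names.clone();
        Arrays.sort(alphabetic, Collator.getInstance(Locale.FRENCH));
        System.out.println(Arrays.toString(alphabetic)); // André, Ándré, Zeus
    }
}
{code}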

 Provide a UTF8Type (case insensitive) comparator
 ---

 Key: CASSANDRA-4245
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4245
 Project: Cassandra
  Issue Type: New Feature
Reporter: Ertio Lew
Assignee: Aaron Morton
Priority: Minor

 It is a common use case to use a bunch of entity names as column names and then 
 use the row as a search index, using search by range. For such use cases and 
 others, it is useful to have a UTF8 comparator that provides case-insensitive 
 ordering of columns.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-08-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4049:
--

Attachment: (was: pluggable_custom_components.patch)

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.5

 Attachments: pluggable_custom_components-1.1.4.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-08-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443899#comment-13443899
 ] 

Piotr Kołaczkowski commented on CASSANDRA-4049:
---

Ok, rebased. Actually only line numbers changed, there was no conflict.

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.5

 Attachments: pluggable_custom_components-1.1.4.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-08-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4049:
--

Attachment: pluggable_custom_components-1.1.4.patch

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.5

 Attachments: pluggable_custom_components-1.1.4.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4571) Strange permanent socket descriptors increasing leads to Too many open files

2012-08-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444190#comment-13444190
 ] 

Per Otterström commented on CASSANDRA-4571:
---

To verify, we started from scratch with a new installation on 3 servers, and 
the FD leak is still there. So, with our particular setup, we are able to 
reproduce the bug.

These are the characteristics of our setup:
- We have one single CF.
- Rows are inserted in batches.
- Rows are read, updated and deleted in a random-like pattern.
- The FD leak seems to start during heavy read load (but can appear during mixed 
read/write/delete operations as well).
- We are using Hector to access this single CF.
- Cassandra configuration is basically standard.

The FD leak does not show up immediately; it appears once there are ~60M rows in 
the CF.


 Strange permanent socket descriptors increasing leads to Too many open files
 --

 Key: CASSANDRA-4571
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4571
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1, 1.1.2, 1.1.3
 Environment: CentOS 5.8 Linux 2.6.18-308.13.1.el5 #1 SMP Tue Aug 21 
 17:10:18 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux. 
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03, mixed mode)
Reporter: Serg Shnerson
Priority: Critical

 On the two-node cluster we found a strange, steady increase in socket 
 descriptors. lsof -n | grep java shows many rows like
 java   8380 cassandra  113r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  114r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  115r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  116r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  117r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  118r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  119r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  120r unix 0x8101a374a080
 938348482 socket
  The number of these rows is constantly increasing. After about 24 hours this 
 situation leads to an error.
 We use the PHPCassa client. Load is not that high (around ~50kb/s on write). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4588) CQL COPY ... FROM command is slow

2012-08-30 Thread JIRA
Piotr Kołaczkowski created CASSANDRA-4588:
-

 Summary: CQL COPY ... FROM command is slow
 Key: CASSANDRA-4588
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4588
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.4
 Environment: Ubuntu Linux 12.04, kernel 3.4.0
Reporter: Piotr Kołaczkowski


1. Created a csv file with 10,000,000 rows and two integer columns; saving it 
to an SSD disk took a few seconds, and the file is 184 MB. 
2. Started a single local cassandra node from fresh, empty data and commit log 
dirs.
3. Created a keyspace with simple strategy and RF=1.
4. Loading the file with the COPY ... FROM command - it has been over 15 
minutes now and it is still loading.

top reports about 50% CPU usage for java (cassandra) and 50% for python.
I/O is almost idle, iowait below 0.1%. 



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4639) Incorrect counter values

2012-09-10 Thread JIRA
Bartłomiej Romański created CASSANDRA-4639:
--

 Summary: Incorrect counter values
 Key: CASSANDRA-4639
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4639
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.2
 Environment: We've got a production 21-node cluster with 3 virtual 
data centers. The keyspace that contains the counter column families has a 
replication factor of 3 on DC1 and 1 on DC3. DC1 is using SSD drives, DC3 
spinning hard drives. We are using Ubuntu Server as the OS. Machines have 
24GB of RAM. 
Reporter: Bartłomiej Romański
Priority: Critical


Since yesterday almost all counters have been incorrect, usually about 4-5 times 
higher than expected. In the logs we get this message:

ERROR [MutationStage:15] 2012-09-10 13:47:13,280 CounterContext.java (line 381) 
invalid counter shard detected; (6de8e940-dd23-11e1--5233df6faaff, 7, 242) 
and (6de8e940-dd23-11e1--5233df6faaff, 7, 392) differ only in count; will 
pick highest to self-heal; this indicates a bug or corruption generated a bad 
counter shard

every couple of seconds.

This cluster was running without any serious problems for at least 2 months.

Any ideas?
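
For what it's worth, the quoted log line describes shards of the form (counter 
id, clock, count). A toy illustration of the reconciliation rule it mentions 
(only a sketch of the quoted behaviour, not Cassandra's CounterContext code):

{code}
public class ShardReconcileSketch
{
    // Reconcile two shards that share the same counter id.
    static long reconcile(long clockA, long countA, long clockB, long countB)
    {
        if (clockA != clockB)
            return clockA > clockB ? countA : countB; // higher clock wins
        // Equal clocks but different counts is the "differ only in count"
        // case from the log: pick the highest count to self-heal.
        return Math.max(countA, countB);
    }

    public static void main(String[] args)
    {
        // From the log line above: (..., 7, 242) vs (..., 7, 392) -> 392 is kept.
        System.out.println(reconcile(7, 242, 7, 392));
    }
}
{code}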


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-09-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13452835#comment-13452835
 ] 

Piotr Kołaczkowski commented on CASSANDRA-4049:
---

Sorry for the late reply, I've been busy fixing things in DSE.

{quote}
{noformat}
catch (Exception e)
{
    if (!snapshots.equals(name) && !backups.equals(name)
        && !name.contains(".json"))
        logger.warn("Invalid file '{}' in data directory {}.", name, dir);
    return null;
}
{noformat}
What was the reasoning behind this? Not saying it's wrong to remove it, but I 
want to make sure we understand what it was supposed to do, before deciding we 
don't need it.
{quote}

This check was there before. Actually, because this check was firing off 
warnings, we created this ticket ;)

-
 
{noformat}
catch (IOException e)
{
    Set<Component> components = Sets.newHashSetWithExpectedSize(Component.TYPES.size());
    for (Component.Type componentType : Component.TYPES)
    {
        Component component = new Component(componentType);
        if (new File(desc.filenameFor(component)).exists())
            components.add(component);
    }

    saveTOC(desc, components);
    return components;
}
{noformat}

This one is for backwards compatibility. If we find an SSTable without a TOC 
component (from a previous version of C*), we just do what we always did - loop 
through all C* components. 


{quote}
Use FileUtils.closeQuietly{quote}

Oh, yeah. I was looking for IOUtils.closeQuietly, and couldn't find it. Thanks, 
that is what I needed!

{quote}
But probably simpler to just use Guava's Files.readLines{quote}

Ok, I'll fix it.


{quote}
Do we not know what components are necessary at construction time? Would 
strongly prefer an immutable set. Adding extra parameters to SSTW to do this is 
fine.{quote}

We do, but there is a chicken-and-egg problem here. CompactionStrategy knows, 
but CompactionStrategy needs a set of SSTables to be created, and to create 
SSTable readers you need to know the components. That is why I decided on a 
TOC component, which allows reading the list of components at SSTable 
construction time.
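
As a rough sketch of that TOC idea (the readTOC/saveTOC names follow the 
snippets in this thread, but the one-component-name-per-line layout is an 
assumption for illustration):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Set;

public class TocSketch
{
    // Read the component names stored next to the sstable, one per line.
    static Set<String> readTOC(Path tocFile) throws IOException
    {
        return new HashSet<>(Files.readAllLines(tocFile, StandardCharsets.UTF_8));
    }

    // Persist the component list so readers can discover custom components
    // (e.g. Tombstones.db) at SSTable construction time.
    static void saveTOC(Path tocFile, Set<String> components) throws IOException
    {
        Files.write(tocFile, new ArrayList<>(components), StandardCharsets.UTF_8);
    }
}
{code}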

The workflow of creating a new SSTable is currently as follows:

1. memtable is flushed to disk, with C* only components
2. compaction strategy is notified that a new sstable was created and gets an 
SSTableReader (with only default components)
3. compaction strategy adds its custom components to it; in order to do it, it 
has to read some components of SSTable (e.g. access the index or data file)

In order to make SSTableReader immutable, we would have to ask the currently 
installed compaction strategy for the custom component list somewhere in the 
middle of this process, before creating the SSTableReader. That is slightly 
more complex than what we have now (we would have to change the CS interface), 
but retaining full immutability is probably worth it.

{noformat}
public synchronized void addCustomComponent(Component component)
{noformat}

You are right that synchronized is wrong here. 

Thanks for the great suggestions, [~jbellis]. I'll look into improving my patch 
as soon as I'm done with the tickets I've got in the waiting queue for DSE 3.0.


 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.6

 Attachments: pluggable_custom_components-1.1.4.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-09-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13452835#comment-13452835
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-4049 at 9/11/12 8:13 PM:


Sorry for the late reply, I've been busy fixing things in DSE.

{quote}
{noformat}
catch (Exception e)
{
    if (!snapshots.equals(name) && !backups.equals(name)
        && !name.contains(".json"))
        logger.warn("Invalid file '{}' in data directory {}.", name, dir);
    return null;
}
{noformat}
What was the reasoning behind this? Not saying it's wrong to remove it, but I 
want to make sure we understand what it was supposed to do, before deciding we 
don't need it.
{quote}

--This check was there before. Actually because this check was firing out 
warnings, we created this ticket ;)--
Oh, just noticed, you are referring to removed code. I think this one is not 
needed any more - it was there to warn about unknown files in the data 
directory.
I haven't removed the warning completely - just moved it elsewhere. Here it is 
(line 271):

{noformat}
if (!new File(descriptor.filenameFor(component)).exists())
    logger.error("Missing component: " + descriptor.filenameFor(component));
{noformat}

-
 
{noformat}
catch (IOException e)
{
    Set<Component> components = Sets.newHashSetWithExpectedSize(Component.TYPES.size());
    for (Component.Type componentType : Component.TYPES)
    {
        Component component = new Component(componentType);
        if (new File(desc.filenameFor(component)).exists())
            components.add(component);
    }

    saveTOC(desc, components);
    return components;
}
{noformat}

This one is for backwards compatibility. If we find an SSTable without a TOC 
component (from a previous version of C*), we just do what we always did - loop 
through all C* components. 


{quote}
Use FileUtils.closeQuietly{quote}

Oh, yeah. I was looking for IOUtils.closeQuietly, and couldn't find it. Thanks, 
that is what I needed!

{quote}
But probably simpler to just use Guava's Files.readLines{quote}

Ok, I'll fix it.


{quote}
Do we not know what components are necessary at construction time? Would 
strongly prefer an immutable set. Adding extra parameters to SSTW to do this is 
fine.{quote}

We do, but there is a chicken-and-egg problem here. CompactionStrategy knows, 
but CompactionStrategy needs a set of SSTables to be created, and to create 
SSTable readers you need to know the components. That is why I decided on a 
TOC component, which allows reading the list of components at SSTable 
construction time.

The workflow of creating a new SSTable is currently as follows:

1. memtable is flushed to disk, with C* only components
2. compaction strategy is notified that a new sstable was created and gets an 
SSTableReader (with only default components)
3. compaction strategy adds its custom components to it; in order to do it, it 
has to read some components of SSTable (e.g. access the index or data file)

In order to make SSTableReader immutable, we would have to ask the currently 
installed compaction strategy for the custom component list somewhere in the 
middle of this process, before creating the SSTableReader. That is slightly 
more complex than what we have now (we would have to change the CS interface), 
but retaining full immutability is probably worth it.

{noformat}
public synchronized void addCustomComponent(Component component)
{noformat}

You are right that synchronized is wrong here. 

Thanks for the great suggestions, [~jbellis]. I'll look into improving my patch 
as soon as I'm done with the tickets I've got in the waiting queue for DSE 3.0.

  was (Author: pkolaczk):
Sorry for the late reply, I've been busy fixing things in DSE.

{quote}
{noformat}
catch (Exception e)
{
    if (!snapshots.equals(name) && !backups.equals(name)
        && !name.contains(".json"))
        logger.warn("Invalid file '{}' in data directory {}.", name, dir);
    return null;
}
{noformat}
What was the reasoning behind this? Not saying it's wrong to remove it, but I 
want to make sure we understand what it was supposed to do, before deciding we 
don't need it.
{quote}

This check was there before. Actually, because this check was firing off 
warnings, we created this ticket ;)

-
 
{noformat}
catch (IOException e)
{
    Set<Component> components = Sets.newHashSetWithExpectedSize(Component.TYPES.size());
    for (Component.Type componentType : Component.TYPES)
    {
        Component component = new Component(componentType);
        if (new File(desc.filenameFor(component)).exists

[jira] [Comment Edited] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-09-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13452835#comment-13452835
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-4049 at 9/11/12 8:16 PM:


Sorry for the late reply, I've been busy fixing things in DSE.

{quote}
{noformat}
catch (Exception e)
{
    if (!snapshots.equals(name) && !backups.equals(name)
        && !name.contains(".json"))
        logger.warn("Invalid file '{}' in data directory {}.", name, dir);
    return null;
}
{noformat}
What was the reasoning behind this? Not saying it's wrong to remove it, but I 
want to make sure we understand what it was supposed to do, before deciding we 
don't need it.
{quote}

--This check was there before. Actually because this check was firing out 
warnings, we created this ticket ;)--
Oh, just noticed, you are referring to removed code. [~slebresne] said he could 
live without them. But if you insist, I can think about bringing them back. I 
don't know how to do it yet, but if it is important, I can try.


{noformat}
if (!new File(descriptor.filenameFor(component)).exists())
    logger.error("Missing component: " + descriptor.filenameFor(component));
{noformat}

-
 
{noformat}
catch (IOException e)
{
    Set<Component> components = Sets.newHashSetWithExpectedSize(Component.TYPES.size());
    for (Component.Type componentType : Component.TYPES)
    {
        Component component = new Component(componentType);
        if (new File(desc.filenameFor(component)).exists())
            components.add(component);
    }

    saveTOC(desc, components);
    return components;
}
{noformat}

This one is for backwards compatibility. If we find an SSTable without a TOC 
component (from a previous version of C*), we just do what we always did - loop 
through all C* components. 


{quote}
Use FileUtils.closeQuietly{quote}

Oh, yeah. I was looking for IOUtils.closeQuietly, and couldn't find it. Thanks, 
that is what I needed!

{quote}
But probably simpler to just use Guava's Files.readLines{quote}

Ok, I'll fix it.


{quote}
Do we not know what components are necessary at construction time? Would 
strongly prefer an immutable set. Adding extra parameters to SSTW to do this is 
fine.{quote}

We do, but there is a chicken-and-egg problem here. CompactionStrategy knows, 
but CompactionStrategy needs a set of SSTables to be created, and to create 
SSTable readers you need to know the components. That is why I decided on a 
TOC component, which allows reading the list of components at SSTable 
construction time.

The workflow of creating a new SSTable is currently as follows:

1. memtable is flushed to disk, with C* only components
2. compaction strategy is notified that a new sstable was created and gets an 
SSTableReader (with only default components)
3. compaction strategy adds its custom components to it; in order to do it, it 
has to read some components of SSTable (e.g. access the index or data file)

In order to make SSTableReader immutable, we would have to ask the currently 
installed compaction strategy for the custom component list somewhere in the 
middle of this process, before creating the SSTableReader. That is slightly 
more complex than what we have now (we would have to change the CS interface), 
but retaining full immutability is probably worth it.

{noformat}
public synchronized void addCustomComponent(Component component)
{noformat}

You are right that synchronized is wrong here. 

Thanks for the great suggestions, [~jbellis]. I'll look into improving my patch 
as soon as I'm done with the tickets I've got in the waiting queue for DSE 3.0.

  was (Author: pkolaczk):
Sorry for the late reply, I've been busy fixing things in DSE.

{quote}
{noformat}
catch (Exception e)
{
    if (!snapshots.equals(name) && !backups.equals(name)
        && !name.contains(".json"))
        logger.warn("Invalid file '{}' in data directory {}.", name, dir);
    return null;
}
{noformat}
What was the reasoning behind this? Not saying it's wrong to remove it, but I 
want to make sure we understand what it was supposed to do, before deciding we 
don't need it.
{quote}

--This check was there before. Actually because this check was firing out 
warnings, we created this ticket ;)--
Oh, just noticed, you are referring to removed code. I think this one is not 
needed any more - it was there to warn about unknown files in the data 
directory.
I haven't removed the warning completely - just moved it elsewhere. Here it is 
(line 271):

{noformat}
if (!new File(descriptor.filenameFor(component)).exists())
    logger.error("Missing component: " + descriptor.filenameFor(component));
{noformat

[jira] [Comment Edited] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-09-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13452835#comment-13452835
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-4049 at 9/11/12 8:17 PM:


Sorry for the late reply, I've been busy fixing things in DSE.

{quote}
{noformat}
catch (Exception e)
{
    if (!snapshots.equals(name) && !backups.equals(name)
        && !name.contains(".json"))
        logger.warn("Invalid file '{}' in data directory {}.", name, dir);
    return null;
}
{noformat}
What was the reasoning behind this? Not saying it's wrong to remove it, but I 
want to make sure we understand what it was supposed to do, before deciding we 
don't need it.
{quote}

--This check was there before. Actually because this check was firing out 
warnings, we created this ticket ;)--
Oh, just noticed, you are referring to removed code. [~slebresne] said he could 
live without them. But if you insist, I can think about bringing them back. I 
don't know how to do it yet, but if it is important, I can try.

On the other hand, I left the warning about missing components (which is IMHO 
much more important than some spurious components):
{noformat}
if (!new File(descriptor.filenameFor(component)).exists())
    logger.error("Missing component: " + descriptor.filenameFor(component));
{noformat}

-
 
{noformat}
catch (IOException e)
{
    Set<Component> components = Sets.newHashSetWithExpectedSize(Component.TYPES.size());
    for (Component.Type componentType : Component.TYPES)
    {
        Component component = new Component(componentType);
        if (new File(desc.filenameFor(component)).exists())
            components.add(component);
    }

    saveTOC(desc, components);
    return components;
}
{noformat}

This one is for backwards compatibility. If we find an SSTable without a TOC 
component (from a previous version of C*), we just do what we always did - loop 
through all C* components. 


{quote}
Use FileUtils.closeQuietly{quote}

Oh, yeah. I was looking for IOUtils.closeQuietly, and couldn't find it. Thanks, 
that is what I needed!

{quote}
But probably simpler to just use Guava's Files.readLines{quote}

Ok, I'll fix it.


{quote}
Do we not know what components are necessary at construction time? Would 
strongly prefer an immutable set. Adding extra parameters to SSTW to do this is 
fine.{quote}

We do, but there is a chicken-and-egg problem here. CompactionStrategy knows, 
but CompactionStrategy needs a set of SSTables to be created, and to create 
SSTable readers you need to know the components. That is why I decided on a 
TOC component, which allows reading the list of components at SSTable 
construction time.

The workflow of creating a new SSTable is currently as follows:

1. memtable is flushed to disk, with C* only components
2. compaction strategy is notified that a new sstable was created and gets an 
SSTableReader (with only default components)
3. compaction strategy adds its custom components to it; in order to do it, it 
has to read some components of SSTable (e.g. access the index or data file)

In order to make SSTableReader immutable, we would have to ask the currently 
installed compaction strategy for the custom component list somewhere in the 
middle of this process, before creating the SSTableReader. That is slightly 
more complex than what we have now (we would have to change the CS interface), 
but retaining full immutability is probably worth it.

{noformat}
public synchronized void addCustomComponent(Component component)
{noformat}

You are right that synchronized is wrong here. 

Thanks for the great suggestions, [~jbellis]. I'll look into improving my patch 
as soon as I'm done with the tickets I've got in the waiting queue for DSE 3.0.

  was (Author: pkolaczk):
Sorry for the late reply, I've been busy fixing things in DSE.

{quote}
{noformat}
catch (Exception e)
{
    if (!snapshots.equals(name) && !backups.equals(name)
        && !name.contains(".json"))
        logger.warn("Invalid file '{}' in data directory {}.", name, dir);
    return null;
}
{noformat}
What was the reasoning behind this? Not saying it's wrong to remove it, but I 
want to make sure we understand what it was supposed to do, before deciding we 
don't need it.
{quote}

--This check was there before. Actually because this check was firing out 
warnings, we created this ticket ;)--
Oh, just noticed, you are referring to removed code. [~slebresne] said he could 
live without them. But if you insist, I can think about bringing them back. I 
don't know how to do it yet, but if it is important, I can try.


{noformat}
if (!new File(descriptor.filenameFor(component)).exists

[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected

2012-09-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457504#comment-13457504
 ] 

Bartłomiej Romański commented on CASSANDRA-4417:


Is it possible to predict how dangerous this bug could be? We are already 
experiencing very serious problems with CASSANDRA-4639. Our counter values 
suddenly became a few times higher than expected. As you can imagine, this is 
a disaster from the business point of view. We are already seriously thinking 
about going back to SQL databases :/ I wonder how (and if) this bug (and 
possibly other counter-related bugs) can affect us. We rely heavily on 
counters.

Can this bug possibly lead to incorrect counter values? Temporarily or 
permanently - will running repair fix it?

How incorrect could counter values be? Losing a couple of increments 
immediately preceding a node failure is probably acceptable in most cases. Is 
it possible to lose more increments? Or to end up with completely incorrect 
counter values, as in CASSANDRA-4639?

What exactly would happen after hitting this bug? Should running repair fix 
it? Would the self-healing mechanism actually make the counter consistent 
again? Or will we get these error messages over and over?

Sorry for writing a comment full of questions, but I've got very limited 
knowledge of Cassandra internals. I'll be very thankful if someone could 
address the questions above.
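
For reference, the self-heal rule named in the error message quoted below can 
be sketched as follows; the Shard layout and names are assumptions for 
illustration, not Cassandra's actual code:

{noformat}
import java.util.UUID;

// A shard is sketched as a (counterId, clock, count) triple. When two
// shards differ only in count, the highest count is kept ("self-heal").
final class Shard
{
    final UUID counterId;
    final long clock;
    final long count;

    Shard(UUID counterId, long clock, long count)
    {
        this.counterId = counterId;
        this.clock = clock;
        this.count = count;
    }

    static Shard merge(Shard a, Shard b)
    {
        assert a.counterId.equals(b.counterId);
        if (a.clock != b.clock)
            return a.clock > b.clock ? a : b;   // normal case: newer clock wins
        // same id and clock but different count: a bug or corruption
        // produced a bad shard, so pick the highest count to self-heal
        return a.count >= b.count ? a : b;
    }
}
{noformat}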

 invalid counter shard detected 
 ---

 Key: CASSANDRA-4417
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4417
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: Amazon Linux
Reporter: Senthilvel Rangaswamy

 Seeing errors like these:
 2012-07-06_07:00:27.22662 ERROR 07:00:27,226 invalid counter shard detected; 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 13) and 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 1) differ only in count; will pick 
 highest to self-heal; this indicates a bug or corruption generated a bad 
 counter shard
 What does it mean ?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4673) Compaction of HintsColumnFamily gets stuck

2012-09-17 Thread JIRA
Bartłomiej Romański created CASSANDRA-4673:
--

 Summary: Compaction of HintsColumnFamily gets stuck
 Key: CASSANDRA-4673
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4673
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.1
 Environment: We've got a 24-node cluster with 3 virtual data centers. 
We've got 7 nodes with SSD drives. We are operating under a very heavy 
read/write load. We typically see 90% CPU usage on our machines.

We are using 1.1.2 - why is this version not listed in the 'Affects Version' 
drop-down?
Reporter: Bartłomiej Romański


On some nodes the compaction of HintsColumnFamily got stuck. Here is a typical 
output of 'nodetool compactionstats':

pending tasks: 1
          compaction type   keyspace       column family   bytes compacted   bytes total   progress
               Compaction     system   HintsColumnFamily         346205828     346909662     99.80%
Active compaction remaining time :   0h00m00s

Rebooting a node does not help. The compaction starts immediately after 
booting and gets stuck at the same point.

In case it is related: we are also experiencing the problem described in 
CASSANDRA-4639.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected

2012-09-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457511#comment-13457511
 ] 

Bartłomiej Romański commented on CASSANDRA-4417:


In the previous comment I meant to point directly to CASSANDRA-4436 - I mixed 
up the numbers.

One more thing: could hinted handoff somehow be related to this issue? We've 
got a problem with it (CASSANDRA-4673) which was discovered at (more or less) 
the same time as our counter problems. Is there a possibility that sending a 
hinted handoff a few times ends up incrementing counters a few times?


 invalid counter shard detected 
 ---

 Key: CASSANDRA-4417
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4417
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: Amazon Linux
Reporter: Senthilvel Rangaswamy

 Seeing errors like these:
 2012-07-06_07:00:27.22662 ERROR 07:00:27,226 invalid counter shard detected; 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 13) and 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 1) differ only in count; will pick 
 highest to self-heal; this indicates a bug or corruption generated a bad 
 counter shard
 What does it mean ?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected

2012-09-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13457528#comment-13457528
 ] 

Bartłomiej Romański commented on CASSANDRA-4417:


And one last comment: could this be related to CASSANDRA-4071? If I understand 
the description correctly, any topology change (adding a node, moving a node) 
while a counter is spread across more than one sstable can result in the 
"invalid counter shard detected" error message during reads. Am I right?

 invalid counter shard detected 
 ---

 Key: CASSANDRA-4417
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4417
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: Amazon Linux
Reporter: Senthilvel Rangaswamy

 Seeing errors like these:
 2012-07-06_07:00:27.22662 ERROR 07:00:27,226 invalid counter shard detected; 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 13) and 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 1) differ only in count; will pick 
 highest to self-heal; this indicates a bug or corruption generated a bad 
 counter shard
 What does it mean ?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4683) running nodetool cleanup throws an exception

2012-09-18 Thread JIRA
Bartłomiej Romański created CASSANDRA-4683:
--

 Summary: running nodetool cleanup throws an exception
 Key: CASSANDRA-4683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4683
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.5
Reporter: Bartłomiej Romański


We've just upgraded to 1.1.5 from 1.1.2. 

After that, running nodetool cleanup, we got the following error:

Exception in thread "main" java.lang.AssertionError
at 
org.apache.cassandra.db.SystemTable.getCurrentLocalNodeId(SystemTable.java:462)
at 
org.apache.cassandra.utils.NodeId$LocalNodeIdHistory.<init>(NodeId.java:195)
at org.apache.cassandra.utils.NodeId$LocalIds.<clinit>(NodeId.java:43)
at org.apache.cassandra.utils.NodeId.localIds(NodeId.java:50)
at org.apache.cassandra.utils.NodeId.getLocalId(NodeId.java:55)
at 
org.apache.cassandra.utils.NodeId$OneShotRenewer.<init>(NodeId.java:175)
at 
org.apache.cassandra.service.StorageService.forceTableCleanup(StorageService.java:1769)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1447)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:89)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1292)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1380)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:812)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

Any ideas?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4683) running nodetool cleanup throws an exception

2012-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13458808#comment-13458808
 ] 

Bartłomiej Romański commented on CASSANDRA-4683:


This is a 1.1.5 node!

br@b5:~$ cassandra -v
xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms12G -Xmx12G -Xmn400M 
-XX:+HeapDumpOnOutOfMemoryError -Xss160k
1.1.5
br@b5:~$ nodetool cleanup
Exception in thread "main" java.lang.NoClassDefFoundError: Could not 
initialize class org.apache.cassandra.utils.NodeId$LocalIds
at org.apache.cassandra.utils.NodeId.localIds(NodeId.java:50)
at org.apache.cassandra.utils.NodeId.getLocalId(NodeId.java:55)
at 
org.apache.cassandra.utils.NodeId$OneShotRenewer.<init>(NodeId.java:175)
at 
org.apache.cassandra.service.StorageService.forceTableCleanup(StorageService.java:1769)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

The node has been rebooted after the upgrade, so I'm pretty sure that the 
running process is also 1.1.5. Is there a way (e.g. JMX or nodetool) to query 
a live instance about its version?


 running nodetool cleanup throws an exception
 

 Key: CASSANDRA-4683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4683
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.5
Reporter: Bartłomiej Romański

 We've just upgraded to 1.1.5 from 1.1.2. 
 After that, running nodetool cleanup, we got the following error:
 Exception in thread "main" java.lang.AssertionError
 at 
 org.apache.cassandra.db.SystemTable.getCurrentLocalNodeId(SystemTable.java:462)
 at 
 org.apache.cassandra.utils.NodeId$LocalNodeIdHistory.<init>(NodeId.java:195)
 at org.apache.cassandra.utils.NodeId$LocalIds.<clinit>(NodeId.java:43)
 at org.apache.cassandra.utils.NodeId.localIds(NodeId.java:50)
 at org.apache.cassandra.utils.NodeId.getLocalId(NodeId.java:55)
 at 
 org.apache.cassandra.utils.NodeId$OneShotRenewer.<init>(NodeId.java:175)
 at 
 org.apache.cassandra.service.StorageService.forceTableCleanup(StorageService.java:1769)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1447)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:89)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1292)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1380)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:812)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java

[jira] [Created] (CASSANDRA-4693) CQL Protocol should allow multiple PreparedStatements to be atomically executed

2012-09-20 Thread JIRA
Michaël Figuière created CASSANDRA-4693:
---

 Summary: CQL Protocol should allow multiple PreparedStatements to 
be atomically executed
 Key: CASSANDRA-4693
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4693
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Michaël Figuière


Currently the only way to insert multiple records on the same partition key, 
atomically and using PreparedStatements, is to use a CQL BATCH command. 
Unfortunately, when doing so, the number of records to be inserted must be 
known before preparing the statement, which is rarely the case. Thus the only 
workaround, if one wants to keep atomicity, is currently to use unprepared 
statements, which means sending a bulk of CQL strings and is fairly 
inefficient.

Therefore the CQL protocol should allow clients to send multiple 
PreparedStatements to be executed with guarantees and semantics similar to 
those of the CQL BATCH command.
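
For illustration, a minimal sketch of the unprepared-batch workaround 
described above; the table, columns and Row class are made up:

{noformat}
import java.util.List;

final class BatchBuilder
{
    // Row is a hypothetical value class with key, seq and payload fields.
    static final class Row
    {
        String key; long seq; String payload;
    }

    // Build one unprepared BATCH string for a variable number of rows.
    static String buildBatch(List<Row> rows)
    {
        StringBuilder batch = new StringBuilder("BEGIN BATCH\n");
        for (Row r : rows)
            batch.append(String.format(
                "INSERT INTO events (key, seq, payload) VALUES ('%s', %d, '%s');%n",
                r.key, r.seq, r.payload));
        batch.append("APPLY BATCH;");
        // the whole string is sent unprepared, so the server re-parses it
        // on every execution - exactly the inefficiency described above
        return batch.toString();
    }
}
{noformat}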

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-09-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4049:
--

Attachment: (was: pluggable_custom_components-1.1.4.patch)

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.6


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-09-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4049:
--

Attachment: pluggable_custom_components-1.1.5.patch

Improved patch:

1. Code style - now uses FileUtils and Guava's Files.readLines.
2. Appends to the TOC instead of overwriting it.
3. Access to the TOC file is protected with a ReadWriteLock (see the sketch 
below).
4. The components collection is a CopyOnWriteArraySet.
5. No synchronized.
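
A minimal sketch of items 2-4, assuming a plain-text TOC file; the class and 
its field names are illustrative, not the patch itself:

{noformat}
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class ComponentSet
{
    // concurrent set: readers never block, no synchronized needed
    private final CopyOnWriteArraySet<String> components = new CopyOnWriteArraySet<String>();
    private final ReadWriteLock tocLock = new ReentrantReadWriteLock();
    private final File tocFile;

    ComponentSet(File tocFile)
    {
        this.tocFile = tocFile;
    }

    void addCustomComponent(String name) throws IOException
    {
        if (!components.add(name))
            return;                                   // already listed
        tocLock.writeLock().lock();
        try
        {
            FileWriter out = new FileWriter(tocFile, true);  // append, don't overwrite
            try { out.write(name + "\n"); } finally { out.close(); }
        }
        finally
        {
            tocLock.writeLock().unlock();
        }
    }
}
{noformat}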

I really tried to make the components collection immutable first, but that 
unfortunately opened a whole can of worms related to:
1. SSTable reference counting (two SSTableReader objects sharing data) and 
deletion
2. Adding custom components from inside a notifier (e.g. on memtable flush)
3. Rebuilding the interval tree (that one was easy to fix, though)

I just didn't want to introduce subtle bugs. SSTable and SSTableReader aren't 
immutable anyway.

I performed stress testing with the Cassandra FileSystem stress test tool in 
DSE and our custom CFS strategy - it works fine.

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.6

 Attachments: pluggable_custom_components-1.1.5.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4683) running nodetool cleanup throws an exception

2012-09-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13461439#comment-13461439
 ] 

Bartłomiej Romański commented on CASSANDRA-4683:


My logs show that all my nodes are 1.1.5:

 br@c2:~$ sudo grep 'Cassandra version' /var/log/cassandra/system.log
 INFO [main] 2012-09-18 05:01:56,121 StorageService.java (line 423) Cassandra 
version: 1.1.5


 running nodetool cleanup throws an exception
 

 Key: CASSANDRA-4683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4683
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.5
Reporter: Bartłomiej Romański

 We've just upgraded to 1.1.5 from 1.1.2. 
 After that, running nodetool cleanup, we got the following error:
 Exception in thread "main" java.lang.AssertionError
 at 
 org.apache.cassandra.db.SystemTable.getCurrentLocalNodeId(SystemTable.java:462)
 at 
 org.apache.cassandra.utils.NodeId$LocalNodeIdHistory.<init>(NodeId.java:195)
 at org.apache.cassandra.utils.NodeId$LocalIds.<clinit>(NodeId.java:43)
 at org.apache.cassandra.utils.NodeId.localIds(NodeId.java:50)
 at org.apache.cassandra.utils.NodeId.getLocalId(NodeId.java:55)
 at 
 org.apache.cassandra.utils.NodeId$OneShotRenewer.<init>(NodeId.java:175)
 at 
 org.apache.cassandra.service.StorageService.forceTableCleanup(StorageService.java:1769)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1447)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:89)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1292)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1380)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:812)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 Any ideas?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-09-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-4049:
--

Attachment: pluggable_custom_components-1.1.5-2.patch

Next version of the patch:
- addCustomComponents marked as public API
- simplified discoverComponentsFor loop
- fileLocking removed from SSTable
- added detection for concurrent access or lightweight file locking (just 
replace lockOrFail calls with lock)
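
A sketch of the lockOrFail/lock pair mentioned above, using java.nio file 
locks; the helper names come from the comment, the bodies are assumed:

{noformat}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

final class TocLocks
{
    // fail fast if another process already holds the lock
    // (this is the concurrent-access detection variant)
    static FileLock lockOrFail(FileChannel channel) throws IOException
    {
        FileLock lock = channel.tryLock();
        if (lock == null)
            throw new IOException("TOC already locked by another process");
        return lock;
    }

    // blocking variant: "just replace lockOrFail calls with lock"
    static FileLock lock(FileChannel channel) throws IOException
    {
        return channel.lock();
    }
}
{noformat}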

 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.6

 Attachments: pluggable_custom_components-1.1.5-2.patch, 
 pluggable_custom_components-1.1.5.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4049) Add generic way of adding SSTable components required by custom compaction strategy

2012-09-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13462557#comment-13462557
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-4049 at 9/25/12 7:33 PM:


Next version of the patch:
- addCustomComponents marked as public API
- simplified discoverComponentsFor loop
- fileLocking removed from SSTable
- added detection for concurrent access or lightweight file locking (just 
replace lockOrFail calls with lock if you want locking back)

  was (Author: pkolaczk):
Next version of the patch:
- addCustomComponents marked as public API
- simplified discoverComponentsFor loop
- fileLocking removed from SSTable
- added detection for concurrent access or lightweight file locking (just 
replace lockOrFail calls with lock)
  
 Add generic way of adding SSTable components required by custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.6

 Attachments: pluggable_custom_components-1.1.5-2.patch, 
 pluggable_custom_components-1.1.5.patch


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

