[jira] [Resolved] (CASSANDRA-4093) schema_* CFs do not respect column comparator which leads to CLI commands failure.

2012-04-09 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-4093.
-

Resolution: Fixed
  Reviewer: jbellis  (was: xedin)

Backported CASSANDRA-4037 and committed v2 (with dc_local_rr back at 37). 
Thanks.

 schema_* CFs do not respect column comparator which leads to CLI commands 
 failure.
 --

 Key: CASSANDRA-4093
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4093
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.0
Reporter: Dave Brosius
Assignee: Sylvain Lebresne
 Fix For: 1.1.0

 Attachments: 4093.txt, 4093_v2.txt, CASSANDRA-4093-CD-changes.patch


 The ColumnDefinition.{ascii, utf8, bool, ...} static methods used to initialize 
 the column_metadata of the schema_* CFs do not respect the CF comparator and use 
 ByteBufferUtil.bytes(...) for column names, which creates problems in the CLI and 
 probably in other places.
 The CompositeType validator throws an exception on the first column:
 String columnName = columnNameValidator.getString(columnDef.name);
 because the composite type length header appears to be wrong (25455):
 AbstractCompositeType.getWithShortLength
 java.lang.IllegalArgumentException
   at java.nio.Buffer.limit(Buffer.java:247)
   at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:50)
   at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:59)
   at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getString(AbstractCompositeType.java:139)
   at 
 org.apache.cassandra.cli.CliClient.describeColumnFamily(CliClient.java:2046)
   at 
 org.apache.cassandra.cli.CliClient.describeKeySpace(CliClient.java:1969)
   at 
 org.apache.cassandra.cli.CliClient.executeShowKeySpaces(CliClient.java:1574)
 (seen in trunk)
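 For illustration, a short sketch (not the Cassandra code) of why a raw UTF-8 column 
 name breaks composite decoding: CompositeType expects each component to be prefixed 
 by a two-byte length, so the first two bytes of a plain name get misread as a huge 
 length. Using "comment" purely as a hypothetical example name, the bytes of "co" 
 read as a big-endian short give exactly 25455, which is suggestive of the value 
 above, though the ticket does not say which column name triggered it.
 {code}
 // Illustrative only: read the first two bytes of a raw UTF-8 name the way a
 // composite-aware reader would, i.e. as a big-endian unsigned short length.
 import java.nio.ByteBuffer;
 import java.nio.charset.StandardCharsets;

 public class CompositeHeaderSketch
 {
     public static void main(String[] args)
     {
         ByteBuffer rawName = ByteBuffer.wrap("comment".getBytes(StandardCharsets.UTF_8));
         int bogusLength = rawName.getShort(0) & 0xFFFF; // 'c','o' -> 0x636F
         System.out.println("misread component length = " + bogusLength); // 25455
         // Asking the buffer for that many more bytes then fails with the
         // IllegalArgumentException from Buffer.limit seen in the stack trace above.
     }
 }
 {code}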

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4117) show keyspaces command in cli causes error and does not show keyspaces other than system

2012-04-05 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-4117.
-

Resolution: Duplicate

This is a duplicate of CASSANDRA-4093

 show keyspaces command in cli causes error and does not show keyspaces other 
 than system
 

 Key: CASSANDRA-4117
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4117
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Manoj Kanta Mainali
Priority: Minor

 I have been working with the 1.1.0-beta2 version and I am not able to see the 
 created keyspaces via show keyspaces in the CLI.
 After starting cassandra-cli, I did the following:
 1. create keyspace foo;
 2. use foo;
 3. create column family foo;
 4. show keyspaces;
 Only the result for the system keyspace is shown.
 I added some debug statements to the code and I can see that in 
 executeShowKeySpaces in CliClient the size of the keySpaces list retrieved is equal 
 to the number of keyspaces in the cluster, i.e. in my case 2; however, only null is 
 displayed after the system keyspace is printed.
 I added a try/catch around the for loop in the method, printed the exception 
 messages in the CLI, and got the following.
 When printing the exception e itself : 
 java.lang.IllegalArgumentException
 When printing the e.getStackTrace()[0] and e.getStackTrace()[1]:
 java.nio.Buffer.limit(Buffer.java:249)
 org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:51)
 I would love to post the whole stack trace, but at the moment I am not sure 
 how I can do that through sessionState.out.println
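 In case it helps, a throwable can be dumped in full to any PrintStream; assuming 
 sessionState.out is a java.io.PrintStream (an assumption based on the description, 
 not verified here), something like this in the catch block prints the whole trace:
 {code}
 // Sketch: print a full stack trace to a PrintStream instead of element by element.
 import java.io.PrintStream;
 import java.io.PrintWriter;
 import java.io.StringWriter;

 public class TraceDumpSketch
 {
     static void dump(Throwable e, PrintStream out)
     {
         e.printStackTrace(out); // whole trace, including the cause chain

         // or, to capture it as a String first:
         StringWriter sw = new StringWriter();
         e.printStackTrace(new PrintWriter(sw, true));
         out.println(sw);
     }
 }
 {code}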

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3676) Add snaptree dependency to maven central and update pom

2012-04-03 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3676.
-

Resolution: Fixed
  Reviewer: slebresne

+1, committed, thanks

 Add snaptree dependency to maven central and update pom
 ---

 Key: CASSANDRA-3676
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3676
 Project: Cassandra
  Issue Type: Sub-task
Reporter: T Jake Luciani
Assignee: Stephen Connolly
 Fix For: 1.1.0


 The snaptree dependency needs to be added to Maven Central before we can release 1.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3732) Update POM generation after migration to git

2012-04-03 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3732.
-

   Resolution: Fixed
Fix Version/s: 1.1.0
   1.0.10
 Reviewer: slebresne

Ok +1 then, committed, thanks


 Update POM generation after migration to git
 

 Key: CASSANDRA-3732
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3732
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Sylvain Lebresne
Assignee: Stephen Connolly
Priority: Minor
 Fix For: 1.0.10, 1.1.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4076) Remove get{Indexed,Sliced}ReadBufferSizeInKB methods

2012-03-30 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-4076.
-

Resolution: Fixed
  Reviewer: jbellis
  Assignee: Sylvain Lebresne

Committed, thanks

 Remove get{Indexed,Sliced}ReadBufferSizeInKB methods
 

 Key: CASSANDRA-4076
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4076
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.0
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Fix For: 1.1.0

 Attachments: 4076.txt


 Since CASSANDRA-3171, the 
 DatabaseDescriptor.get{Indexed,Sliced}ReadBufferSizeInKB methods are dead 
 code (they are passed as the bufferSize argument to SSTableReader.getFileDataInput, 
 but that method ignores the argument). In particular, this means we can 
 remove the configuration option sliced_buffer_size_in_kb (we shouldn't do 
 that in 1.0, which is why I targeted 1.1.0, though we could add a message 
 in 1.0 saying that this option is ignored).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3941) allow setting/getting CfDef.caching and CfDef.bloom_filter_fp_chance via CQL

2012-03-27 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3941.
-

Resolution: Duplicate

Dupe of CASSANDRA-4042 (we'll include the BF fp change there)

 allow setting/getting CfDef.caching and CfDef.bloom_filter_fp_chance via CQL
 

 Key: CASSANDRA-3941
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3941
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.7
Reporter: paul cannon
Priority: Minor
  Labels: cql, cql3

 See CASSANDRA-3667 (caching control, added in 1.0.7) and CASSANDRA-3497 
 (bloom_filter_fp_chance, added in the 1.1 betas). These options probably 
 ought to be exposed through CQL.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4070) CFS.setMaxCompactionThreshold doesn't allow 0 unless min is also 0

2012-03-22 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-4070.
-

Resolution: Fixed
  Reviewer: jbellis

Committed, thanks

I agree that we probably should have a better way to disable compaction. 
Actually, given that leveled compaction pretty much ignores the max and min 
thresholds, I think we should consider moving those to the compaction options.

 CFS.setMaxCompactionThreshold doesn't allow 0 unless min is also 0
 --

 Key: CASSANDRA-4070
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4070
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Fix For: 1.0.9

 Attachments: 4070.patch


 Thrift allows setting the max compaction threshold to 0 to disable compaction. 
 However, CFS.setMaxCompactionThreshold throws an exception when min > max, even if 
 max is 0.
 Note that even if someone sets 0 for both the min and max thresholds, we 
 can still have a problem, because SizeTieredCompaction calls 
 CFS.setMaxCompactionThreshold before calling CFS.setMinCompactionThreshold 
 and thus triggers the RuntimeException when it shouldn't.
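 A minimal sketch of the validation the setters need, treating 0 as "compaction 
 disabled" on either side; the method names follow the ticket but the code is 
 simplified, not the actual ColumnFamilyStore implementation:
 {code}
 // Simplified sketch: only enforce min <= max when both thresholds are non-zero,
 // so that 0 ("disable compaction") is always accepted regardless of call order.
 public class CompactionThresholdsSketch
 {
     private volatile int minCompactionThreshold = 4;
     private volatile int maxCompactionThreshold = 32;

     public void setMaxCompactionThreshold(int max)
     {
         if (max != 0 && minCompactionThreshold != 0 && max < minCompactionThreshold)
             throw new RuntimeException("max compaction threshold cannot be less than min");
         maxCompactionThreshold = max;
     }

     public void setMinCompactionThreshold(int min)
     {
         if (min != 0 && maxCompactionThreshold != 0 && min > maxCompactionThreshold)
             throw new RuntimeException("min compaction threshold cannot be greater than max");
         minCompactionThreshold = min;
     }
 }
 {code}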

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4037) Move CfDef and KsDef validation to CFMetaData and KSMetaData

2012-03-22 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-4037.
-

Resolution: Fixed
  Reviewer: jbellis

Committed, thanks

 Move CfDef and KsDef validation to CFMetaData and KSMetaData
 

 Key: CASSANDRA-4037
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4037
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.1.1


 Following CASSANDRA-3792, CQL doesn't need to use the thrift CfDef and KsDef. 
 However, those are still used in order to reuse the ThriftValidation validation 
 methods. We should move that validation to CFMetaData and KSMetaData and remove 
 the use of those thrift structures by CQL.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4017) Unify migrations

2012-03-18 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-4017.
-

Resolution: Fixed

Committed, thanks

 Unify migrations
 

 Key: CASSANDRA-4017
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4017
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.1.0


 Now that we can send a schema as a RowMutation, there's no need to keep 
 separate add/drop/update migration classes around.  Let's just send the 
 schema to our counterparts and let them figure out what changed.  Currently 
 we have "figure out what changed" code both to generate migrations on the 
 sender and to apply them on the target, which adds complexity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3792) add type information to new schema_ columnfamilies

2012-03-13 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3792.
-

Resolution: Fixed

Committed, thanks

 add type information to new schema_ columnfamilies
 --

 Key: CASSANDRA-3792
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3792
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.1.0


 Should also fix the quotes that the current Thrift-based serialization embeds 
 in string schema data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3999) Column families for most recent data, (a.k.a. size-safe wide rows)

2012-03-05 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3999.
-

Resolution: Duplicate

If I understand this correctly, that's a duplicate of CASSANDRA-3929.

 Column families for most recent data, (a.k.a. size-safe wide rows)
 

 Key: CASSANDRA-3999
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3999
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Ahmet AKYOL

 The wide-row design is very handy (for time series data), but on the other hand 
 we have to keep each row's size at an acceptable amount. Then we need 
 buckets, right? Monthly, daily or even hourly buckets... The problem with 
 the bucket approach is, as always, the distribution of data across rows.
 So why not tell Cassandra we want a column family that behaves like an LRU cache, 
 but on disk? If we start the design from the queries, we usually end up with 
 most-recent-data queries. This size-safe wide-row approach could be very useful 
 in many use cases.
 Here are some example hypothetical column family storage parameters:
 max_column_number_hint : 1000 // meaning: try to keep around 1000 columns. 
 Since it's a hint, we (users) are OK with tombstones or an 800-1200 range
 or
 max_row_size_hint : 1MB
 I don't know the Cassandra internals, but C* already has background jobs (for 
 compaction, deletion and TTL) and columns already have timestamps. So it makes 
 sense both from the user's point of view and from C*'s.
 P.S.: Sorry for my poor English; it's my very first issue :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3862) RowCache misses Updates

2012-02-28 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3862.
-

Resolution: Fixed

Committed, thanks

 RowCache misses Updates
 ---

 Key: CASSANDRA-3862
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3862
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6
Reporter: Daniel Doubleday
Assignee: Sylvain Lebresne
 Fix For: 1.1.0

 Attachments: 3862-7.txt, 3862-cleanup.txt, 3862-v2.patch, 
 3862-v4.patch, 3862-v5.txt, 3862-v6.txt, 3862-v8.txt, 3862.patch, 
 3862_v3.patch, 3862_v8_addon.txt, include_memtables_in_rowcache_read.patch


 While performing stress tests to find any race problems for CASSANDRA-2864, I 
 guess I (re-)found one for the standard on-heap row cache.
 During my stress test I have lots of threads running, some of them only 
 reading and others writing and re-reading the value.
 This seems to happen:
 - Reader tries to read row A for the first time, doing a getTopLevelColumns
 - Row A, which is not in the cache yet, is updated by Writer. The row is not 
 eagerly read during the write (because we want fast writes), so the writer cannot 
 perform a cache update
 - Reader puts the row in the cache, which is now missing the update
 I already asked about this some time ago on the mailing list but unfortunately 
 didn't dig further after I got no answer, since I assumed that I had just missed 
 something. In a way I still do, but I haven't found any locking mechanism that 
 makes sure this cannot happen.
 The problem can be reproduced with every run of my stress test. When I 
 restart the server the expected column is there. It's just missing from the 
 cache.
 To test, I have created a patch that merges memtables with the row cache. With 
 the patch the problem is gone.
 I can also reproduce this in 0.8. I haven't checked 1.1, but I haven't found any 
 relevant change there either, so I assume the same applies there.
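 A minimal sketch of the race described above, using a ConcurrentHashMap as a 
 stand-in for the row cache and a hypothetical readFromStorage() helper in place of 
 getTopLevelColumns(); it is not the actual ColumnFamilyStore code, just an 
 illustration of how a stale copy can be published to the cache after a concurrent 
 write:
 {code}
 // Illustrative only: the read path materializes the row, a concurrent write
 // lands in between, and the reader then publishes the stale copy to the cache.
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;

 public class RowCacheRaceSketch
 {
     private final ConcurrentMap<String, String> rowCache = new ConcurrentHashMap<String, String>();

     // Hypothetical stand-in for reading the row from memtables/sstables.
     private String readFromStorage(String key)
     {
         return "row state at read time";
     }

     public String read(String key)
     {
         String cached = rowCache.get(key);
         if (cached != null)
             return cached;
         String row = readFromStorage(key);  // (1) reader materializes row A
         // (2) a writer may update row A here; the write path does not read the
         //     row back, so it cannot merge its update into `row` or the cache
         rowCache.putIfAbsent(key, row);     // (3) the stale copy is published
         return rowCache.get(key);
     }
 }
 {code}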

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3947) Issues with describe keyword

2012-02-23 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3947.
-

Resolution: Not A Problem

 Issues with describe keyword
 

 Key: CASSANDRA-3947
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3947
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
 Environment: Ubuntu 11.10
Reporter: Rishabh Agrawal
  Labels: newbie

 I am a newbie to Cassandra, so please bear with my beginner questions.
 I am running Cassandra version 1.0.7 on Ubuntu. I found the following behavior 
 with describe:
  
 If there is a keyspace named 'x', then the describe x command gives the desired 
 results. But if there is also a column family named 'x', then describe will 
 not be able to catch it. However, if there is only a column family 'x' and no 
 keyspace with the same name, then the describe x command gives the desired results, 
 i.e. it is able to capture and display info about the 'x' column family.
  
 Kindly help me with this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3903) Intermittent unexpected errors: possibly race condition around CQL parser?

2012-02-17 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3903.
-

   Resolution: Fixed
Fix Version/s: 1.1.0

Committed, thanks

 Intermittent unexpected errors: possibly race condition around CQL parser?
 --

 Key: CASSANDRA-3903
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3903
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Mac OS X 10.7 with Sun/Oracle Java 1.6.0_29
 Debian GNU/Linux 6.0.3 (squeeze) with Sun/Oracle Java 1.6.0_26
 several recent commits on cassandra-1.1 branch. at least:
 0183dc0b36e684082832de43a21b3dc0a9716d48, 
 3eefbac133c838db46faa6a91ba1f114192557ae, 
 9a842c7b317e6f1e6e156ccb531e34bb769c979f
 Running cassandra under ccm with one node
Reporter: paul cannon
Assignee: Sylvain Lebresne
 Fix For: 1.1.0

 Attachments: 0001-Fix-CFS.all-thread-safety.patch, 
 0002-Fix-fixCFMaxId.patch


 When running multiple simultaneous instances of the test_cql.py piece of the 
 python-cql test suite, I can reliably reproduce intermittent and 
 unpredictable errors in the tests.
 The failures often occur at the point of keyspace creation during test setup, 
 with a CQL statement of the form:
 {code}
 CREATE KEYSPACE 'asnvzpot' WITH strategy_class = SimpleStrategy
 AND strategy_options:replication_factor = 1
 
 {code}
 An InvalidRequestException is returned to the cql driver, which re-raises it 
 as a cql.ProgrammingError. The message:
 {code}
 ProgrammingError: Bad Request: line 2:24 no viable alternative at input 
 'asnvzpot'
 {code}
 In a few cases, Cassandra threw an ArrayIndexOutOfBoundsException and this 
 traceback, closing the thrift connection:
 {code}
 ERROR [Thrift:244] 2012-02-10 15:51:46,815 CustomTThreadPoolServer.java (line 
 205) Error occurred during processing of message.
 java.lang.ArrayIndexOutOfBoundsException: 7
 at 
 org.apache.cassandra.db.ColumnFamilyStore.all(ColumnFamilyStore.java:1520)
 at 
 org.apache.cassandra.thrift.ThriftValidation.validateCfDef(ThriftValidation.java:634)
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:744)
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:898)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1245)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3458)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3446)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:680)
 {code}
 Sometimes I see an ArrayIndexOutOfBoundsException with no traceback:
 {code}
 ERROR [Thrift:858] 2012-02-13 12:04:01,537 CustomTThreadPoolServer.java (line 
 205) Error occurred during processing of message.
 java.lang.ArrayIndexOutOfBoundsException
 {code}
 Sometimes I get this:
 {code}
 ERROR [MigrationStage:1] 2012-02-13 12:04:46,077 AbstractCassandraDaemon.java 
 (line 134) Fatal exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IllegalArgumentException: value already present: 1558
 at 
 com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
 at 
 com.google.common.collect.AbstractBiMap.putInBothMaps(AbstractBiMap.java:111)
 at com.google.common.collect.AbstractBiMap.put(AbstractBiMap.java:96)
 at com.google.common.collect.HashBiMap.put(HashBiMap.java:84)
 at org.apache.cassandra.config.Schema.load(Schema.java:392)
 at 
 org.apache.cassandra.db.migration.MigrationHelper.addColumnFamily(MigrationHelper.java:284)
 at 
 org.apache.cassandra.db.migration.MigrationHelper.addColumnFamily(MigrationHelper.java:209)
 at 
 org.apache.cassandra.db.migration.AddColumnFamily.applyImpl(AddColumnFamily.java:49)
 at 
 org.apache.cassandra.db.migration.Migration.apply(Migration.java:66)
 at 
 org.apache.cassandra.cql.QueryProcessor$1.call(QueryProcessor.java:334)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 

[jira] [Resolved] (CASSANDRA-3856) consider using persistent data structures for some things

2012-02-17 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3856.
-

Resolution: Not A Problem

I agree with Jonathan: if persistent collections make sense for a ticket, let's 
use them there and/or have the discussion of how pertinent they are for that 
use case there.

 consider using persistent data structures for some things
 -

 Key: CASSANDRA-3856
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3856
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Peter Schuller
Priority: Minor

 When thinking about CASSANDRA-3831, CASSANDRA-3833, CASSANDRA-3417 (and 
 probably others) I keep thinking that I really want persistent data 
 structures ala Clojure to enable giving out stable copies of data without 
 copying, to avoid complicating the code significantly to achieve a 
 combination of reasonable computational complexity, performance, and 
 thread-safety. However, I am not about to propose that we introduce Clojure 
 into the code base.
 It turns out other people have had similar desires and wanted to see Java 
 versions of the Clojure data structures (https://github.com/krukow/clj-ds and 
 http://thesoftwarelife.blogspot.com/2009/10/java-immutable-persistent-map.html), 
 and there is another persistent data structure project too 
 (http://code.google.com/p/pcollections/).
 The latter in particular looks interesting (I have not tested it).
 I think it's worth considering adopting the use of these for things like the 
 token meta data. In general, I'd say it may be worth considering for things 
 that are not performance critical in the sense of constant factor 
 performance, but where you want thread-safety and reasonable computational 
 complexity and an easier sense of what's safe from a concurrency perspective. 
 Currently, we keep having to either copy data to punt a concurrency 
 concern, at the cost of computational complexity, or else add locking at the 
 cost of performance and complexity, or switch to concurrent data structures 
 at the cost of performance and another type of complexity.
 Does this seem completely out of the blue to people or do people agree it's 
 worth exploring?
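 For illustration, a tiny hand-rolled persistent (immutable, structurally shared) 
 list in plain Java; this is only a sketch of the concept, not a proposal for a 
 specific library or for how token metadata should actually be stored:
 {code}
 // A persistent singly-linked list: every "mutation" returns a new list that
 // shares its tail with the old one, so readers keep a stable view without
 // locking or defensive copying.
 public final class PList<T>
 {
     public final T head;
     public final PList<T> tail; // null means the empty list

     private PList(T head, PList<T> tail)
     {
         this.head = head;
         this.tail = tail;
     }

     public static <T> PList<T> cons(T value, PList<T> rest)
     {
         return new PList<T>(value, rest); // O(1); the old list is untouched
     }

     public static void main(String[] args)
     {
         PList<String> v1 = cons("a", null);
         PList<String> v2 = cons("b", v1);               // v1 is still valid and unchanged
         System.out.println(v2.head + " -> " + v1.head); // prints: b -> a
     }
 }
 {code}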

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3924) So many 0 size tmp sstables which is not closed in Cassandra data directory

2012-02-16 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3924.
-

Resolution: Duplicate

This was fixed by CASSANDRA-3616 in the 1.0.7 release.

 So many 0 size tmp sstables which is not closed in Cassandra data directory
 ---

 Key: CASSANDRA-3924
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3924
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.6
Reporter: MaHaiyang

 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:17 test-tmp-hc-500-Index.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:18 test-tmp-hc-502-Data.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:18 test-tmp-hc-502-Index.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:25 test-tmp-hc-508-Data.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:25 test-tmp-hc-508-Index.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:32 test-tmp-hc-514-Data.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:32 test-tmp-hc-514-Index.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:41 test-tmp-hc-520-Data.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:41 test-tmp-hc-520-Index.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:48 test-tmp-hc-526-Data.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:48 test-tmp-hc-526-Index.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:49 test-tmp-hc-528-Data.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:49 test-tmp-hc-528-Index.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:53 test-tmp-hc-532-Data.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:53 test-tmp-hc-532-Index.db
 -rw-r--r-- 1 mhy  mhygrp   0 Feb 15 10:57 test-tmp-hc-537-Data.db
 There are more than 7000 tmp sstables like this in the Cassandra data directory. 
 I couldn't find any exception or error in the Cassandra log.
 According to the log, sstables with the same ids (without the tmp marker) were 
 created by memtable flushes or compactions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3367) data created before index is not returned in where query

2012-02-16 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3367.
-

Resolution: Cannot Reproduce

Closing as 'Cannot Reproduce' as well; I cannot reproduce it and it doesn't seem 
anyone else has hit it.

 data created before index is not returned in where query
 

 Key: CASSANDRA-3367
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3367
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: Cathy Daw
Assignee: Sylvain Lebresne

 *CQL version of bug*
 {code}
 // CREATE KS AND CF  
 CREATE KEYSPACE ks1 with 
   strategy_class =  
 'org.apache.cassandra.locator.SimpleStrategy' 
   and strategy_options:replication_factor=1;
 use ks1;
 DROP COLUMNFAMILY users;
 CREATE COLUMNFAMILY users (
   KEY varchar PRIMARY KEY, password varchar, gender varchar,
   session_token varchar, state varchar, birth_year bigint);
 // INSERT DATA
 INSERT INTO users (KEY, password, gender, state, birth_year) VALUES ('user1', 
 'ch@ngem3a', 'f', 'TX', '1968');
 INSERT INTO users (KEY, password, gender, state, birth_year) VALUES ('user2', 
 'ch@ngem3b', 'm', 'CA', '1971');
 // CREATE INDEX
 CREATE INDEX gender_key ON users (gender);
 CREATE INDEX state_key ON users (state);
 CREATE INDEX birth_year_key ON users (birth_year);
 // INSERT DATA
 INSERT INTO users (KEY, password, gender, state, birth_year) VALUES ('user3', 
 'ch@ngem3c', 'f', 'FL', '1978');
 INSERT INTO users (KEY, password, gender, state, birth_year) VALUES ('user4', 
 'ch@ngem3d', 'm', 'TX', '1974'); 
 // VERIFY DATA
 cqlsh> select * from users;
KEY | birth_year | gender |  password | state |
  user1 |   1968 |  f | ch@ngem3a |TX |
  user4 |   1974 |  m | ch@ngem3d |TX |
  user3 |   1978 |  f | ch@ngem3c |FL |
  user2 |   1971 |  m | ch@ngem3b |CA |
 //BUG : missing row from user1, created before index was added
 cqlsh> select * from users where state='TX';
KEY | birth_year | gender |  password | state |
  user4 |   1974 |  m | ch@ngem3d |TX |
 //BUG : missing row from user2, created before index was added
 cqlsh> select * from users where state='CA';
 {code}
 *CLI version of bug*
 {code}
 // CREATE KS AND CF  
 CREATE keyspace ks1 with
   placement_strategy = 
 'org.apache.cassandra.locator.SimpleStrategy'
   and strategy_options = [{replication_factor:1}];
 use ks1;
 drop column family users;
 create column family users
   with comparator = UTF8Type
   and key_validation_class = UTF8Type
   and default_validation_class = UTF8Type
   and column_metadata = [{column_name: password, 
 validation_class:UTF8Type}
   {column_name: gender, validation_class: UTF8Type},
   {column_name: session_token, validation_class: UTF8Type},
   {column_name: state, validation_class: UTF8Type},
   {column_name: birth_year, validation_class: LongType}];
 // INSERT DATA
 set users['user1']['password']='ch@ngem3a';
 set users['user1']['gender']='f';
 set users['user1']['state']='TX';
 set users['user1']['birth_year']='1968';
 set users['user2']['password']='ch@ngem3b';
 set users['user2']['gender']='m';
 set users['user2']['state']='CA';
 set users['user2']['birth_year']='1971';
 // ADD INDEX  
 update column family users
   with comparator = UTF8Type
   and key_validation_class = UTF8Type
   and default_validation_class = UTF8Type
   and column_metadata = [{column_name: password, 
 validation_class:UTF8Type}
   {column_name: gender, validation_class: UTF8Type, index_type: KEYS},
   {column_name: session_token, validation_class: UTF8Type},
   {column_name: state, validation_class: UTF8Type, index_type: KEYS},
   {column_name: birth_year, validation_class: LongType, index_type: 
 KEYS}];
 // INSERT DATA
 set users['user3']['password']='ch@ngem3b';
 set users['user3']['gender']='f';
 set users['user3']['state']='FL';
 set users['user3']['birth_year']='1978';
 set users['user4']['password']='ch@ngem3c';
 set users['user4']['gender']='m';
 set users['user4']['state']='TX';
 set users['user4']['birth_year']='1974';
 // VERIFY DATA
 [default@cqldb] list users;
 Using default limit of 100
 ---
 RowKey: user1
 => (column=birth_year, value=1968, timestamp=1318714655921000)
 => (column=gender, value=f, timestamp=1318714655917000)
 => (column=password, value=ch@ngem3a, timestamp=1318714655908000)
 => (column=state, value=TX, timestamp=1318714655919000)
 ---
 RowKey: user4
 => (column=birth_year, value=1974, timestamp=1318714671608000)
 => (column=gender, value=m, timestamp=1318714670666000)
 => (column=password, value=ch@ngem3c, timestamp=1318714670665000)
 => (column=state, value=TX, 

[jira] [Resolved] (CASSANDRA-3864) Unit tests failures in 1.1

2012-02-15 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3864.
-

   Resolution: Fixed
Fix Version/s: 1.1.0

Ok, I think the problems of that ticket are now all solved, closing

 Unit tests failures in 1.1
 --

 Key: CASSANDRA-3864
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3864
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Brandon Williams
 Fix For: 1.1.0

 Attachments: 0001-Fix-DefsTest.patch, 
 0002-Fix-SSTableImportTest.patch, 0003-Fix-CompositeTypeTest.patch


 On the current 1.1 branch I get the following errors:
 # SSTableImportTest:
 {noformat}
 [junit] Testcase: 
 testImportSimpleCf(org.apache.cassandra.tools.SSTableImportTest):   Caused an 
 ERROR
 [junit] java.lang.Integer cannot be cast to java.lang.Long
 [junit] java.lang.ClassCastException: java.lang.Integer cannot be cast to 
 java.lang.Long
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport$JsonColumn.<init>(SSTableImport.java:132)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport.addColumnsToCF(SSTableImport.java:191)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport.addToStandardCF(SSTableImport.java:174)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport.importUnsorted(SSTableImport.java:290)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:255)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImportTest.testImportSimpleCf(SSTableImportTest.java:60)
 {noformat}
 # CompositeTypeTest:
 {noformat}
 [junit] Testcase: 
 testCompatibility(org.apache.cassandra.db.marshal.CompositeTypeTest):   
 Caused an ERROR
 [junit] Invalid comparator class 
 org.apache.cassandra.db.marshal.CompositeType: must define a public static 
 instance field or a public static method getInstance(TypeParser).
 [junit] org.apache.cassandra.config.ConfigurationException: Invalid 
 comparator class org.apache.cassandra.db.marshal.CompositeType: must define a 
 public static instance field or a public static method 
 getInstance(TypeParser).
 [junit]   at 
 org.apache.cassandra.db.marshal.TypeParser.getRawAbstractType(TypeParser.java:294)
 [junit]   at 
 org.apache.cassandra.db.marshal.TypeParser.getAbstractType(TypeParser.java:268)
 [junit]   at 
 org.apache.cassandra.db.marshal.TypeParser.parse(TypeParser.java:81)
 [junit]   at 
 org.apache.cassandra.db.marshal.CompositeTypeTest.testCompatibility(CompositeTypeTest.java:216)
 {noformat}
 # DefsTest:
 {noformat}
 [junit] Testcase: 
 testUpdateColumnFamilyNoIndexes(org.apache.cassandra.db.DefsTest):  FAILED
 [junit] Should have blown up when you used a different comparator.
 [junit] junit.framework.AssertionFailedError: Should have blown up when you 
 used a different comparator.
 [junit]   at 
 org.apache.cassandra.db.DefsTest.testUpdateColumnFamilyNoIndexes(DefsTest.java:539)
 {noformat}
 # CompactSerializerTest:
 {noformat}
 [junit] null
 [junit] java.lang.ExceptionInInitializerError
 [junit]   at 
 org.apache.cassandra.db.SystemTable.getCurrentLocalNodeId(SystemTable.java:437)
 [junit]   at 
 org.apache.cassandra.utils.NodeId$LocalNodeIdHistory.<init>(NodeId.java:195)
 [junit]   at 
 org.apache.cassandra.utils.NodeId$LocalIds.<clinit>(NodeId.java:43)
 [junit]   at java.lang.Class.forName0(Native Method)
 [junit]   at java.lang.Class.forName(Class.java:169)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:96)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest.scanClasspath(CompactSerializerTest.java:129)
 [junit] Caused by: java.lang.NullPointerException
 [junit]   at 
 org.apache.cassandra.config.DatabaseDescriptor.createAllDirectories(DatabaseDescriptor.java:574)
 [junit]   at org.apache.cassandra.db.Table.<clinit>(Table.java:82)
 {noformat}
 There are also some errors in RemoveSubColumnTest and RemoveSubColumnTest, but 
 I'll open a separate ticket for those as they may require a bit more discussion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 

[jira] [Resolved] (CASSANDRA-3748) Range ghosts don't disappear as expected and accumulate

2012-02-10 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3748.
-

   Resolution: Not A Problem
Fix Version/s: (was: 1.0.8)

Alright, closing. We can reopen anyway if this reproduces on 1.0.7.

 Range ghosts don't disappear as expected and accumulate
 ---

 Key: CASSANDRA-3748
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3748
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.3
 Environment: Cassandra on Debian 
Reporter: Dominic Williams
  Labels: compaction, ghost-row, range, remove
   Original Estimate: 6h
  Remaining Estimate: 6h

 I have a problem where range ghosts are accumulating and cannot be removed by 
 reducing GCSeconds and compacting.
 In our system, we have some cfs that represent markets where each row 
 represents an item. Once an item is sold, it is removed from the market by 
 passing its key to remove().
 The problem, which was hidden for some time by caching, is appearing on read. 
 Every few seconds our system collates a random sample from each cf/market by 
 choosing a random starting point:
 String startKey = RNG.nextUUID();
 and then loading a page range of rows, specifying the key range as:
 KeyRange keyRange = new KeyRange(pageSize);
 keyRange.setStart_key(startKey);
 keyRange.setEnd_key(maxKey);
 The returned rows are iterated over, and ghosts ignored. If insufficient rows 
 are obtained, the process is repeated using the key of the last row as the 
 starting key (or wrapping if necessary etc).
 When performance was lagging, we did a test and found that constructing a 
 random sample of 40 items (rows) involved iterating over hundreds of 
 thousands of ghost rows. 
 Our first attempt to deal with this was to halve our GCGraceSeconds and then 
 perform major compactions. However, this had no effect on the number of ghost 
 rows being returned. Furthermore, on examination it seems clear that the 
 number of ghost rows being created within the GCGraceSeconds window must be smaller 
 than the number being returned. Thus it looks like a bug.
 We are using Cassandra 1.0.3 with Sylvain's patch from CASSANDRA-3510.
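 For illustration only, a minimal sketch of the sampling loop described above; 
 fetchRowPage() and isGhost() are hypothetical helpers standing in for the Thrift 
 get_range_slices call and the "row came back with no live columns" check, not 
 actual client code. It also shows why a long run of ghost rows forces the loop to 
 walk far more rows than the sample size:
 {code}
 // Illustrative sketch of the page-and-skip-ghosts sampling pattern.
 import java.util.ArrayList;
 import java.util.List;
 import java.util.UUID;

 public class RandomSampleSketch
 {
     // Hypothetical abstractions over the Thrift client calls.
     interface Row { String key(); boolean isGhost(); }
     interface Market { List<Row> fetchRowPage(String startKey, int pageSize); }

     static List<Row> sample(Market market, int wanted, int pageSize)
     {
         List<Row> sample = new ArrayList<Row>();
         String startKey = UUID.randomUUID().toString(); // random starting point
         while (sample.size() < wanted)
         {
             List<Row> page = market.fetchRowPage(startKey, pageSize);
             if (page.isEmpty())
                 break; // end of range; the real client wraps around instead
             for (Row row : page)
                 if (!row.isGhost() && sample.size() < wanted)
                     sample.add(row); // ghost (tombstoned) rows are skipped
             startKey = page.get(page.size() - 1).key(); // continue from the last key
         }
         return sample;
     }
 }
 {code}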

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2475) Prepared statements

2012-02-09 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-2475.
-

Resolution: Fixed

 Prepared statements
 ---

 Key: CASSANDRA-2475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2475
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Affects Versions: 1.0.5
Reporter: Eric Evans
Assignee: Rick Shaw
Priority: Critical
  Labels: cql
 Fix For: 1.1

 Attachments: 2475-v1.patch, 2475-v2.patch, 2475-v3.1.patch, 
 2475-v3.2-Thrift.patch, v1-0001-CASSANDRA-2475-prepared-statement-patch.txt, 
 v1-0002-regenerated-thrift-java.txt, 
 v10-0001-CASSANDRA-2475-properly-report-number-of-markers-in-a-.txt, 
 v10-0002-index-bind-markers-using-parser.txt, 
 v10-0003-clean-up-Term-ctors.txt, 
 v2-0001-CASSANDRA-2475-rickshaw-2475-v3.1.patch.txt, 
 v2-0002-rickshaw-2475-v3.2-Thrift.patch-w-changes.txt, 
 v2-0003-eevans-increment-thrift-version-by-1-not-3.txt, 
 v2-0004-eevans-misc-cleanups.txt, 
 v2-0005-eevans-refactor-for-better-encapsulation-of-prepare.txt, 
 v2-0006-eevans-log-queries-at-TRACE.txt, 
 v2-0007-use-an-LRU-map-for-storage-of-prepared-statements.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3625) Do something about DynamicCompositeType

2012-02-08 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3625.
-

   Resolution: Fixed
Fix Version/s: 1.1
 Reviewer: edanuff
 Assignee: Sylvain Lebresne

 Do something about DynamicCompositeType
 ---

 Key: CASSANDRA-3625
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3625
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.1

 Attachments: 0001-allow-comparing-different-types.patch


 Currently, DynamicCompositeType is a super dangerous type. We cannot leave it 
 that way or people will get hurt.
 Let's recall that DynamicCompositeType allows composite column names without 
 any limitation on what each component type can be. It was added basically to 
 allow different rows of the same column family to each store a 
 different index. So for instance you would have:
 {noformat}
 index1: {
   bar:24 -> someval
   bar:42 -> someval
   foo:12 -> someval
   ...
 }
 index2: {
   0:uuid1:3.2 -> someval
   1:uuid2:2.2 -> someval
   ...
 }
 
 {noformat}
 where index1, index2, ... are rows.
 So each row has columns whose names have a similar structure (so they can be 
 compared), but between rows the structure can be different (we never compare 
 two columns from two different rows).
 But the problem is the following: what happens if, in the index1 row above, 
 you insert a column whose name is 0:uuid1? There is no really meaningful way 
 to compare bar:24 and 0:uuid1. The current implementation of 
 DynamicCompositeType, when confronted with this, says that it is a user error 
 and throws a MarshalException.
 The problem with that is that the exception is not thrown at insert time, and 
 it *cannot* be because of the dynamic nature of the comparator. But that 
 means that if you do insert the wrong column in the wrong row, you end up 
 *corrupting* an sstable.
 It is too dangerous a behavior. And it's probably made worse by the fact that 
 some people probably think that DynamicCompositeType should be superior to 
 CompositeType since, you know, it's dynamic.
 One solution to that problem could be to decide on some arbitrary (but 
 predictable) order between two incomparable components. For example we could 
 decide that IntType < LongType < StringType ...
 Note that even if we do that, I would suggest renaming 
 DynamicCompositeType to something that suggests that CompositeType is always 
 preferable to DynamicCompositeType unless you're really doing very advanced 
 stuff.
 Opinions?
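 A minimal sketch of the "predictable order between incomparable components" idea, 
 using the enum declaration order as the cross-type rank; the type names and the 
 chosen order are just the example from above, and this is not the actual 
 DynamicCompositeType code:
 {code}
 // Illustrative only: components of different types compare by a fixed rank
 // instead of throwing MarshalException; same-type components compare normally.
 public class ComponentOrderSketch
 {
     // Declaration order defines the arbitrary but predictable cross-type order.
     enum Kind { INT, LONG, STRING }

     @SuppressWarnings({"rawtypes", "unchecked"})
     static int compare(Kind typeA, Comparable a, Kind typeB, Comparable b)
     {
         if (typeA != typeB)
             return Integer.compare(typeA.ordinal(), typeB.ordinal());
         return a.compareTo(b); // same type: delegate to the type's own comparison
     }

     public static void main(String[] args)
     {
         System.out.println(compare(Kind.INT, 24, Kind.STRING, "bar")); // negative: INT < STRING
         System.out.println(compare(Kind.INT, 24, Kind.INT, 42));       // negative: 24 < 42
     }
 }
 {code}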

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3778) KEY IN (...) queries do not work

2012-02-07 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3778.
-

Resolution: Fixed

Just confirmed that CASSANDRA-3791 did fix the test above so re-closing this 
one.

 KEY IN (...) queries do not work
 

 Key: CASSANDRA-3778
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3778
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Affects Versions: 1.1
Reporter: Eric Evans
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 1.1


 {{...KEY IN (...)}} queries fail due to faulty validation.  A pull request 
 for cassandra-dtest was opened that demonstrates this: 
 https://github.com/riptano/cassandra-dtest/pull/2

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3850) get_indexed_slices losts index expressions

2012-02-04 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3850.
-

   Resolution: Fixed
Fix Version/s: 1.1
 Reviewer: slebresne

Committed, thanks.

(don't hesitate to attach the patch to the issue next time, it's slightly more 
convenient :))

 get_indexed_slices losts index expressions
 --

 Key: CASSANDRA-3850
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3850
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Andronov
  Labels: get, indexing, search
 Fix For: 1.1


 In trunk, 
 CassandraServer.get_indexed_slices(ColumnParent, IndexClause, 
 SlicePredicate, ConsistencyLevel)
 loses index_clause.expressions when constructing the 
 RangeSliceCommand, because it uses the wrong constructor.
 This makes the examples on http://wiki.apache.org/cassandra/CassandraCli produce 
 wrong output, as does any get involving a where check.
 Patch to fix this issue: http://pastebin.com/QQT0Tfpc
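 Purely for illustration, a sketch of the "wrong constructor overload" pattern; the 
 class and constructors below are hypothetical stand-ins, not the real 
 RangeSliceCommand API, and only show how picking an overload that omits the 
 expressions argument silently drops the filter:
 {code}
 // Hypothetical stand-in: one overload carries the index expressions, the other
 // silently drops them, so every row matches the (now empty) filter.
 import java.util.Collections;
 import java.util.List;

 public class RangeSliceCommandSketch
 {
     final List<String> rowFilter; // stand-in for index_clause.expressions

     RangeSliceCommandSketch()
     {
         this(Collections.<String>emptyList()); // convenience overload: no filter
     }

     RangeSliceCommandSketch(List<String> rowFilter)
     {
         this.rowFilter = rowFilter;
     }

     static RangeSliceCommandSketch fromIndexClause(List<String> expressions)
     {
         // The bug pattern is calling the no-argument overload here, which ignores
         // `expressions`; the fix is to pass them through explicitly, as below.
         return new RangeSliceCommandSketch(expressions);
     }
 }
 {code}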

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3824) [patch] add missing break in nodecmd's command dispatching for SETSTREAMTHROUGHPUT

2012-01-31 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3824.
-

   Resolution: Fixed
Fix Version/s: 1.1
 Reviewer: slebresne
 Assignee: Dave Brosius

Committed, thanks

 [patch] add missing break in nodecmd's command dispatching for 
 SETSTREAMTHROUGHPUT
 --

 Key: CASSANDRA-3824
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3824
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1

 Attachments: add_missing_break.diff


 The code falls through the SETSTREAMTHROUGHPUT case into the REBUILD case.
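 A minimal sketch of the fall-through, with hypothetical handler methods; without 
 the break, dispatching SETSTREAMTHROUGHPUT also executes the REBUILD branch:
 {code}
 // Illustrative only: a missing break makes one switch case fall through to the next.
 public class DispatchSketch
 {
     enum NodeCommand { SETSTREAMTHROUGHPUT, REBUILD }

     static void dispatch(NodeCommand cmd)
     {
         switch (cmd)
         {
             case SETSTREAMTHROUGHPUT:
                 setStreamThroughput();
                 break; // the missing break: without it, execution continues into REBUILD
             case REBUILD:
                 rebuild();
                 break;
         }
     }

     static void setStreamThroughput() { System.out.println("set stream throughput"); }
     static void rebuild() { System.out.println("rebuild"); }
 }
 {code}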

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3795) Unable to join the mailing list

2012-01-27 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3795.
-

Resolution: Not A Problem

 Unable to join the mailing list
 ---

 Key: CASSANDRA-3795
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3795
 Project: Cassandra
  Issue Type: Bug
Reporter: Krassimir Kostov

 Hi!
 Since Jan 25, I have been trying several times to join the mailing list at 
 cassandra-u...@incubator.apache.org, but each time I tried, I got the 
 following email.  Please help resolve the issue.  Thanks!
  Date: Fri, 27 Jan 2012 04:30:59 +
  From: mailer-dae...@apache.org
  To: x...@yyy.zzz
  Subject: failure notice
  
  Hi. This is the qmail-send program at apache.org.
  I'm afraid I wasn't able to deliver your message to the following addresses.
  This is a permanent error; I've given up. Sorry it didn't work out.
  
  cassandra-user-allow-subscribe-XXX=yyy@incubator.apache.org:
  This mailing list has moved to user at cassandra.apache.org.
  
  --- Below this line is a copy of the message.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3753) Update CqlPreparedResult to provide type information

2012-01-26 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3753.
-

Resolution: Fixed
  Reviewer: jbellis
  Assignee: Sylvain Lebresne

Committed, thanks

 Update CqlPreparedResult to provide type information
 

 Key: CASSANDRA-3753
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3753
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.1
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Critical
  Labels: cql
 Fix For: 1.1

 Attachments: 0001-3753-cql3.patch, 0002-Thrift-gen-file-changes.patch


 As discussed on CASSANDRA-3634, adding type information to a prepared 
 statement would allow more client-side error checking.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3774) cannot alter compaction strategy to leveled

2012-01-24 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3774.
-

Resolution: Duplicate

I don't think you're using current trunk because this was fixed by 
CASSANDRA-3691.

 cannot alter compaction strategy to leveled
 ---

 Key: CASSANDRA-3774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3774
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1
Reporter: Jackson Chung
Assignee: Sylvain Lebresne
 Fix For: 1.1


 This happens in trunk (probably trunk only).
 When changing the compaction strategy to leveled (via the cli), it fails. The 
 C* log shows an assertion error in org.apache.cassandra.db.DecoratedKey. 
 It looks like it is the key being null, as the LeveledManifest constructs it 
 with null:
 {code}
 lastCompactedKeys[i] = new DecoratedKey(cfs.partitioner.getMinimumToken(), 
 null);
 {code}
 The DecoratedKey in 1.0 only checks the assertion on the token. CASSANDRA-1034 
 changed the assertion in trunk to include the key:
 {code}
 public DecoratedKey(T token, ByteBuffer key)
 {
 assert token != null && key != null && key.remaining() > 0;
 {code}
 {noformat}
 ERROR [pool-2-thread-2] 2012-01-23 12:27:47,274 Cassandra.java (line 4228) 
 Internal error processing system_update_column_family
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
 at 
 org.apache.cassandra.thrift.CassandraServer.applyMigrationOnStage(CassandraServer.java:861)
 at 
 org.apache.cassandra.thrift.CassandraServer.system_update_column_family(CassandraServer.java:1053)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$system_update_column_family.process(Cassandra.java:4222)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:3077)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:188)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at 
 org.apache.cassandra.thrift.CassandraServer.applyMigrationOnStage(CassandraServer.java:853)
 ... 7 more
 Caused by: java.lang.RuntimeException: 
 java.lang.reflect.InvocationTargetException
 at 
 org.apache.cassandra.config.CFMetaData.createCompactionStrategyInstance(CFMetaData.java:726)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.maybeReloadCompactionStrategy(ColumnFamilyStore.java:164)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.reload(ColumnFamilyStore.java:148)
 at 
 org.apache.cassandra.db.migration.UpdateColumnFamily.applyModels(UpdateColumnFamily.java:86)
 at 
 org.apache.cassandra.db.migration.Migration.apply(Migration.java:156)
 at 
 org.apache.cassandra.thrift.CassandraServer$2.call(CassandraServer.java:846)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 ... 3 more
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.cassandra.config.CFMetaData.createCompactionStrategyInstance(CFMetaData.java:708)
 ... 10 more
 Caused by: java.lang.AssertionError
 at org.apache.cassandra.db.DecoratedKey.&lt;init&gt;(DecoratedKey.java:55)
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.&lt;init&gt;(LeveledManifest.java:78)
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.create(LeveledManifest.java:84)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.&lt;init&gt;(LeveledCompactionStrategy.java:74)
 ... 15 more
 {noformat}
 There is also another bug in the cli: it does not provide a meaningful 
 error/stack trace, even with --debug:
 {noformat}
 [default@RequestAnalytic] update column family ServiceName with 
 compaction_strategy='LeveledCompactionStrategy';
 

[jira] [Resolved] (CASSANDRA-3594) MurmurRandomPartitioner

2012-01-23 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3594.
-

Resolution: Duplicate

Resolving this one since CASSANDRA-3772 has been opened to do the same (and has 
arguably a more easily searchable title)

  MurmurRandomPartitioner
 

 Key: CASSANDRA-3594
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3594
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Sylvain Lebresne
Priority: Minor

 Citing Jonathan from CASSANDRA-3545:
 {quote}
 Murmur is substantially faster than MD5, especially v3 (CASSANDRA-2975), and 
 with CASSANDRA-1034 done we don't need to rely on tokens being unique. Murmur 
 gives quite good hash distribution, which is the main thing we care about for 
 partitioning.
 {quote}
 I concur.
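 For illustration only (this is not the partitioner implementation that was 
 eventually committed anywhere), a sketch of deriving a long token from a row 
 key with Guava's Murmur3:
 {code}
 import com.google.common.hash.Hashing;
 import java.nio.charset.StandardCharsets;
 
 public class Murmur3TokenSketch
 {
     // Hash the raw key bytes with Murmur3 (128-bit) and keep the first
     // 8 bytes as a signed long token; distribution quality, not
     // cryptographic strength, is what matters for partitioning.
     public static long token(byte[] key)
     {
         return Hashing.murmur3_128().hashBytes(key).asLong();
     }
 
     public static void main(String[] args)
     {
         System.out.println(token("somekey".getBytes(StandardCharsets.UTF_8)));
     }
 }
 {code}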

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3750) Migrations and Schema CFs use disk space proportional to the square of the number of CFs

2012-01-18 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3750.
-

Resolution: Duplicate

While it is not yet committed, CASSANDRA-1391 will almost surely fix that, so 
marking this one as a duplicate.

 Migrations and Schema CFs use disk space proportional to the square of the 
 number of CFs
 

 Key: CASSANDRA-3750
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3750
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.1
 Environment: Linux (CentOS 5.7)
Reporter: John Chakerian
 Attachments: fit.png


 The system keyspace grows proportionally to the square of the number of CFs 
 (more likely, it grows quadratically with the # of schema changes in 
 general). The major offenders in the keyspace are the Migrations table & the 
 Schema table. On clusters with very large #s of CFs (in the low thousands), 
 we think that these large system tables may be contributing to various 
 performance issues.
 The approximate expression is: s = 0.0003253*n^2 + 2.58, where n is the # of 
 keyspaces + # of schemas and s is the size of the system keyspace in 
 megabytes. See the attached plot of the regression curve showing the fit. 
 Sampled data: 
 {noformat}
 NUM_CFS SYSTEM_SIZE_IN_MB
 100 4.4
 200 15
 300 32
 400 55
 500 85
 600 120
 700 162
 800 211
 900 266
 1000 327
 {noformat}
 This was hit in 1.0.1, but is almost certainly not version specific. 
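 As a quick sanity check of the fit, the expression above reproduces the 
 sampled sizes: at n = 1000, 0.0003253 * 1000^2 + 2.58 = 327.88 MB versus the 
 327 MB measured. A trivial sketch of that arithmetic:
 {code}
 public class SystemKeyspaceGrowth
 {
     // Regression from the report: size in MB as a function of n
     // (n = # of keyspaces + # of schemas, per the description above).
     static double predictedSizeMb(int n)
     {
         return 0.0003253 * n * n + 2.58;
     }
 
     public static void main(String[] args)
     {
         for (int n : new int[] { 100, 500, 1000 })
             System.out.printf("n=%d -> ~%.1f MB%n", n, predictedSizeMb(n));
         // prints: n=100 -> ~5.8 MB, n=500 -> ~83.9 MB, n=1000 -> ~327.9 MB
     }
 }
 {code}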

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3727) Fix unit tests failure

2012-01-12 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3727.
-

Resolution: Fixed

I still had intermittent failures of ColumnFamilyStoreTest, but that was 
because SystemTable.isIndexBuilt() was suffering from the same 'I forgot to 
expunge tombstones' problem as SystemTable.loadTokens(). I took it on myself 
to commit the same fix for that instance directly (and checked that no other 
method had this problem).
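As an illustration only (the real fix lives in SystemTable and may well look 
different), the general shape of the 'expunge tombstones' problem is reading a 
row and treating deleted columns as live; a sketch of the kind of guard 
needed, assuming the 1.0-era ColumnFamily/IColumn API:

{code}
import org.apache.cassandra.db.ColumnFamily;
import org.apache.cassandra.db.IColumn;

public final class IndexBuiltCheckSketch
{
    // Sketch only: a column returned from a read may be a tombstone, and an
    // "is the index built?" style check must not count it as a live marker.
    public static boolean hasLiveColumn(ColumnFamily cf)
    {
        if (cf == null)
            return false;
        for (IColumn column : cf.getSortedColumns())
            if (!column.isMarkedForDelete())
                return true; // at least one live (non-tombstone) column
        return false;
    }
}
{code}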

So closing this as all tests are now passing.

However I'd be interested to know if anyone else is seeing the 'unable to 
create link' stack trace during tests, because if so we should probably open 
another ticket to investigate.

 Fix unit tests failure
 --

 Key: CASSANDRA-3727
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3727
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.0.7
Reporter: Sylvain Lebresne
Priority: Blocker
 Fix For: 1.0.7

 Attachments: 3727.txt, CASSANDRA-3727-CliTest-timeout-fix.patch


 On current 1.0 branch (and on my machine: Linux), I have the following unit 
 test failures:
 * CliTest and EmbeddedCassandraTest: they both first kind of pass (JUnit 
 first prints a message with no failures in it), then hang until the JUnit 
 timeout and fail with a 'Timeout occurred'. In other words, the tests 
 themselves are passing, but something they do prevents the process from 
 exiting cleanly, leading to a JUnit timeout. I don't want to discard that as 
 not a problem, because if something can make the process not exit cleanly, 
 this can be a pain for restarts (and in particular upgrades) and hence would 
 basically be a regression. I'm marking the ticket as blocker (for the release 
 of 1.0.7) mostly because of this one.
 * SystemTableTest: throws an AssertionError. I haven't checked yet, so that 
 could be an easy one to fix.
 * RemoveTest: it fails, saying that '/127.0.0.1:7010 is in use by another 
 process' (consistently). But I have no other process running on port 7010. 
 It's likely just a problem with the test, but it's new and in the meantime 
 removes are not tested.
 * I also see a bunch of stack traces with errors like:
 {noformat}
 [junit] ERROR 10:01:59,007 Fatal exception in thread 
 Thread[NonPeriodicTasks:1,5,main]
 [junit] java.lang.RuntimeException: java.io.IOException: Unable to create 
 hard link from build/test/cassandra/data/Keyspace1/Indexed1-hc-1-Index.db to 
 /home/mcmanus/Git/cassandra/build/test/cassandra/data/Keyspace1/backups/Indexed1-hc-1-Index.db
  (errno 17)
 {noformat}
 (with SSTableReaderTest). This does not make the tests fail, but it is still 
 worth investigating. It may be due to CASSANDRA-3101.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2749) fine-grained control over data directories

2012-01-04 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-2749.
-

Resolution: Fixed
  Assignee: Sylvain Lebresne

Committed with nits above fixed, thanks Pavel.

 fine-grained control over data directories
 --

 Key: CASSANDRA-2749
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2749
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.1

 Attachments: 0001-2749.patch, 
 0001-Make-it-possible-to-put-column-families-in-subdirect.patch, 
 0001-non-backwards-compatible-patch-for-2749-putting-cfs-.patch.gz, 
 0002-fix-unit-tests.patch, 0003-Fixes.patch, 2749.tar.gz, 
 2749_backwards_compatible_v1.patch, 2749_backwards_compatible_v2.patch, 
 2749_backwards_compatible_v3.patch, 2749_backwards_compatible_v4.patch, 
 2749_backwards_compatible_v4_rebase1.patch, 2749_not_backwards.tar.gz, 
 2749_proper.tar.gz


 Currently Cassandra supports multiple data directories but no way to control 
 what sstables are placed where. Particularly for systems with mixed SSDs and 
 rotational disks, it would be nice to pin frequently accessed columnfamilies 
 to the SSDs.
 Postgresql does this with tablespaces 
 (http://www.postgresql.org/docs/9.0/static/manage-ag-tablespaces.html) but we 
 should probably avoid using that name because of confusing similarity to 
 keyspaces.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3694) ClassCastException during hinted handoff

2012-01-04 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3694.
-

Resolution: Fixed
  Reviewer: jbellis

Committed, thanks

 ClassCastException during hinted handoff
 

 Key: CASSANDRA-3694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1
Reporter: Brandon Williams
Assignee: Sylvain Lebresne
 Fix For: 1.1

 Attachments: 3694.patch


 {noformat}
 ERROR 08:51:00,200 Fatal exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.ClassCastException: org.apache.cassandra.dht.BigIntegerToken cannot 
 be cast to org.apache.cassandra.db.RowPosition
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getSequentialIterator(ColumnFamilyStore.java:1286)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1356)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleAllDeliveries(HintedHandOffManager.java:351)
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:84)
 at 
 org.apache.cassandra.db.HintedHandOffManager$1.run(HintedHandOffManager.java:119)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3655) NPE when running upgradesstables

2012-01-03 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3655.
-

Resolution: Fixed
  Assignee: Jonathan Ellis  (was: Tupshin Harper)

Committed, as even if it didn't fix the issue (which it probably did), it's an 
improvement.

 NPE when running upgradesstables
 

 Key: CASSANDRA-3655
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3655
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.5
 Environment: 1.0.5 + patch for 
 https://issues.apache.org/jira/browse/CASSANDRA-3618
Reporter: Tupshin Harper
Assignee: Jonathan Ellis
  Labels: compaction
 Fix For: 1.0.7

 Attachments: 3655.txt


 Running a test upgrade from 0.7(version f sstables) to 1.0.
 upgradesstables runs for about 40 minutes and then NPE's when trying to 
 retrieve a key.
 No files have been successfully upgraded. Likely related is that scrub 
 (without having run upgrade) consumes all RAM and OOMs.
 A possible theory is that a lot of paths call IPartitioner's decorateKey, 
 and, at least in the RandomPartitioner implementation, if any of those 
 callers pass a null ByteBuffer, the key will be null, as in the stack trace 
 below.
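 If that theory holds, a cheap precondition at the decorateKey() call sites 
 would turn the NPE in the trace below into an immediate, attributable failure 
 at the caller; a self-contained sketch with stand-in names (not the attached 
 3655.txt):
 {code}
 import java.nio.ByteBuffer;
 
 // Simplified stand-in showing the failure mode: a null key that reaches
 // decorateKey() only blows up much later, far from the real culprit.
 final class PartitionerSketch
 {
     Object decorateKey(ByteBuffer key)
     {
         if (key == null)
             throw new IllegalArgumentException("null row key passed to decorateKey");
         return key; // the real code would return a DecoratedKey
     }
 
     public static void main(String[] args)
     {
         new PartitionerSketch().decorateKey(null); // fails fast, right here
     }
 }
 {code}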
 java.util.concurrent.ExecutionException: java.lang.NullPointerException
   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:203)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:219)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:970)
   at 
 org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:1540)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
   at sun.rmi.transport.Transport$1.run(Transport.java:159)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.removeDeletedAndOldShards(PrecompactedRow.java:65)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.&lt;init&gt;(PrecompactedRow.java:92)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:137)
   at 
 

[jira] [Resolved] (CASSANDRA-3658) Fix smallish problems found by FindBugs

2011-12-22 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3658.
-

Resolution: Fixed
  Reviewer: jbellis
  Assignee: Sylvain Lebresne

I'm good with keeping the assert. Committed all except 09. Thanks

 Fix smallish problems found by FindBugs
 --

 Key: CASSANDRA-3658
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3658
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
  Labels: findbugs
 Fix For: 1.1

 Attachments: 0001-Respect-Future-semantic.patch, 
 0002-Avoid-race-when-reloading-snitch-file.patch, 
 0003-use-static-inner-class-when-possible.patch, 0004-Remove-dead-code.patch, 
 0005-Protect-against-signed-byte-extension.patch, 
 0006-Add-hashCode-method-when-equals-is-overriden.patch, 
 0007-Inverse-argument-of-compare-instead-of-negating-to-a.patch, 
 0008-stop-pretending-Token-is-Serializable-LocalToken-is-.patch, 
 0009-remove-useless-assert-that-is-always-true.patch, 
 0010-Add-equals-and-hashCode-to-Expiring-column.patch


 I've just run (the newly released) FindBugs 2 out of curiosity. Attaching a 
 number of patches related to issues raised by it. There is nothing major at 
 all, so all patches are against trunk.
 I've tried to keep each issue to its own patch with a self-describing title. 
 It is far from covering all FindBugs alerts, but it's a picky tool so I've 
 tried to address only what felt at least vaguely useful. Those are still 
 mostly nits (only patch 2 is probably an actual bug).
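 As a generic example of the kind of nit these patches address (illustrative 
 only, taken from the 'signed byte extension' patch title rather than its 
 actual diff):
 {code}
 public class SignedByteExtension
 {
     public static void main(String[] args)
     {
         byte b = (byte) 0xC8;  // 200 as an unsigned byte
         int wrong = b;         // sign-extends to -56
         int right = b & 0xFF;  // masks back to 200
         System.out.println(wrong + " vs " + right);
     }
 }
 {code}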

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3643) Cassandra C CQL driver

2011-12-19 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3643.
-

Resolution: Duplicate

Closing this as a duplicate of CASSANDRA-2478. This is *not* saying C support 
wouldn't be useful/needed, but there are just not that many possibilities:
* either thrift adds better support for it -- this has to do with the thrift 
project.
* or we wait for CASSANDRA-2478 and then someone creates a C driver using that 
(note that CQL drivers live out of tree, so even post-2478 there is not much 
point in leaving this ticket open).

 Cassandra C CQL driver
 --

 Key: CASSANDRA-3643
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3643
 Project: Cassandra
  Issue Type: Wish
  Components: Drivers
Affects Versions: 1.0.6
 Environment: all
Reporter: Vlad Paiu
Priority: Blocker
  Labels: C, cql, driver

 It's really a shame that such a great project as Cassandra doesn't support 
 a way to be used from within a C application.
 Thrift has never worked for C, or it is very poorly documented. Either way, 
 integrating Cassandra with an application written in C is just not possible 
 in an elegant manner at the moment.
 With the development of CQL, it would really be great if one could run CQL 
 commands from within a C library, very much like libmysqlclient.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3601) get_count NullPointerException with counters

2011-12-09 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3601.
-

Resolution: Fixed

+1, committed. Thanks Greg.

 get_count NullPointerException with counters
 

 Key: CASSANDRA-3601
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3601
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.5
Reporter: Greg Hinkle
  Labels: counters
 Fix For: 1.0.6

 Attachments: trunk-3601.txt


 get_count doesn't currently work for counter columns or super counter 
 columns. The fix seems to be pretty simple.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3595) Explore using simple byte comparison for secondary indexes row instead partitioner ordered

2011-12-08 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3595.
-

Resolution: Invalid

Hum, yes you're right. I thought we could just return results in byte order, 
but obviously that doesn't work. I was too quick, my bad.

 Explore using simple byte comparison for secondary indexes row instead 
 partitioner ordered
 --

 Key: CASSANDRA-3595
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3595
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Sylvain Lebresne
Priority: Minor

 As remarked on CASSANDRA-3545, I don't think we absolutely need to have the 
 row sorted by the partitioner for secondary indexes. And calculating hashes 
 takes a significant amount of time (even if we take a faster hash: 
 CASSANDRA-3594)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3422) Can create a Column Family with comparator CounterColumnType which is subsequently unusable

2011-12-07 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3422.
-

   Resolution: Fixed
Fix Version/s: 0.8.9

 Can create a Column Family with comparator CounterColumnType which is 
 subsequently unusable
 ---

 Key: CASSANDRA-3422
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3422
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
Reporter: Kelley Reynolds
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 0.8.9, 1.0.6

 Attachments: 3422-v2.patch, 3422.patch


 It's probably the case that this shouldn't be allowed at all but one is 
 currently allowed to create a Column Family with comparator CounterColumnType 
 which then appears unusable.
 CREATE COLUMNFAMILY comparator_cf_counter (id text PRIMARY KEY) WITH 
 comparator=CounterColumnType
 # Fails
 UPDATE comparator_cf_counter SET 1=1 + 1 WHERE id='test_key'
 Error = invalid operation for non commutative columnfamily 
 comparator_cf_counter

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3520) Unit tests are hanging on 0.8 branch

2011-11-28 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3520.
-

Resolution: Fixed
  Reviewer: jbellis
  Assignee: Sylvain Lebresne

Committed

 Unit tests are hanging on 0.8 branch
 ---

 Key: CASSANDRA-3520
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3520
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: Linux
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 0.8.8

 Attachments: 0001-Use-durable-writes-for-system-ks.patch, 3520.patch


 As the summary says, the unit tests on current 0.8 are just hanging after 
 CliTest (it's apparently not the case on Windows, but it is on Linux and 
 MacOSX).
 Not sure what's going on, but what I can tell is that it's enough to run 
 CliTest to have it hang after the test successfully passes (i.e., JUnit just 
 waits indefinitely for the VM to exit). Even weirder, it seems that it is the 
 counter increments in CliTest that make it hang; if you comment those 
 statements out, it stops hanging. However, nothing seems to go wrong with the 
 increment itself (the test passes) and it doesn't even trigger anything 
 (typically sendToHintedEndpoint is not called because there is only one node).
 Looking at the stack when the VM is hanging (attached), there is nothing 
 specific to counters in there, and nothing that struck me as odd (but I could 
 have missed something). There are a few thrift threads running 
 (CASSANDRA-3335), but why that would only be a problem for the tests in this 
 situation is a mystery to me.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3514) CounterColumnFamily Compaction error (ArrayIndexOutOfBoundsException)

2011-11-23 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3514.
-

Resolution: Fixed

Committed, thanks

 CounterColumnFamily Compaction error (ArrayIndexOutOfBoundsException) 
 --

 Key: CASSANDRA-3514
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3514
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.3
Reporter: Eric Falcao
Assignee: Sylvain Lebresne
  Labels: compaction
 Fix For: 0.8.8, 1.0.4

 Attachments: 3514.patch


 On a single node, I'm seeing the following error when trying to compact a 
 CounterColumnFamily. This appears to have started with version 1.0.3.
 nodetool -h localhost compact TRProd MetricsAllTime
 Error occured during compaction
 java.util.concurrent.ExecutionException: 
 java.lang.ArrayIndexOutOfBoundsException
   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:250)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:1471)
   at 
 org.apache.cassandra.service.StorageService.forceTableCompaction(StorageService.java:1523)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
   at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
   at sun.rmi.transport.Transport$1.run(Transport.java:159)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:619)
 Caused by: java.lang.ArrayIndexOutOfBoundsException
   at 
 org.apache.cassandra.utils.ByteBufferUtil.arrayCopy(ByteBufferUtil.java:292)
   at 
 org.apache.cassandra.db.context.CounterContext$ContextState.copyTo(CounterContext.java:792)
   at 
 org.apache.cassandra.db.context.CounterContext.removeOldShards(CounterContext.java:709)
   at 
 org.apache.cassandra.db.CounterColumn.removeOldShards(CounterColumn.java:260)
   at 
 org.apache.cassandra.db.CounterColumn.mergeAndRemoveOldShards(CounterColumn.java:306)
   at 
 org.apache.cassandra.db.CounterColumn.mergeAndRemoveOldShards(CounterColumn.java:271)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.removeDeletedAndOldShards(PrecompactedRow.java:86)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.&lt;init&gt;(PrecompactedRow.java:102)
   at 
 

[jira] [Resolved] (CASSANDRA-3510) Incorrect query results due to invalid SSTable.maxTimestamp

2011-11-22 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3510.
-

   Resolution: Fixed
Fix Version/s: 1.0.4
 Reviewer: amorton
 Assignee: Sylvain Lebresne

Committed

 Incorrect query results due to invalid SSTable.maxTimestamp
 ---

 Key: CASSANDRA-3510
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3510
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.3
Reporter: Aaron Morton
Assignee: Sylvain Lebresne
Priority: Critical
 Fix For: 1.0.4

 Attachments: 0001-3510-ignore-maxTimestamp-if-Long.MIN_VALUE.patch, 
 0002-3510-update-maxTimestamp-during-repair.patch, 3510.patch


 related to CASSANDRA-3446
 (sorry this is so long, took me a bit to work through it all and there is a 
 lot of new code :) )
  
 h1. Summary
 SSTable.maxTimestamp for files created before 1.0 defaults to Long.MIN_VALUE, 
 and this means the wrong data is returned from queries. 
  
 h2. Details 
 Noticed on a cluster that was upgraded from 0.8.X to 1.X, it then had trouble 
 similar to CASSANDRA-3446. It was rolled back to 0.8 and the migrated to 
 1.0.3. 
 4 Node cluster, all files upgraded to hb format. 
 In a super CF there are situations where a get for a sub columns returns a 
 different value than a get for the column. .e.g. 
 {noformat}
 [default@XXX] get Users[ascii('username')]['meta']['password'];
 =&gt; (column=password, value=3130323130343130, timestamp=1307352647576000)
 [default@XX] get Users[ascii('username')]['meta']; 
 (snip)   
 =&gt; (column=password, value=3034323131303034, timestamp=1319563673493000)
 {noformat}
 The correct value is the second one. 
 I added logging after line 109 in 
 o.a.c.db.CollationController.collectTimeOrderedData() to log the sstable 
 name and the file's max timestamp; this is what I got:
 {code:java}
 for (SSTableReader sstable : view.sstables)
 {
 long currentMaxTs = sstable.getMaxTimestamp();
 logger.debug(String.format("Got sstable %s and max TS %d", sstable, 
 currentMaxTs));
 reduceNameFilter(reducedFilter, container, currentMaxTs);
 {code}
 {noformat}
 DEBUG 14:08:46,012 Got sstable 
 SSTableReader(path='/var/lib/cassandra/data/X/Users-hb-12348-Data.db') and 
 max TS 1321824847534000
 DEBUG 14:08:47,231 Got sstable 
 SSTableReader(path='/var/lib/cassandra/data/X/Users-hb-12346-Data.db') and 
 max TS 1321813380793000
 DEBUG 14:08:49,879 Got sstable 
 SSTableReader(path='/var/lib/cassandra/data/X/Users-hb-12330-Data.db') and 
 max TS -9223372036854775808
 DEBUG 14:08:49,880 Got sstable 
 SSTableReader(path='/var/lib/cassandra/data/X/Users-hb-12325-Data.db') and 
 max TS -9223372036854775808
 {noformat}
 The key I was reading is present in files 12330 and 12325, the first contains 
 the *old / wrong* value with timestamp 1307352647576000 above. The second 
 contains the *new / correct* value with timestamp 1319563673493000.
 **Updated:** Incorrect, it was a later file that had the correct value, see 
 the first comment. 
 When CollationController.collectTimeOrderedData() processes the 12325 file 
 (after processing the 12330 file) while looping over the sstables, the call 
 to reduceNameFilter() removes the column from the filter because the column 
 read from the 12330 file has a timestamp of 1307352647576000 and the 12325 
 file incorrectly has a max timestamp of -9223372036854775808.
 SSTableMetadata is reading the max time stamp from the stats file, but it is 
 Long.MIN_VALUE. I think this happens because scrub creates the SSTableWriter 
 using cfs.createCompactionWriter() which sets the maxTimestamp in the meta 
 data collector according to the maxTimestamp in the meta data for the file(s) 
 that will be scrubbed / compacted. But for pre 1.0 format files the default 
 in SSTableMetadata is Long.MIN_VALUE, (see SSTableMetaData.deserialize() and 
 the ctor). So scrubbing a pre 1.0 file will write stats files that have 
 maxTimestamp as Long.MIN_VALUE.
 During scrubbing the SSTableWriter does not update the maxTimestamp because 
 append(AbstractCompactedRow) is called, which expects that 
 cfs.createCompactionWriter() was able to set the correct maxTimestamp on the 
 meta data. Compaction also uses append(AbstractCompactedRow), so it may 
 create an SSTable with an incorrect maxTimestamp if one of the input files 
 started life as a pre 1.0 file and has a bad maxTimestamp. 
 It looks like the only time the maxTimestamp is calculated is when the 
 SSTable is originally written. So the error from the old files will be 
 carried along. 
 e.g. If the files a,b and c have the maxTimestamps 10, 100 and Long.MIN_VALUE 
 compaction will write a SSTable with maxTimestamp 100. However 
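 Given the attached 0001-3510-ignore-maxTimestamp-if-Long.MIN_VALUE.patch, the 
 shape of the fix is presumably to treat Long.MIN_VALUE as "unknown" rather 
 than as a real maximum, so the loop above would skip reduceNameFilter() for 
 such files. A sketch of that guard (not the actual patch):
 {code}
 public final class MaxTimestampGuard
 {
     /** Pre-1.0 sstables scrubbed on 1.0.3 carry Long.MIN_VALUE here. */
     public static boolean isMaxTimestampReliable(long maxTimestamp)
     {
         return maxTimestamp != Long.MIN_VALUE;
     }
 
     public static void main(String[] args)
     {
         // the files from the debug output above
         System.out.println(isMaxTimestampReliable(1321824847534000L)); // true
         System.out.println(isMaxTimestampReliable(Long.MIN_VALUE));    // false
     }
 }
 {code}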

[jira] [Resolved] (CASSANDRA-3481) During repair, incorrect data size & Connection reset errors. Repair unable to complete.

2011-11-16 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3481.
-

   Resolution: Fixed
Fix Version/s: (was: 1.0.4)
   1.0.3
 Reviewer: jbellis

Forgot to close this one but it's been committed already.

 During repair, incorrect data size & Connection reset errors. Repair 
 unable to complete.
 

 Key: CASSANDRA-3481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.2
Reporter: Eric Falcao
Assignee: Sylvain Lebresne
  Labels: connection, repair
 Fix For: 1.0.3

 Attachments: 3481-v2.patch, 3481.patch


 This has been happening since 1.0.2. I wasn't on 1.0 for very long but I'm 
 fairly certain repair was working ok. Repair worked decently for me in 0.8 
 (data bloat sucked). All my SSTables are version h.
 On one node:
 java.lang.AssertionError: incorrect row data size 596045 written to 
 /mnt/cassandra/data/TRProd/Metrics1m-tmp-h-25036-Data.db; correct is 586675
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:253)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:146)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:87)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:184)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:81)
 On the other node:
 4999 - 0%, /mnt/cassandra/data/TRProd/Metrics1m-h-24953-Data.db sections=1707 
 progress=0/1513497639 - 0%, 
 /mnt/cassandra/data/TRProd/Metrics1m-h-25000-Data.db sections=635 
 progress=0/53400713 - 0%, 
 /mnt/cassandra/data/TRProd/Metrics1m-h-25002-Data.db sections=570 
 progress=0/709993 - 0%, /mnt/cassandra/data/TRProd/Metrics1m-h-25003-Data.db 
 sections=550 progress=0/449498 - 0%, 
 /mnt/cassandra/data/TRProd/Metrics1m-h-25005-Data.db sections=516 
 progress=0/316301 - 0%], 6 sstables.
  INFO [StreamStage:1] 2011-11-09 19:45:22,795 StreamOutSession.java (line 
 203) Streaming to /10.38.69.192
 ERROR [Streaming:1] 2011-11-09 19:47:47,964 AbstractCassandraDaemon.java 
 (line 133) Fatal exception in thread Thread[Streaming:1,1,main]
 java.lang.RuntimeException: java.net.SocketException: Connection reset
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:619)
 Caused by: java.net.SocketException: Connection reset
   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:96)
   at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
   at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:133)
   at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
   at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
   at 
 org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:181)
   at 
 org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:145)
   at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   ... 3 more
 ERROR [Streaming:1] 2011-11-09 19:47:47,970 AbstractCassandraDaemon.java 
 (line 133) Fatal exception in thread Thread[Streaming:1,1,main]
 java.lang.RuntimeException: java.net.SocketException: Connection reset
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:619)
 Caused by: java.net.SocketException: Connection reset
   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:96)
   at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
   at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:133)
   at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
   at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
   at 
 org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:181)
   at 
 

[jira] [Resolved] (CASSANDRA-3434) Explore using Guava (or guava inspired) faster bytes comparison

2011-11-14 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3434.
-

   Resolution: Fixed
Fix Version/s: 1.1
 Reviewer: jbellis
 Assignee: Sylvain Lebresne

Committed

 Explore using Guava (or guava inspired) faster bytes comparison
 ---

 Key: CASSANDRA-3434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3434
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.1

 Attachments: 3434.patch


 Guava uses sun.misc.Unsafe to do a faster byte array comparison (one long at 
 a time), as noted in HADOOP-7761.
 We should probably look into it.
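 For reference, a sketch of what using the Guava comparator looks like 
 (UnsignedBytes.lexicographicalComparator() picks the Unsafe-backed 
 implementation when it is available; illustration only):
 {code}
 import com.google.common.primitives.UnsignedBytes;
 import java.util.Comparator;
 
 public class FastBytesCompareSketch
 {
     public static void main(String[] args)
     {
         // Unsafe-backed when available, comparing eight bytes at a time
         // instead of one.
         Comparator<byte[]> cmp = UnsignedBytes.lexicographicalComparator();
         byte[] a = { 0x01, (byte) 0xFF };
         byte[] b = { 0x02, 0x00 };
         System.out.println(cmp.compare(a, b)); // negative: a sorts before b
     }
 }
 {code}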

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3456) Automatically create SHA1 of new sstables

2011-11-11 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3456.
-

Resolution: Fixed

Fix committed

 Automatically create SHA1 of new sstables
 -

 Key: CASSANDRA-3456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3456
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.0.3

 Attachments: 0001-Fix-pattern-under-window.patch, 3456.patch, 
 system.log


 Compressed sstables have block checksums, which is great, but non-compressed 
 sstables don't, for technical/compatibility reasons that I'm not criticizing. 
 It's a bit annoying because when someone comes up with a corrupted file, we 
 really have nothing to help decide whether it is bitrot or not. However, it 
 would be fairly trivial/cheap to compute the SHA1 (or other hash) of whole 
 sstables when creating them. And if it's a new, separate sstable component, 
 we don't even have to implement anything to check the hash. It would only be 
 there to (manually) check for bitrot when corruption is suspected by the 
 user, or to, say, check the integrity of backups.
 I'm absolutely not pretending that it's a perfect solution, and for 
 compressed sstables the block checksums are clearly more fine-grained, but 
 it's easy to add and could prove useful for non-compressed files.
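 A sketch of the kind of whole-file digest being proposed (plain JDK, not the 
 committed code; the sstable path below is a placeholder):
 {code}
 import java.io.FileInputStream;
 import java.io.InputStream;
 import java.security.DigestInputStream;
 import java.security.MessageDigest;
 
 public class SSTableSha1Sketch
 {
     public static String sha1Hex(String path) throws Exception
     {
         MessageDigest digest = MessageDigest.getInstance("SHA-1");
         try (InputStream in = new DigestInputStream(new FileInputStream(path), digest))
         {
             byte[] buffer = new byte[8192];
             while (in.read(buffer) != -1)
                 ; // DigestInputStream feeds the digest as the file streams by
         }
         StringBuilder sb = new StringBuilder();
         for (byte b : digest.digest())
             sb.append(String.format("%02x", b));
         return sb.toString();
     }
 
     public static void main(String[] args) throws Exception
     {
         // hypothetical path; the real component would sit next to -Data.db
         System.out.println(sha1Hex("Keyspace1-CF-hc-1-Data.db"));
     }
 }
 {code}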

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3482) Flush Assertion Error - CF size changed during serialization

2011-11-11 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3482.
-

Resolution: Fixed

+1, committed

 Flush Assertion Error - CF size changed during serialization
 

 Key: CASSANDRA-3482
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3482
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.2
 Environment: RHEL 6
 java version 1.6.0_26
 6 node cluster
Reporter: Dan Hendry
Assignee: Jonathan Ellis
Priority: Critical
 Fix For: 1.0.3

 Attachments: 3482.txt


 I have seen the following assert in the logs - there are no other suspicious 
 or unexpected log messages.
 INFO [FlushWriter:9] 2011-11-10 13:08:58,882 Memtable.java (line 237) Writing 
 Memtable-UserData@1388955390(25676955/430716097 serialized/live bytes, 478913 
 ops)
 ERROR [FlushWriter:9] 2011-11-10 13:08:59,513 AbstractCassandraDaemon.java 
 (line 133) Fatal exception in thread Thread[FlushWriter:9,5,main]
 java.lang.AssertionError: CF size changed during serialization: was 4 
 initially but 3 written
 at 
 org.apache.cassandra.db.ColumnFamilySerializer.serializeForSSTable(ColumnFamilySerializer.java:94)
 at 
 org.apache.cassandra.db.ColumnFamilySerializer.serializeWithIndexes(ColumnFamilySerializer.java:112)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:177)
 at 
 org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:264)
 at org.apache.cassandra.db.Memtable.access$400(Memtable.java:47)
 at org.apache.cassandra.db.Memtable$4.runMayThrow(Memtable.java:289)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Once the error occurs, further MemtablePostFlusher tasks are blocked:
 nodetool tpstats:
   Pool Name             Active   Pending  Completed   Blocked  All 
 time blocked
   MemtablePostFlusher   118 16 0  
0
 It *seems* that all further flushes for the particular CF (in this case 
 UserData) will also result in the same assertion error. Restarting the node 
 fixes the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3465) Wrong counters values when RF > 1

2011-11-11 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3465.
-

Resolution: Not A Problem

 Wrong counters values when RF > 1
 -

 Key: CASSANDRA-3465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
 Environment: Amazon EC2 (cluster of 5 t1.micro), phpCassa 0.8.a.2
Reporter: Alain RODRIGUEZ
Assignee: Sylvain Lebresne
Priority: Critical
 Attachments: 0001-add-debug-infos.patch, logServer0.log, 
 logServer0_cl_all.log, logServer1.log, logServer1_cl_all.log, logServer2.log, 
 logServer2_cl_all.log


 I have got a CF that contains many counters of some events. When I'm at RF = 
 1 and simulate 10 events, they are counted correctly.
 However, when I switch to RF = 3, my counters show wrong values that 
 sometimes change when requested twice (they can return 7, then 5, instead of 
 10 every time).
 I first thought that it was a problem of CL, because I seem to remember 
 reading once that I had to use CL.One for reads and writes with counters. So 
 I tried with CL.One, without success...
 /*-- CODE 
 ---*/
 $servers = array("ec2-xxx-xxx-xxx-xxx.eu-west-1.compute.amazonaws.com",
   "ec2-yyy-yyy-yyy-yyy.eu-west-1.compute.amazonaws.com",
   "ec2-zzz-zzz-zzz-zzz.eu-west-1.compute.amazonaws.com",
   "ec2-aaa-aaa-aaa-aaa.eu-west-1.compute.amazonaws.com",
   "ec2-bbb-bbb-bbb-bbb.eu-west-1.compute.amazonaws.com");
 $pool = new ConnectionPool("mykeyspace", $servers);
 $stats_test = new ColumnFamily($pool, 'stats_test',
   $read_consistency_level=cassandra_ConsistencyLevel::ONE,
   $write_consistency_level=cassandra_ConsistencyLevel::ONE);

 $time = date('YmdH', time());

 for($i=0; $i<10; $i++){
   for($c=1; $c<=3; $c++){
     $stats_test->add($c, $time.':test');
   }
   $counts = $stats_test->multiget(array(1,2,3));
   echo('Counter1: '.$counts[1][$time.':test']."\n");
   echo('Counter2: '.$counts[2][$time.':test']."\n");
   echo('Counter3: '.$counts[3][$time.':test']."\n\n");
 }
 /* END OF CODE 
 -*/
 /*-- OUTPUT 
 */
 Counter1: 1
 Counter2: 1
 Counter3: 1
 Counter1: 2
 Counter2: 2
 Counter3: 2
 Counter1: 3
 Counter2: 3
 Counter3: 3
 Counter1: 3
 Counter2: 4
 Counter3: 4
 Counter1: 4
 Counter2: 5
 Counter3: 3
 Counter1: 5
 Counter2: 6
 Counter3: 3
 Counter1: 6
 Counter2: 7
 Counter3: 4
 Counter1: 4
 Counter2: 8
 Counter3: 7
 Counter1: 5
 Counter2: 9
 Counter3: 8
 Counter1: 8
 Counter2: 4
 Counter3: 9

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3484) Bizarre Compaction Manager Behaviour

2011-11-11 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3484.
-

   Resolution: Fixed
Fix Version/s: 1.0.3
 Reviewer: slebresne

Alright, +1 on the patch here, committed.

 Bizarre Compaction Manager Behaviour
 

 Key: CASSANDRA-3484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.2
 Environment: RHEL 6
 java version 1.6.0_26
 6 node cluster (5 nodes 0.8.6, 1 node 1.0.2 minus CASSANDRA-2503)
Reporter: Dan Hendry
 Fix For: 1.0.3

 Attachments: 3484.txt, compaction.png


 It seems the CompactionManager has gotten itself into a bad state. My 1.0.2 
 node has been up for 20 hours now - checking via JMX, the compaction manager 
 is reporting that it has completed 14,797,412,000 tasks. Yep, that's right: 
 14 billion tasks, and increasing at a rate of roughly 208,400/second. 
 I should point out that I am currently running a major compaction on the 
 node. My theory is that this problem was introduced by CASSANDRA-3363. It 
 looks like SizeTieredCompactionStrategy.getBackgroundTasks() returns a set of 
 tasks without consideration for any in-progress compactions. Compactions are 
 only kicked off if task.markSSTablesForCompaction() returns true 
 (CompactionManager line 127), but the task resubmission is based only on the 
 task list not being empty (CompactionManager line 141). Should the logic not 
 be to only reschedule if a task has actually been executed?
 I am just waiting now for the major compaction to finish to see if the 
 problem goes away, as my theory would suggest.
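 A sketch of the rescheduling logic being suggested (hypothetical shape, not 
 the attached 3484.txt): only resubmit when at least one task in the batch 
 actually started a compaction.
 {code}
 import java.util.List;
 
 // Simplified stand-in for the submission loop described above;
 // CompactionTask here is hypothetical.
 final class BackgroundCompactionSketch
 {
     interface CompactionTask { boolean markSSTablesForCompaction(); void execute(); }
 
     static boolean submit(List<CompactionTask> tasks)
     {
         boolean executedAny = false;
         for (CompactionTask task : tasks)
         {
             // only tasks whose sstables are not already being compacted run
             if (task.markSSTablesForCompaction())
             {
                 task.execute();
                 executedAny = true;
             }
         }
         // reschedule only if something actually ran, instead of whenever
         // the task list was merely non-empty (the behaviour questioned above)
         return executedAny;
     }
 }
 {code}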

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3456) Automatically create SHA1 of new sstables

2011-11-09 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3456.
-

   Resolution: Fixed
Fix Version/s: 1.0.3
 Reviewer: jbellis

Committed

 Automatically create SHA1 of new sstables
 -

 Key: CASSANDRA-3456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3456
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.0.3

 Attachments: 3456.patch


 Compressed sstables have block checksums, which is great, but non-compressed 
 sstables don't, for technical/compatibility reasons that I'm not criticizing. 
 It's a bit annoying because when someone comes up with a corrupted file, we 
 really have nothing to help decide whether it is bitrot or not. However, it 
 would be fairly trivial/cheap to compute the SHA1 (or other hash) of whole 
 sstables when creating them. And if it's a new, separate sstable component, 
 we don't even have to implement anything to check the hash. It would only be 
 there to (manually) check for bitrot when corruption is suspected by the 
 user, or to, say, check the integrity of backups.
 I'm absolutely not pretending that it's a perfect solution, and for 
 compressed sstables the block checksums are clearly more fine-grained, but 
 it's easy to add and could prove useful for non-compressed files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3461) Server-side fatal exception when mixing column types in DynamicCompositeType

2011-11-07 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3461.
-

Resolution: Won't Fix

Resolving as won't fix since there is really nothing we can do about it.

 Server-side fatal exception when mixing column types in DynamicCompositeType
 

 Key: CASSANDRA-3461
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3461
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
 Environment: JDK 1.6.0_26
Reporter: Carlos Carrasco
Assignee: Sylvain Lebresne

 Running this CLI script will cause the Cassandra server to throw a fatal 
 exception, and the CLI to hang for some seconds and then just display null:
 create keyspace Test;
 use Test;
 create column family Composite with comparator ='DynamicCompositeType 
 (a=AsciiType,s=UTF8Type)';
 set Composite[ascii('key')]['s@one']=ascii('value');
 set Composite[ascii('key')]['a@two']=ascii('value');
 It appears DynamicCompositeType does not allow mixing different types of 
 components in the same position for the same row, which makes sense, but 
 shouldn't this be a controlled error passed to the client, instead of a fatal 
 exception thrown on the server side?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3395) Quorum returns incorrect results during hinted handoff

2011-11-04 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3395.
-

   Resolution: Fixed
Fix Version/s: (was: 0.8.8)
   1.0.2

CASSANDRA-3303 has been committed with the fix for this. As said by Jonathan 
above, the fix is 1.0 only.

 Quorum returns incorrect results during hinted handoff
 --

 Key: CASSANDRA-3395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3395
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.0.2

 Attachments: logs.tar.bz2, ttest.py, ttestraw.py


 In a 3 node cluster with RF=3 and using a single coordinator, if 
 monotonically increasing columns are inserted into a row and the latest one 
 sliced (both at QUORUM) during HH replay occasionally this column will not be 
 seen.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3316) Add a JMX call to force cleaning repair sessions (in case they are hang up)

2011-11-03 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3316.
-

Resolution: Fixed
  Reviewer: slebresne

+1, committed.

I don't think it's worth adding a nodetool command (more precisely, I think 
it's a feature that this is not too easy to trigger) because we hopefully 
don't expect people to use it. It's more to have a solution available if it 
comes to that.

 Add a JMX call to force cleaning repair sessions (in case they are hang up)
 ---

 Key: CASSANDRA-3316
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3316
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.8.6
Reporter: Sylvain Lebresne
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.0.2

 Attachments: 3316-v1.txt


 A repair session contains many parts, most of which are not local to the node 
 (implying the node waits on those operations). You request merkle trees, then 
 you schedule streaming (and in 1.0.0, some of the streaming doesn't involve the 
 local node itself). There are lots of places where something can go wrong, and 
 if so it leaves the repair hanging and, as a consequence, leaves repairSession 
 tasks sitting active on the 'AntiEntropy Session' executor.
 Obviously, we should improve repair's detection of those things that can go 
 wrong. CASSANDRA-2433 started this and CASSANDRA-3112 is open to cover as many 
 of the remaining cases as possible, but my bet is that it will be hard to cover 
 everything (and it may not be worth handling very improbable failure 
 scenarios). Besides, CASSANDRA-3112 will involve a change in the wire protocol, 
 so it may take some time to be committed. In the meantime, it would be nice to 
 provide a JMX call to force-terminate repair sessions so that you don't end up 
 in the situation where you have enough 'zombie' sessions on the executor that 
 you can't submit new ones (you could restart the node, but that's ugly). 
 Anyway, it's not a big issue, but it would be simple to add such a JMX call.
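 For reference, a minimal sketch of how such a call could be invoked from a 
 standalone JMX client once the operation exists. The MBean object name below is 
 the standard StorageService one; the operation name is an assumption for 
 illustration, so check the committed patch (3316-v1.txt) for the actual 
 signature.
 {code}
 import javax.management.MBeanServerConnection;
 import javax.management.ObjectName;
 import javax.management.remote.JMXConnector;
 import javax.management.remote.JMXConnectorFactory;
 import javax.management.remote.JMXServiceURL;

 public class ForceCleanRepairSessions
 {
     public static void main(String[] args) throws Exception
     {
         // Default Cassandra JMX port is 7199; adjust host/port as needed.
         JMXServiceURL url = new JMXServiceURL(
             "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
         JMXConnector connector = JMXConnectorFactory.connect(url);
         try
         {
             MBeanServerConnection mbs = connector.getMBeanServerConnection();
             ObjectName storageService =
                 new ObjectName("org.apache.cassandra.db:type=StorageService");
             // Assumed operation name -- the patch may expose it differently.
             mbs.invoke(storageService, "forceTerminateAllRepairSessions",
                        new Object[0], new String[0]);
         }
         finally
         {
             connector.close();
         }
     }
 }
 {code}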

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3401) Cannot create CompositeType using pycassa (worked in 0.8.x)

2011-10-26 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3401.
-

Resolution: Duplicate

Pretty sure this is a duplicate of CASSANDRA-3391.

 Cannot create CompositeType using pycassa (worked in 0.8.x)
 ---

 Key: CASSANDRA-3401
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3401
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Drivers
Affects Versions: 1.0.0
Reporter: Justin Plock

 Using pycassa 1.2.1 against Cassandra 0.8.x, this code worked fine:
 {code}
 key_field_comparator = pycassa.CompositeType(pycassa.TimeUUIDType(), 
 pycassa.UTF8Type())
 value_key_comparator = pycassa.CompositeType(pycassa.UTF8Type(), 
 pycassa.TimeUUIDType())
 SYSTEM_MANAGER.create_column_family('Indexes', 'Items', 
 comparator_type=value_key_comparator, 
 default_validation_class=pycassa.TIME_UUID_TYPE, 
 key_validation_class=key_field_comparator)
 {code}
 However, against Cassandra 1.0, this same code will now hang my python script 
 indefinitely. After killing the program, Cassandra will crash and will throw 
 this exception:
 {code:javascript}
 Exception encountered during startup: Could not inflate CFMetaData for {
 keyspace: Indexes,
 name: Items,
 column_type: Standard,
 comparator_type: 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.TimeUUIDType),
 subcomparator_type: null,
 comment: ,
 row_cache_size: 0.0,
 key_cache_size: 20.0,
 read_repair_chance: 1.0,
 replicate_on_write: true,
 gc_grace_seconds: 864000,
 default_validation_class: org.apache.cassandra.db.marshal.TimeUUIDType,
 key_validation_class: org.apache.cassandra.db.marshal.CompositeType,
 min_compaction_threshold: 4,
 max_compaction_threshold: 32,
 row_cache_save_period_in_seconds: 0,
 key_cache_save_period_in_seconds: 14400,
 row_cache_keys_to_save: 2147483647,
 merge_shards_chance: 0.1,
 id: 1004,
 column_metadata: [],
 row_cache_provider: 
 org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider,
 key_alias: null,
 compaction_strategy: 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,
 compaction_strategy_options: {},
 compression_options: {}
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3404) [patch] fix logging contexts

2011-10-26 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3404.
-

Resolution: Fixed

Committed in r1189073, thanks.

 [patch] fix logging contexts
 

 Key: CASSANDRA-3404
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3404
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Hadoop
Affects Versions: 1.0.0
Reporter: Dave Brosius
Priority: Trivial
 Attachments: log_context.diff


 In a couple of places the logging context doesn't match the class, probably due 
 to a copy/paste bug.
 Fixed.
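 For illustration, the pattern being fixed looks like this; the class names here 
 are hypothetical and not the actual files touched by the patch.
 {code}
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class ColumnFamilyRecordWriter // hypothetical class for illustration
 {
     // Wrong (copy/paste leftover): the logger is created against a different
     // class, so log lines get attributed to the wrong context.
     // private static final Logger logger =
     //     LoggerFactory.getLogger(ColumnFamilyRecordReader.class);

     // Right: the logging context matches the enclosing class.
     private static final Logger logger =
         LoggerFactory.getLogger(ColumnFamilyRecordWriter.class);

     public void write(String value)
     {
         logger.debug("writing {}", value);
     }
 }
 {code}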

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3396) [patch] push down assignments to scopes where they are needed

2011-10-25 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3396.
-

   Resolution: Fixed
Fix Version/s: 1.0.1
 Reviewer: slebresne
 Assignee: Dave Brosius

 [patch] push down assignments to scopes where they are needed
 -

 Key: CASSANDRA-3396
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3396
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation  website
Affects Versions: 1.0.0
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.0.1

 Attachments: assignment_scope.diff


 The code makes assignments at a scope where they may not be needed; the patch 
 pushes these assignments down to where the conditionals have determined that 
 the assignment will actually be used.
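 A small self-contained illustration of the idea, with made-up helper logic 
 rather than the actual patched code:
 {code}
 public class AssignmentScopeExample
 {
     // Before: 'size' is computed even on the path that never uses it.
     static void writeBefore(byte[] row)
     {
         int size = row.length;           // assignment at too wide a scope
         if (row.length > 0)
             System.out.println("writing " + size + " bytes");
     }

     // After: the assignment is pushed down into the branch that needs it.
     static void writeAfter(byte[] row)
     {
         if (row.length > 0)
         {
             int size = row.length;       // computed only when actually used
             System.out.println("writing " + size + " bytes");
         }
     }

     public static void main(String[] args)
     {
         writeBefore(new byte[] { 1, 2, 3 });
         writeAfter(new byte[0]);
     }
 }
 {code}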

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3343) nodetool printing classpath

2011-10-11 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3343.
-

Resolution: Fixed

That's clearly a mistake. Corrected in r1181741.

 nodetool printing classpath
 ---

 Key: CASSANDRA-3343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3343
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Cathy Daw
Assignee: Sylvain Lebresne
Priority: Trivial

 * Get file from: 
 [https://repository.apache.org/content/repositories/orgapachecassandra-046/org/apache/cassandra/apache-cassandra/1.0.0/apache-cassandra-1.0.0-bin.tar.gz]
 * Install C* and start server
 * Run: nodetool -h localhost ring
 {code}
 Cathy-Daws-MacBook-Pro:bin cathy$ ./nodetool -h localhost ring
 ./../conf:./../build/classes/main:./../build/classes/thrift:./../lib/antlr-3.2.jar:./../lib/apache-cassandra-1.0.0.jar:./../lib/apache-cassandra-clientutil-1.0.0.jar:./../lib/apache-cassandra-thrift-1.0.0.jar:./../lib/avro-1.4.0-fixes.jar:./../lib/avro-1.4.0-sources-fixes.jar:./../lib/commons-cli-1.1.jar:./../lib/commons-codec-1.2.jar:./../lib/commons-lang-2.4.jar:./../lib/compress-lzf-0.8.4.jar:./../lib/concurrentlinkedhashmap-lru-1.2.jar:./../lib/guava-r08.jar:./../lib/high-scale-lib-1.1.2.jar:./../lib/jackson-core-asl-1.4.0.jar:./../lib/jackson-mapper-asl-1.4.0.jar:./../lib/jamm-0.2.5.jar:./../lib/jline-0.9.94.jar:./../lib/json-simple-1.1.jar:./../lib/libthrift-0.6.jar:./../lib/log4j-1.2.16.jar:./../lib/servlet-api-2.5-20081211.jar:./../lib/slf4j-api-1.6.1.jar:./../lib/slf4j-log4j12-1.6.1.jar:./../lib/snakeyaml-1.6.jar:./../lib/snappy-java-1.0.3.jar
 Address DC  RackStatus State   LoadOwns   
  Token   
 127.0.0.1   datacenter1 rack1   Up Normal  8.91 KB 
 100.00% 10597065753338857570408052040129979696  
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3339) Invalid queries in Cassandra.Client causes subsequent, valid, queries to fail

2011-10-10 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3339.
-

Resolution: Invalid

This sounds like a Hector bug (or maybe a Thrift one), not a Cassandra one. 
You'd want to create an issue at https://github.com/rantav/hector/issues and/or 
use the Hector mailing list (more info at http://hector-client.org).

 Invalid queries in Cassandra.Client causes subsequent, valid, queries to fail
 -

 Key: CASSANDRA-3339
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3339
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 0.8.6
 Environment: Windows
Reporter: Ivo Ladage-van Doorn

 First of all, I'm quite new to Cassandra, so I hope that my analysis is 
 correct. 
 I am using the Hector client to perform queries on Cassandra. The problem is 
 that once I have invoked an invalid slice query with a null rowKey, subsequent 
 queries also fail with roughly the same error. So the first time I invoke the 
 invalid query, I get this exception:
 org.apache.thrift.protocol.TProtocolException: Required field 'key' was not 
 present! 
 Struct: get_slice_args(key:null, 
 column_parent:ColumnParent(column_family:AmdatuToken), 
 predicate:SlicePredicate(slice_range:SliceRange(start:, finish:, 
 reversed:false, count:100)), consistency_level:ONE)
at 
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:56)
at 
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:285)
at 
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:268)
...
 which is expected behavior. However, after invoking this invalid query, 
 subsequent valid calls also fail with roughly the same error:
 me.prettyprint.hector.api.exceptions.HCassandraInternalException: Cassandra 
 encountered an internal error processing this request: TApplicationError 
 type: 7 message:Required field 'key' was not present! Struct: 
 get_slice_args(key:null, column_parent:null, predicate:null, 
 consistency_level:ONE) 
 org.apache.felix.log.LogException: 
 me.prettyprint.hector.api.exceptions.HCassandraInternalException: 
 Cassandra encountered an internal error processing this request: 
 TApplicationError type: 7 message:Required field 'key' was not present! 
 Struct: get_slice_args(key:null, column_parent:null, predicate:null, 
 consistency_level:ONE)
 at 
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:29)
 at 
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$2.execute(KeyspaceServiceImpl.java:121)
 at 
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$2.execute(KeyspaceServiceImpl.java:114)
 ...
 In the case of Hector it goes downhill from there, ending in socket write 
 errors and marking the Cassandra host as being down.
 Now this is what I think happens:
 The Hector client uses the org.apache.cassandra.thrift.Cassandra.Client class 
 to execute the queries on Cassandra.
 When I perform a Thrift slice query from Hector, it invokes the get_slice 
 method in Cassandra.Client, which in turn invokes send_get_slice. In my case, 
 a bug in my own software caused an invocation of this method with a rowKey 
 equal to null. Although the rowKey is invalid, the call continues all the way 
 to send_get_slice, which looks like this (from 
 org.apache.cassandra.thrift.Cassandra):
 public void send_get_slice(ByteBuffer key, ColumnParent column_parent, 
 SlicePredicate predicate, ConsistencyLevel consistency_level) throws 
 org.apache.thrift.TException
 {
   oprot_.writeMessageBegin(new 
 org.apache.thrift.protocol.TMessage("get_slice", 
 org.apache.thrift.protocol.TMessageType.CALL, ++seqid_));
   get_slice_args args = new get_slice_args();
   args.setKey(key);
   args.setColumn_parent(column_parent);
   args.setPredicate(predicate);
   args.setConsistency_level(consistency_level);
   args.write(oprot_);
   oprot_.writeMessageEnd();
   oprot_.getTransport().flush();
 }
 The problem is that the TMessage is written to the output protocol on the 
 first line. When the arguments are subsequently written in args.write(oprot_), 
 it first calls validate(). The validate() method detects the null rowKey and 
 throws an exception:
 public void validate() throws org.apache.thrift.TException {
   // check for required fields
   if (key == null) {
 throw new org.apache.thrift.protocol.TProtocolException("Required field 
 'key' was not present! Struct: " + toString());
   }
   ...
 }
 
 Now Hector finally catches the exception and returns the Cassandra client to 
 
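 As a side note, a minimal client-side guard along these lines avoids 
 half-writing the Thrift message in the first place. This is a sketch only, 
 written against the 0.8 Cassandra.Client get_slice signature; the wrapper class 
 name is made up.
 {code}
 import java.nio.ByteBuffer;
 import java.util.List;
 import org.apache.cassandra.thrift.Cassandra;
 import org.apache.cassandra.thrift.ColumnOrSuperColumn;
 import org.apache.cassandra.thrift.ColumnParent;
 import org.apache.cassandra.thrift.ConsistencyLevel;
 import org.apache.cassandra.thrift.SlicePredicate;

 final class SafeSlice
 {
     // Validate arguments before anything is written to the transport, so an
     // invalid call cannot leave the connection in a desynchronized state.
     static List<ColumnOrSuperColumn> getSlice(Cassandra.Client client,
                                               ByteBuffer key,
                                               ColumnParent parent,
                                               SlicePredicate predicate,
                                               ConsistencyLevel cl) throws Exception
     {
         if (key == null || parent == null || predicate == null)
             throw new IllegalArgumentException("key, parent and predicate must not be null");
         return client.get_slice(key, parent, predicate, cl);
     }
 }
 {code}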

[jira] [Resolved] (CASSANDRA-3238) Issue with multi region ec2 and replication updates

2011-10-07 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3238.
-

Resolution: Not A Problem

Since the problem goes away with dynamic_snitch_badness_threshold, I'm resolving 
as Not A Problem. Feel free to reopen if you think there is still something to 
fix here.

 Issue with multi region ec2 and replication updates
 ---

 Key: CASSANDRA-3238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: Nick Bailey
Assignee: Vijay
Priority: Minor
 Fix For: 1.0.0


 Using the Ec2MultiRegionSnitch and updating replication settings for a 
 keyspace seems to cause some issues that require a rolling restart to fix. 
 The following was observed when updating a keyspace from SimpleStrategy to 
 NTS in a multi region environment:
 * All repairs would hang. Even repairs only against a keyspace that was not 
 updated.
 * Reads at CL.ONE would start to go across region
 After a rolling restart of the cluster, repairs started working correctly 
 again and reads stayed local to the region.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3141) SSTableSimpleUnsortedWriter call to ColumnFamily.serializedSize iterate through the whole columns

2011-10-05 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3141.
-

   Resolution: Not A Problem
Fix Version/s: (was: 0.8.8)

OK, closing this for now. If someone has evidence that there is a real need for 
optimization here, they can reopen.

 SSTableSimpleUnsortedWriter call to ColumnFamily.serializedSize iterate 
 through the whole columns
 -

 Key: CASSANDRA-3141
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3141
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.8.3
Reporter: Benoit Perroud
Priority: Minor
 Attachments: CachedSizeCF.patch


 Every time newRow is called, serializedSize iterates through all the columns 
 to compute the size.
 Once 1,000,000 columns exist in the CF, it becomes painful to repeat the same 
 computation on every iteration. Caching the size and incrementing it when a 
 column is added could be an option.
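 A rough sketch of that caching approach (field and method names are 
 assumptions, not the contents of CachedSizeCF.patch):
 {code}
 // Keep a running serialized size and bump it as columns are added,
 // so serializedSize() is O(1) instead of re-walking every column.
 class SizeCachingRow
 {
     private long cachedSerializedSize = 0;

     // Called whenever a column is appended to the in-memory row.
     void onColumnAdded(long columnSerializedSize)
     {
         cachedSerializedSize += columnSerializedSize;
     }

     // Used by newRow() in place of iterating over all columns.
     long serializedSize()
     {
         return cachedSerializedSize;
     }
 }
 {code}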

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3296) CsDef instead of CfDef in system_add_keyspace() function

2011-10-03 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3296.
-

   Resolution: Fixed
Fix Version/s: 0.8.7

Fixed in r1178325, thanks.

 CsDef instead of CfDef in system_add_keyspace() function
 

 Key: CASSANDRA-3296
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3296
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.6
 Environment: In this file:
 apache-cassandra-0.8.6-src/src/java/org/apache/cassandra/thrift/CassandraServer.java
 Around line #893
Reporter: Alexis Wilke
Priority: Trivial
 Fix For: 0.8.7


 throw new InvalidRequestException("CsDef (" + cf.getName() + ") had a keyspace 
 definition that did not match KsDef");
 The string starts with "CsDef (" when it should be "CfDef (".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira