[jira] [Updated] (CASSANDRA-4052) Add way to force the cassandra-cli to refresh its schema

2012-04-19 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4052:
--

 Reviewer: xedin
Fix Version/s: 1.1.1
 Assignee: Dave Brosius
   Labels: cli  (was: )

bq. To retain assume commands that have been applied, hold assumptions in a 
separate class that holds a map of these assumptions. Since we now have that, 
save these assumptions across separate invocations of the cli by storing them 
in a file under the ~/.cassandra-cli directory.

Nice!
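
For illustration only, here is a minimal sketch of the persisted-assumptions idea from the quoted 
comment, assuming a plain properties file; the class name, file name and key format are 
hypothetical, not necessarily what the attached patch does:

{code}
import java.io.*;
import java.util.Properties;

// Hypothetical sketch: keep "assume" settings in one map and persist them between cli runs.
public class CliAssumptions
{
    private static final File STORE =
            new File(System.getProperty("user.home"), ".cassandra-cli/assumptions.properties");
    private final Properties assumptions = new Properties();

    public void load() throws IOException
    {
        if (!STORE.exists())
            return;
        try (InputStream in = new FileInputStream(STORE))
        {
            assumptions.load(in);
        }
    }

    public void assume(String columnFamily, String what, String asType)
    {
        assumptions.setProperty(columnFamily + "." + what, asType);
    }

    public void save() throws IOException
    {
        STORE.getParentFile().mkdirs();
        try (OutputStream out = new FileOutputStream(STORE))
        {
            assumptions.store(out, "cassandra-cli assume commands");
        }
    }
}
{code}

On startup the cli would call load() and re-apply each stored assumption before executing user 
statements; save() would run whenever an assume command succeeds.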

 Add way to force the cassandra-cli to refresh its schema
 -

 Key: CASSANDRA-4052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4052
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tupshin Harper
Assignee: Dave Brosius
Priority: Minor
  Labels: cli
 Fix For: 1.1.1

 Attachments: 4052_refresh_schema.diff


 By design, the cassandra-cli caches the schema and doesn't refresh it when 
 various commands like describe keyspaces are run. This is reasonable, and 
 it is easy enough to restart the cli if necessary. However, this does lead 
 to confusion, since a new user can reasonably assume that describe keyspaces 
 will always show an accurate, current representation of the ring. We should find 
 a way to reduce the surprise (and lack of easy discoverability) of this 
 behaviour.
 I propose any one of the following (#1 is probably the easiest and most 
 likely):
 1) Add a command (that would be documented in the cli's help) to explicitly 
 refresh the schema (schema refresh, refresh schema, or anything similar).
 2) Always force a refresh of the schema when performing at least the 
 describe keyspaces command.
 3) Add a flag to cassandra-cli to explicitly enable schema caching. If that 
 flag is not passed, then schema caching will be disabled for that session. 
 This suggestion assumes that for simple deployments (few CFs, etc), schema 
 caching isn't very important to the performance of the cli.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-556) nodeprobe snapshot to support specific column families

2012-04-18 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-556:
-

Reviewer: jbellis
  Labels: jmx lhf  (was: lhf)

 nodeprobe snapshot to support specific column families
 --

 Key: CASSANDRA-556
 URL: https://issues.apache.org/jira/browse/CASSANDRA-556
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Were
Assignee: Dave Brosius
Priority: Minor
  Labels: jmx, lhf
 Fix For: 1.1.1

 Attachments: cf_snapshots_556.diff, cf_snapshots_556_2.diff, 
 cf_snapshots_556_2A.diff


 It would be good to support dumping specific column families via nodeprobe 
 for backup purposes.
 In my particular case the majority of cassandra data doesn't need to be 
 backed up except for a couple of column families containing user settings / 
 profiles etc.





[jira] [Updated] (CASSANDRA-4103) Add stress tool to binaries

2012-04-18 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4103:
--

Affects Version/s: (was: 1.1.1)
   (was: 1.0.8)
   (was: 1.2)
Fix Version/s: (was: 1.0.10)
   (was: 1.1.1)
   1.1.0

(Reverted from 1.0 branch.  Left it in 1.1.0.  Will update NEWS accordingly.)

 Add stress tool to binaries
 ---

 Key: CASSANDRA-4103
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4103
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Rick Branson
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.0

 Attachments: 0001-CASSANDRA-4103.patch


 It would be great to also get the stress tool packaged along with the 
 binaries. Many people don't even know it exists because it's not distributed 
 with them.





[jira] [Updated] (CASSANDRA-4140) Build stress classes in a location that allows tools/stress/bin/stress to find them

2012-04-18 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4140:
--

 Reviewer: nickmbailey
Affects Version/s: (was: 1.2)
   1.1.0
Fix Version/s: (was: 1.0.10)
   (was: 1.1.1)
   (was: 1.2)
   1.1.0

 Build stress classes in a location that allows tools/stress/bin/stress to 
 find them
 ---

 Key: CASSANDRA-4140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4140
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.1.0
Reporter: Nick Bailey
Assignee: Vijay
Priority: Trivial
 Fix For: 1.1.0

 Attachments: 0001-CASSANDRA-4140-v2.patch, 0001-CASSANDRA-4140.patch


 Right now it's hard to run stress from a checkout of trunk. You need to do 
 'ant artifacts' and then run the stress tool in the generated artifacts.
 A discussion on irc came up with the proposal to just move stress to the main 
 jar, put the stress/stressd bash scripts in bin/, and drop the tools 
 directory altogether. It will be easier for users to find that way and will 
 make running stress from a checkout much easier.





[jira] [Updated] (CASSANDRA-4171) cql3 ALTER TABLE foo WITH default_validation=int has no effect

2012-04-18 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4171:
--

Priority: Trivial  (was: Major)

 cql3 ALTER TABLE foo WITH default_validation=int has no effect
 --

 Key: CASSANDRA-4171
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4171
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
Affects Versions: 1.1.0
Reporter: paul cannon
Assignee: paul cannon
Priority: Trivial
  Labels: cql3
 Fix For: 1.1.0


 running the following with cql3:
 {noformat}
 CREATE TABLE test (foo text PRIMARY KEY) WITH default_validation=timestamp;
 ALTER TABLE test WITH default_validation=int;
 {noformat}
 does not actually change the default validation type of the CF. It does under 
 cql2.
 No error is thrown. Some properties *can* be successfully changed using ALTER 
 WITH, such as comment and gc_grace_seconds, but I haven't tested all of them. 
 It seems probable that default_validation is the only problematic one, since 
 it's the only (changeable) property which accepts CQL typenames.





[jira] [Updated] (CASSANDRA-4065) Bogus MemoryMeter liveRatio calculations

2012-04-18 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4065:
--

 Reviewer: jbellis
Affects Version/s: (was: 1.0.8)
   0.8.0
Fix Version/s: 1.1.0
 Assignee: Daniel Doubleday

 Bogus MemoryMeter liveRatio calculations
 

 Key: CASSANDRA-4065
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4065
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
Reporter: Daniel Doubleday
Assignee: Daniel Doubleday
Priority: Minor
 Fix For: 1.1.0


 I get strange cfs.liveRatios.
 A couple of mem meter runs seem to calculate bogus results: 
 {noformat}
 Tue 09:14:48 dd@blnrzh045:~$ grep 'setting live ratio to maximum of 64 
 instead of' /var/log/cassandra/system.log
  WARN [MemoryMeter:1] 2012-03-20 08:08:07,253 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of Infinity
  WARN [MemoryMeter:1] 2012-03-20 08:08:09,160 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of Infinity
  WARN [MemoryMeter:1] 2012-03-20 08:08:13,274 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of Infinity
  WARN [MemoryMeter:1] 2012-03-20 08:08:22,032 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of Infinity
  WARN [MemoryMeter:1] 2012-03-20 08:12:41,057 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of 67.11787351054079
  WARN [MemoryMeter:1] 2012-03-20 08:13:50,877 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of 112.58547951925435
  WARN [MemoryMeter:1] 2012-03-20 08:15:29,021 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of 193.36945063589877
  WARN [MemoryMeter:1] 2012-03-20 08:17:50,716 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of 348.45008340969434
 {noformat}
 Because meter runs never decrease liveRatio in Memtable (which seems strange 
 to me; if past calcs should be included for any reason, wouldn't averaging 
 make more sense?):
 {noformat}
 cfs.liveRatio = Math.max(cfs.liveRatio, newRatio);
 {noformat}
 Memtables are flushed every couple of secs:
 {noformat}
 ColumnFamilyStore.java (line 712) Enqueuing flush of 
 Memtable-BlobStore@935814661(1874540/149963200 serialized/live bytes, 202 ops)
 {noformat}
 Even though a saner liveRatio has been calculated after the bogus runs:
 {noformat}
 INFO [MemoryMeter:1] 2012-03-20 08:19:55,934 Memtable.java (line 198) 
 CFS(Keyspace='SmeetBlob', ColumnFamily='BlobStore') 
liveRatio is 64.0 (just-counted was 2.97165811895841).  calculation took 
 124ms for 58 columns
 {noformat}
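
 For illustration only, a hedged sketch of the moving-average alternative the reporter raises 
 above (the quoted Cassandra code keeps the maximum instead; the class name, alpha and clamp 
 bounds below are made up):
 {code}
// Hypothetical: blend each new measurement into the previous estimate instead of taking the max.
public final class LiveRatioEstimate
{
    private static final double ALPHA = 0.5;   // weight given to the newest measurement
    private volatile double liveRatio = 10.0;  // arbitrary starting guess

    public void update(double newRatio)
    {
        if (Double.isNaN(newRatio) || Double.isInfinite(newRatio))
            return;                                            // drop bogus measurements entirely
        double blended = ALPHA * newRatio + (1 - ALPHA) * liveRatio;
        liveRatio = Math.min(64.0, Math.max(1.0, blended));    // clamp to sane bounds
    }

    public double get()
    {
        return liveRatio;
    }
}
 {code}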





[jira] [Updated] (CASSANDRA-4018) Add column metadata to system columnfamilies

2012-04-18 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4018:
--

Fix Version/s: (was: 1.1.1)
   1.2

(Moving to 1.2 since it's going to require rewriting HH to use composites.)

 Add column metadata to system columnfamilies
 

 Key: CASSANDRA-4018
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4018
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.2


 CASSANDRA-3792 adds this to the schema CFs; we should modernize the other 
 system CFs as well





[jira] [Updated] (CASSANDRA-4154) CFRR wide row iterator does not handle tombstones well

2012-04-16 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4154:
--

Attachment: 4154.txt

 CFRR wide row iterator does not handle tombstones well
 --

 Key: CASSANDRA-4154
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4154
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
Reporter: Brandon Williams
Assignee: Jonathan Ellis
 Fix For: 1.1.0

 Attachments: 4154.txt


 If the last row is a tombstone, CFRR's wide row iterator will throw an 
 exception:
 {noformat}
 java.util.NoSuchElementException
 at com.google.common.collect.Iterables.getLast(Iterables.java:663)
 at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.maybeInit(ColumnFamilyRecordReader.java:441)
 at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.computeNext(ColumnFamilyRecordReader.java:467)
 at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.computeNext(ColumnFamilyRecordReader.java:413)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:137)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:132)
 at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:188)
 at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.getNextWide(CassandraStorage.java:140)
 at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.getNext(CassandraStorage.java:199)
 at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:187)
 at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:423)
 at 
 org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
 at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
 {noformat}
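
 For illustration, one plausible guard against the empty-page case seen above, assuming Guava's 
 Iterables (already shown in the trace); the helper name is hypothetical and this is not 
 necessarily what the attached 4154.txt does:
 {code}
import java.util.Collections;
import java.util.List;
import com.google.common.collect.Iterables;

public class LastColumnGuardDemo
{
    // Iterables.getLast(iterable) throws NoSuchElementException on an empty page
    // (e.g. when the last row's columns were all tombstoned away), so fall back to null.
    static <T> T lastOrNull(List<T> page)
    {
        return page.isEmpty() ? null : Iterables.getLast(page);
    }

    public static void main(String[] args)
    {
        List<String> emptyPage = Collections.emptyList();
        System.out.println(lastOrNull(emptyPage));   // prints null instead of throwing
        // Iterables.getLast(emptyPage);             // would throw NoSuchElementException
    }
}
 {code}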





[jira] [Updated] (CASSANDRA-4157) Allow KS + CF names up to 48 characters

2012-04-16 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4157:
--

Attachment: 4157.txt

 Allow KS + CF names up to 48 characters
 ---

 Key: CASSANDRA-4157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4157
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.0

 Attachments: 4157.txt


 CASSANDRA-2749 imposed a 32-character limit on KS and CF names.  We can be a 
 little more lenient than that and still be safe for path names (see 
 CASSANDRA-4110).





[jira] [Updated] (CASSANDRA-4145) NullPointerException when using sstableloader with PropertyFileSnitch configured

2012-04-13 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4145:
--

Component/s: Tools

 NullPointerException when using sstableloader with PropertyFileSnitch 
 configured
 

 Key: CASSANDRA-4145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4145
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
Affects Versions: 0.8.1
Reporter: Ji Cheng
Assignee: Ji Cheng
Priority: Minor
  Labels: bulkloader
 Fix For: 1.0.10, 1.1.0

 Attachments: 4145.txt


 I got a NullPointerException when using sstableloader on 1.0.6. The cluster 
 is using PropertyFileSnitch. The same configuration file is used for 
 sstableloader. 
 The problem is that if StorageService is initialized before DatabaseDescriptor, 
 PropertyFileSnitch will try to access StorageService.instance before it 
 has finished initializing.
 {code}
  ERROR 01:14:05,601 Fatal configuration error
 org.apache.cassandra.config.ConfigurationException: Error instantiating 
 snitch class 'org.apache.cassandra.locator.PropertyFileSnitch'.
 at 
 org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:607)
 at 
 org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:454)
 at 
 org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:306)
 at 
 org.apache.cassandra.service.StorageService.<init>(StorageService.java:187)
 at 
 org.apache.cassandra.service.StorageService.<clinit>(StorageService.java:190)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:183)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:106)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:62)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown 
 Source)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
 Source)
 at java.lang.reflect.Constructor.newInstance(Unknown Source)
 at 
 org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:589)
 ... 7 more
 Caused by: java.lang.NullPointerException
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.reloadConfiguration(PropertyFileSnitch.java:170)
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.<init>(PropertyFileSnitch.java:60)
 ... 12 more
 Error instantiating snitch class 
 'org.apache.cassandra.locator.PropertyFileSnitch'.
 Fatal configuration error; unable to start server.  See log for stacktrace.
 {code}
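
 To make the initialization-order cycle easier to follow, here is a self-contained, hypothetical 
 illustration; the nested classes are stand-ins for StorageService, DatabaseDescriptor and 
 PropertyFileSnitch, not Cassandra's real code:
 {code}
public class InitOrderDemo
{
    static class Registry                          // stand-in for StorageService
    {
        static final Registry instance = new Registry();
        Registry() { System.out.println("local DC: " + Config.localDc()); }
    }

    static class Config                            // stand-in for DatabaseDescriptor
    {
        static final Snitch snitch = new Snitch(); // created while Registry is still initializing
        static String localDc() { return "DC1"; }
    }

    static class Snitch                            // stand-in for PropertyFileSnitch
    {
        Snitch()
        {
            // Registry.instance is still null because Registry's static initializer
            // has not finished yet, so this line throws a NullPointerException.
            System.out.println(Registry.instance.toString());
        }
    }

    public static void main(String[] args)
    {
        Object first = Registry.instance;          // touching Registry before Config starts the cycle
    }
}
 {code}
 Touching the configuration class first (or deferring the snitch's access to the registry) breaks 
 the cycle.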





[jira] [Updated] (CASSANDRA-4145) NullPointerException when using sstableloader with PropertyFileSnitch configured

2012-04-13 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4145:
--

 Reviewer: jbellis
Affects Version/s: (was: 1.0.6)
   0.8.1
Fix Version/s: 1.1.0
   1.0.10
 Assignee: Ji Cheng
   Labels: bulkloader  (was: )

 NullPointerException when using sstableloader with PropertyFileSnitch 
 configured
 

 Key: CASSANDRA-4145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4145
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
Affects Versions: 0.8.1
Reporter: Ji Cheng
Assignee: Ji Cheng
Priority: Minor
  Labels: bulkloader
 Fix For: 1.0.10, 1.1.0

 Attachments: 4145.txt


 I got a NullPointerException when using sstableloader on 1.0.6. The cluster 
 is using PropertyFileSnitch. The same configuration file is used for 
 sstableloader. 
 The problem is that if StorageService is initialized before DatabaseDescriptor, 
 PropertyFileSnitch will try to access StorageService.instance before it 
 has finished initializing.
 {code}
  ERROR 01:14:05,601 Fatal configuration error
 org.apache.cassandra.config.ConfigurationException: Error instantiating 
 snitch class 'org.apache.cassandra.locator.PropertyFileSnitch'.
 at 
 org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:607)
 at 
 org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:454)
 at 
 org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:306)
 at 
 org.apache.cassandra.service.StorageService.<init>(StorageService.java:187)
 at 
 org.apache.cassandra.service.StorageService.<clinit>(StorageService.java:190)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:183)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:106)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:62)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown 
 Source)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
 Source)
 at java.lang.reflect.Constructor.newInstance(Unknown Source)
 at 
 org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:589)
 ... 7 more
 Caused by: java.lang.NullPointerException
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.reloadConfiguration(PropertyFileSnitch.java:170)
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.<init>(PropertyFileSnitch.java:60)
 ... 12 more
 Error instantiating snitch class 
 'org.apache.cassandra.locator.PropertyFileSnitch'.
 Fatal configuration error; unable to start server.  See log for stacktrace.
 {code}





[jira] [Updated] (CASSANDRA-4146) sstableloader should detect and report failures

2012-04-13 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4146:
--

Reviewer: yukim

 sstableloader should detect and report failures
 ---

 Key: CASSANDRA-4146
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4146
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.0.9
Reporter: Manish Zope
Assignee: Brandon Williams
Priority: Minor
  Labels: sstableloader, tools
 Fix For: 1.1.1

 Attachments: 4146.txt

   Original Estimate: 48h
  Remaining Estimate: 48h

 There are three cases where we have observed abnormal termination:
 1) An exception occurs while loading.
 2) The user terminates the loading process.
 3) Some node is down or unreachable, so sstableloader gets stuck and the user 
 has to terminate the process.
 In case of abnormal termination, the sstables added in that session remain as 
 they are on the cluster. If the user fixes the problem and starts the process 
 all over again, the data is duplicated until a major compaction is triggered.
 sstableloader could maintain a session while loading sstables into the 
 cluster, so that on abnormal termination it triggers an event that deletes 
 the sstables loaded in that session.
 It would also be great to have a configurable timeout for sstableloader, so 
 that if the process is stuck for longer than the timeout, it can terminate 
 itself.
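
 For illustration only, a hedged sketch of the configurable-timeout idea proposed above; the 
 ProgressWatchdog class is hypothetical and not the attached 4146.txt:
 {code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical watchdog: callers report progress; if nothing happens for too long, give up
// with a non-zero exit code instead of hanging forever.
public class ProgressWatchdog
{
    private final AtomicLong lastProgress = new AtomicLong(System.nanoTime());
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    public void onProgress()                       // call whenever bytes are streamed successfully
    {
        lastProgress.set(System.nanoTime());
    }

    public void start(final long timeoutSeconds)
    {
        timer.scheduleAtFixedRate(new Runnable()
        {
            public void run()
            {
                long idle = TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - lastProgress.get());
                if (idle > timeoutSeconds)
                {
                    System.err.println("No progress for " + idle + "s, aborting load");
                    System.exit(1);
                }
            }
        }, 1, 1, TimeUnit.SECONDS);
    }
}
 {code}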





[jira] [Updated] (CASSANDRA-4079) Check SSTable range before running cleanup

2012-04-13 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4079:
--

Reviewer: slebresne  (was: bcoverston)

 Check SSTable range before running cleanup
 --

 Key: CASSANDRA-4079
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4079
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benjamin Coverston
Assignee: Jonathan Ellis
Priority: Minor
  Labels: compaction
 Fix For: 1.1.1

 Attachments: 4079.txt


 Before running a cleanup compaction on an SSTable we should check the range 
 to see if the SSTable falls into the range we want to remove. If it doesn't, 
 we can just mark the SSTable as compacted and be done with it; if it does, we 
 can no-op.
 This will not help with STCS, but for LCS, and perhaps some other strategies, 
 we may see a benefit here after topology changes.





[jira] [Updated] (CASSANDRA-4009) Increase usage of Metrics and flesh out o.a.c.metrics

2012-04-13 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4009:
--

Reviewer: brandon.williams
Assignee: Yuki Morishita  (was: Brandon Williams)

 Increase usage of Metrics and flesh out o.a.c.metrics
 -

 Key: CASSANDRA-4009
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4009
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.1.1


 With CASSANDRA-3671 we have begun using the Metrics packages to expose stats 
 in a new JMX structure, intended to be more user-friendly (for example, you 
 don't need to know what a StorageProxy is or does.)  This ticket serves as a 
 parent for subtasks to finish fleshing out the rest of the enhanced metrics.





[jira] [Updated] (CASSANDRA-4045) BOF fails when some nodes are down

2012-04-13 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4045:
--

Reviewer: yukim

 BOF fails when some nodes are down
 --

 Key: CASSANDRA-4045
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4045
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
  Labels: hadoop
 Fix For: 1.1.1

 Attachments: 4045.txt


 As the summary says, we should allow jobs to complete when some targets are 
 unavailable.





[jira] [Updated] (CASSANDRA-4142) OOM Exception during repair session with LeveledCompactionStrategy

2012-04-12 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4142:
--

Affects Version/s: (was: 1.0.6)
   1.0.0
Fix Version/s: 1.1.1
 Assignee: Sylvain Lebresne

 OOM Exception during repair session with LeveledCompactionStrategy
 --

 Key: CASSANDRA-4142
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4142
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.0
 Environment: OS: Linux CentOs 6 
 JDK: Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
 Node configuration:
 Quad-core
 10 GB RAM
 Xmx set to 2,5 GB (as computed by default).
Reporter: Romain Hardouin
Assignee: Sylvain Lebresne
 Fix For: 1.1.1


 We encountered an OOM exception on 2 nodes during a repair session.
 Our CFs are set up to use LeveledCompactionStrategy and SnappyCompressor.
 These two options used together may be the key to the problem.
 Despite setting -XX:+HeapDumpOnOutOfMemoryError, no dump has been 
 generated.
 Nonetheless, a memory analysis on a live node doing a repair reveals a 
 hotspot: an ArrayList of SSTableBoundedScanner objects which appears to 
 contain as many entries as there are SSTables on disk. 
 This ArrayList consumes 786 MB of the heap space for 5757 objects. Therefore 
 each object is about 140 KB.
 Eclipse Memory Analyzer's dominator tree shows that 99% of an 
 SSTableBoundedScanner object's memory is consumed by a 
 CompressedRandomAccessReader, which contains two big byte arrays.
 Cluster information:
 9 nodes
 Each node handles 35 GB (RandomPartitioner)
 This JIRA was created following this discussion:
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Why-so-many-SSTables-td7453033.html





[jira] [Updated] (CASSANDRA-4141) Looks like Serializing cache broken in 1.1

2012-04-11 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4141:
--

Reviewer: xedin

 Looks like Serializing cache broken in 1.1
 --

 Key: CASSANDRA-4141
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4141
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
 Fix For: 1.1.0

 Attachments: 0001-CASSANDRA-4141.patch


 I get the following error while setting the row cache to 1500 MB:
 INFO 23:27:25,416 Initializing row cache with capacity of 1500 MBs and 
 provider org.apache.cassandra.cache.SerializingCacheProvider
 java.lang.OutOfMemoryError: Java heap space
 Dumping heap to java_pid26402.hprof ...
 I haven't spent a lot of time looking into the issue, but it looks like the SC 
 constructor has 
 .initialCapacity(capacity)
 .maximumWeightedCapacity(capacity)
 which is 1500 MB





[jira] [Updated] (CASSANDRA-4032) memtable.updateLiveRatio() is blocking, causing insane latencies for writes

2012-04-10 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4032:
--

Attachment: 4032-v4.txt

you're right; v4 attached w/ Set approach.

(used NBHS instead of CSLS since the latter requires defining a comparator.  
I'm not sure the overhead is substantially different.)

 memtable.updateLiveRatio() is blocking, causing insane latencies for writes
 ---

 Key: CASSANDRA-4032
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4032
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
 Fix For: 1.1.0

 Attachments: 4032-v3.txt, 4032-v4.txt, CASSANDRA-4032-1.1.0-v1.txt, 
 CASSANDRA-4032-1.1.0-v2.txt


 Reproduce by just starting a fresh cassandra with a heap large enough for 
 live ratio calculation (which is {{O(n)}}) to be insanely slow, and then 
 running {{./bin/stress -d host -n1 -t10}}. With a large enough heap 
 and default flushing behavior this is bad enough that stress gets timeouts.
 Example (the "blocked for" message is from my debug logging added around submit()):
 {code}
  INFO [MemoryMeter:1] 2012-03-09 15:07:30,857 Memtable.java (line 198) 
 CFS(Keyspace='Keyspace1', ColumnFamily='Standard1') liveRatio is 
 8.89014894083727 (just-counted was 8.89014894083727).  calculation took 
 28273ms for 1320245 columns
  WARN [MutationStage:8] 2012-03-09 15:07:30,857 Memtable.java (line 209) 
 submit() blocked for: 231135
 {code}
 The calling code was written assuming a RejectedExecutionException is thrown, 
 but it's not, because {{DebuggableThreadPoolExecutor}} installs a blocking 
 rejection handler.
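
 For illustration only, a simplified sketch of the difference described above between an executor 
 that throws RejectedExecutionException and one whose rejection handler blocks the caller; this is 
 not Cassandra's DebuggableThreadPoolExecutor, just the general mechanism:
 {code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo
{
    // A "blocking" rejection policy: instead of throwing, make the submitting thread
    // wait until there is room in the queue.
    static final RejectedExecutionHandler BLOCK = new RejectedExecutionHandler()
    {
        public void rejectedExecution(Runnable task, ThreadPoolExecutor executor)
        {
            try
            {
                executor.getQueue().put(task);     // the caller stalls here
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
            }
        }
    };

    public static void main(String[] args)
    {
        ThreadPoolExecutor abortive = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.AbortPolicy());

        Runnable slow = new Runnable()
        {
            public void run()
            {
                try { Thread.sleep(1000); } catch (InterruptedException e) { }
            }
        };

        abortive.execute(slow);                    // occupies the single worker thread
        abortive.execute(slow);                    // sits in the one queue slot
        try
        {
            abortive.execute(slow);                // no room left: AbortPolicy throws
        }
        catch (RejectedExecutionException e)
        {
            System.out.println("rejected; the caller can simply skip this liveRatio update");
        }
        abortive.shutdown();
        // With BLOCK installed instead of AbortPolicy, the third execute() would have stalled
        // the calling thread (here, the write path) until a queue slot freed up.
    }
}
 {code}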





[jira] [Updated] (CASSANDRA-4134) Do not send hints before a node is fully up

2012-04-10 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4134:
--

 Priority: Minor  (was: Major)
Affects Version/s: (was: 1.0.9)

I'm not sure what, if anything, we can do about this; the assumption that you 
won't send a node updates for columnfamilies it doesn't know about runs pretty 
deep.

 Do not send hints before a node is fully up
 ---

 Key: CASSANDRA-4134
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4134
 Project: Cassandra
  Issue Type: Bug
Reporter: Joaquin Casares
Priority: Minor

 After seeing this on a cluster and working with Pavel, we observed that the 
 following errors disappear after all migrations have been applied:
 {noformat}
 ERROR [MutationStage:1] 2012-04-09 18:16:00,240 RowMutationVerbHandler.java 
 (line 61) Error in row mutation
 org.apache.cassandra.db.UnserializableColumnFamilyException: Couldn't find 
 cfId=1028
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:129)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:401)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.deserialize(RowMutation.java:409)
   at org.apache.cassandra.db.RowMutation.fromBytes(RowMutation.java:357)
   at 
 org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:42)
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 and
 ERROR [ReadStage:69] 2012-04-09 18:16:01,715 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[ReadStage:69,5,main]
 java.lang.IllegalArgumentException: Unknown ColumnFamily content_indexes in 
 keyspace linkcurrent
   at org.apache.cassandra.config.Schema.getComparator(Schema.java:223)
   at 
 org.apache.cassandra.db.ColumnFamily.getComparatorFor(ColumnFamily.java:300)
   at 
 org.apache.cassandra.db.ReadCommand.getComparator(ReadCommand.java:92)
   at 
 org.apache.cassandra.db.SliceByNamesReadCommand.<init>(SliceByNamesReadCommand.java:44)
   at 
 org.apache.cassandra.db.SliceByNamesReadCommandSerializer.deserialize(SliceByNamesReadCommand.java:106)
   at 
 org.apache.cassandra.db.SliceByNamesReadCommandSerializer.deserialize(SliceByNamesReadCommand.java:74)
   at 
 org.apache.cassandra.db.ReadCommandSerializer.deserialize(ReadCommand.java:132)
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}
 It seems that as soon as the correct Migration is applied, the Hints are 
 accepted.





[jira] [Updated] (CASSANDRA-3690) Streaming CommitLog backup

2012-04-09 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3690:
--

Attachment: 3690-v6.txt

v6 attached.  The primary changes made are fixes to the Future logic: the only 
way you'll get a null Future back is if no archive task was submitted; if it 
errors out, you'll get an ExecutionException when you call get(), but never a 
null.

Updated getArchivingSegmentNames javadoc to emphasize that it does NOT include 
failed archive attempts. Not sure if this is what was intended.
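
For illustration only, a hedged sketch of that Future contract with hypothetical names (a null 
return means nothing was submitted; a failed task surfaces as an ExecutionException from get(), 
never as a null Future):

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ArchiveFutureDemo
{
    static final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Hypothetical stand-in for the archiving call: null only when no task is submitted at all.
    static Future<Void> maybeArchive(boolean archivingEnabled, final String segment)
    {
        if (!archivingEnabled)
            return null;
        return executor.submit(new Callable<Void>()
        {
            public Void call()
            {
                throw new RuntimeException("archiving " + segment + " failed");
            }
        });
    }

    public static void main(String[] args) throws InterruptedException
    {
        Future<Void> f = maybeArchive(true, "CommitLog-1234.log");
        try
        {
            f.get();                              // the failure shows up here...
        }
        catch (ExecutionException e)
        {
            System.out.println("archive failed: " + e.getCause().getMessage());
        }
        executor.shutdown();                      // ...but f itself was never null
    }
}
{code}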

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3690-v2.patch, 
 0001-CASSANDRA-3690-v4.patch, 0001-CASSANDRA-3690-v5.patch, 
 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch, 3690-v6.txt


 Problems with the current SST backups:
 1) The current backup doesn't allow us to restore to a point in time (within an SST).
 2) The current SST implementation needs the backup to read from the filesystem, 
 and hence adds IO on the normal operational disks.
 3) In 1.0 we removed the flush interval and size settings that control when a 
 flush is triggered per CF; for some use cases with few writes it becomes 
 increasingly difficult to time backups right.
 4) Use cases that need BI external to Cassandra need the data at regular 
 intervals rather than waiting for longer or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) A more complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected, we switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads the file, and 
 sends the updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutation.
 Side note: (not related to this patch as such) the agent which will take the 
 incremental backup is planned to be open sourced soon (name: Priam).





[jira] [Updated] (CASSANDRA-556) nodeprobe snapshot to support specific column families

2012-04-09 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-556:
-

Fix Version/s: 1.1.1

 nodeprobe snapshot to support specific column families
 --

 Key: CASSANDRA-556
 URL: https://issues.apache.org/jira/browse/CASSANDRA-556
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Were
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1


 It would be good to support dumping specific column families via nodeprobe 
 for backup purposes.
 In my particular case the majority of cassandra data doesn't need to be 
 backed up except for a couple of column families containing user settings / 
 profiles etc.





[jira] [Updated] (CASSANDRA-3883) CFIF WideRowIterator only returns batch size columns

2012-04-09 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3883:
--

Attachment: 3883-v2.txt

v2 attached w/ that approach

 CFIF WideRowIterator only returns batch size columns
 

 Key: CASSANDRA-3883
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3883
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
Reporter: Brandon Williams
 Fix For: 1.1.0

 Attachments: 3883-v1.txt, 3883-v2.txt


 Most evident with the word count, where there are 1250 'word1' items in two 
 rows (1000 in one, 250 in another) and it counts 198 with the batch size set 
 to 99.





[jira] [Updated] (CASSANDRA-3883) CFIF WideRowIterator only returns batch size columns

2012-04-09 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3883:
--

Reviewer: brandon.williams  (was: tjake)
Assignee: Jonathan Ellis

 CFIF WideRowIterator only returns batch size columns
 

 Key: CASSANDRA-3883
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3883
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
Reporter: Brandon Williams
Assignee: Jonathan Ellis
 Fix For: 1.1.0

 Attachments: 3883-v1.txt, 3883-v2.txt


 Most evident with the word count, where there are 1250 'word1' items in two 
 rows (1000 in one, 250 in another) and it counts 198 with the batch size set 
 to 99.





[jira] [Updated] (CASSANDRA-4130) update snitch documentation

2012-04-06 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4130:
--

Attachment: 4130.txt

 update snitch documentation
 ---

 Key: CASSANDRA-4130
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4130
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.0.10

 Attachments: 4130.txt








[jira] [Updated] (CASSANDRA-3868) Remove or nullify replicate_on_write option

2012-04-06 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3868:
--

Fix Version/s: (was: 1.1.0)

 Remove or nullify replicate_on_write option
 ---

 Key: CASSANDRA-3868
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3868
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.8.0
Reporter: Brandon Williams
 Attachments: 3868.txt


 My understanding from Sylvain is that setting this option to false is rather 
 dangerous/stupid, and you should basically never do it.  So 1.1 is a good 
 time to get rid of it, or make it a no-op.





[jira] [Updated] (CASSANDRA-3868) Remove or nullify replicate_on_write option

2012-04-06 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3868:
--

Fix Version/s: 1.2

 Remove or nullify replicate_on_write option
 ---

 Key: CASSANDRA-3868
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3868
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.8.0
Reporter: Brandon Williams
 Fix For: 1.2

 Attachments: 3868.txt


 My understanding from Sylvain is that setting this option to false is rather 
 dangerous/stupid, and you should basically never do it.  So 1.1 is a good 
 time to get rid of it, or make it a no-op.





[jira] [Updated] (CASSANDRA-4118) ConcurrentModificationException in ColumnFamily.updateDigest(ColumnFamily.java:294) (cassandra 1.0.8)

2012-04-05 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4118:
--

Fix Version/s: 1.1.0
   1.0.10

 ConcurrentModificationException in 
 ColumnFamily.updateDigest(ColumnFamily.java:294)  (cassandra 1.0.8)
 --

 Key: CASSANDRA-4118
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4118
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.8
 Environment: two nodes, replication factor=2
Reporter: Zklanu Ryś
Assignee: Vijay
 Fix For: 1.0.10, 1.1.0


 Sometimes when reading data I receive it without any exception, but I can 
 see in the Cassandra logs that there is an error:
 ERROR [ReadRepairStage:58] 2012-04-05 12:04:35,732 
 AbstractCassandraDaemon.java (line 139) Fatal exception in thread 
 Thread[ReadRepairStage:58,5,main]
 java.util.ConcurrentModificationException
 at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
 at java.util.AbstractList$Itr.next(AbstractList.java:343)
 at 
 org.apache.cassandra.db.ColumnFamily.updateDigest(ColumnFamily.java:294)
 at org.apache.cassandra.db.ColumnFamily.digest(ColumnFamily.java:288)
 at 
 org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:102)
 at 
 org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:30)
 at 
 org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.runMayThrow(ReadCallback.java:227)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
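
 As a generic illustration of the exception class in the trace above (not the actual Cassandra 
 race, which involves read repair touching the column collection concurrently), a fail-fast 
 iterator throws as soon as the underlying list is structurally modified mid-iteration:
 {code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CmeDemo
{
    public static void main(String[] args)
    {
        List<String> columns = new ArrayList<String>(Arrays.asList("a", "b", "c"));
        for (String name : columns)
        {
            if (name.equals("b"))
                columns.add("d");   // structural modification during iteration: the iterator's
                                    // next step throws java.util.ConcurrentModificationException
        }
    }
}
 {code}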





[jira] [Updated] (CASSANDRA-4128) stress tool hangs forever on timeout or error

2012-04-05 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4128:
--

 Reviewer: brandon.williams
 Priority: Minor  (was: Major)
Fix Version/s: 1.1.1
 Assignee: Pavel Yaskevich

 stress tool hangs forever on timeout or error
 -

 Key: CASSANDRA-4128
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4128
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: This happens in every version of the stress tool that I 
  know of, including when calling it from the dtests.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich
Priority: Minor
  Labels: stress
 Fix For: 1.1.1


 The stress tool hangs forever if it encounters a timeout or exception. CTRL-C 
 will kill it if run from a terminal, but when running it from a script (like 
 a dtest) it hangs the script forever. It would be great for scripting purposes if a 
 reasonable error code were returned when things go wrong.
 To duplicate, clear out /var/lib/cassandra and then run stress 
 --operation=READ.





[jira] [Updated] (CASSANDRA-4114) Default read_repair_chance value is wrong

2012-04-04 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4114:
--

 Reviewer: vijay2...@yahoo.com
  Component/s: Tools
Affects Version/s: (was: 1.0.7)
   1.0.0
Fix Version/s: 1.1.0
 Assignee: Jonathan Ellis

 Default read_repair_chance value is wrong
 -

 Key: CASSANDRA-4114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4114
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.0.0
Reporter: Manoj Kanta Mainali
Assignee: Jonathan Ellis
Priority: Trivial
 Fix For: 1.1.0


 The documentation says that the default read_repair_chance value is 0.1, and it 
 is also declared so in CFMetaData. However, creating a column family 
 with create column family foo via the cli and checking with show keyspaces 
 shows that read_repair_chance=1.0. This also happens when creating the 
 column family through Hector.
 Going through the code, I find that in the CfDef class, the constructor without 
 any parameters sets read_repair_chance to 1. Changing this value to 0.1 
 seems to create a column family with a 0.1 read_repair_chance. The best fix 
 might be to remove it from CfDef, as read_repair_chance is set to the 
 default value in CFMetaData.
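
 For illustration only, a hedged sketch of the pattern described above, with made-up class names: 
 a client-side struct whose no-argument constructor bakes in its own default, so the server-side 
 default never applies.
 {code}
public class DefaultShadowingDemo
{
    static final double SERVER_DEFAULT = 0.1;       // stand-in for the CFMetaData default

    static class CfDefLike                           // stand-in for the Thrift-generated CfDef
    {
        double readRepairChance;
        CfDefLike() { readRepairChance = 1.0; }      // client-side default baked into the constructor
    }

    public static void main(String[] args)
    {
        CfDefLike def = new CfDefLike();             // the caller never touches read_repair_chance
        // The server cannot distinguish "unset" from "explicitly 1.0", so 1.0 wins over 0.1.
        System.out.println("effective read_repair_chance = " + def.readRepairChance);
    }
}
 {code}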





[jira] [Updated] (CASSANDRA-4114) Default read_repair_chance value is wrong

2012-04-04 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4114:
--

Attachment: 4114.txt

I think that since the cli has behaved this way since 1.0.0, changing it now 
might surprise people who didn't read NEWS (and thus don't know that it was 
supposed to change to default of 0.1).  So I propose fixing this in 1.1.0 
instead. 

(For completeness, I note that cql {{CREATE COLUMNFAMILY}} does default to 0.1 
correctly since it does not build its CFMetadata objects from Thrift.)


 Default read_repair_chance value is wrong
 -

 Key: CASSANDRA-4114
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4114
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.0.0
Reporter: Manoj Kanta Mainali
Assignee: Jonathan Ellis
Priority: Trivial
 Fix For: 1.1.0

 Attachments: 4114.txt


 The documentation says that the default read_repair_chance value is 0.1, and it 
 is also declared so in CFMetaData. However, creating a column family 
 with create column family foo via the cli and checking with show keyspaces 
 shows that read_repair_chance=1.0. This also happens when creating the 
 column family through Hector.
 Going through the code, I find that in the CfDef class, the constructor without 
 any parameters sets read_repair_chance to 1. Changing this value to 0.1 
 seems to create a column family with a 0.1 read_repair_chance. The best fix 
 might be to remove it from CfDef, as read_repair_chance is set to the 
 default value in CFMetaData.





[jira] [Updated] (CASSANDRA-4054) SSTableImport and SSTableExport do not serialize row level deletion

2012-04-04 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4054:
--

 Priority: Minor  (was: Major)
Fix Version/s: (was: 1.1.0)
   1.2
 Assignee: Yuki Morishita

 SSTableImport and SSTableExport do not serialize row level deletion
 -

 Key: CASSANDRA-4054
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4054
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.5
Reporter: Zhu Han
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2


 SSTableImport and SSTableExport do not serialize/deserialize the row-level 
 deletion info to/from the json file. This brings back the deleted data after a 
 restore from the json file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2479) move the tests off of CollatingOrderPreservingPartitioner

2012-04-04 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2479:
--

Fix Version/s: (was: 1.1.0)
   1.2

 move the tests off of CollatingOrderPreservingPartitioner
 -

 Key: CASSANDRA-2479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2479
 Project: Cassandra
  Issue Type: Improvement
Reporter: Eric Evans
Priority: Trivial
  Labels: cql
 Fix For: 1.2


 The configuration for unit and system tests currently makes use of 
 CollatingOrderPreservingPartitioner, which is problematic for testing key 
 type validation (COPP supports only UTF8 keys).





[jira] [Updated] (CASSANDRA-3785) Support slice with exclusive start and stop

2012-04-04 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3785:
--

Reviewer: jbellis

 Support slice with exclusive start and stop
 ---

 Key: CASSANDRA-3785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3785
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
  Labels: cql3
 Fix For: 1.1.1

 Attachments: 3785.patch


 Currently, slices are always start and end inclusive. However, for CQL 3.0, 
 we already differentiate between inclusivity/exclusivity for the row key and 
 for the component of composite columns. It would be nice to always support 
 that distinction.





[jira] [Updated] (CASSANDRA-3702) CQL count() needs paging support

2012-04-04 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3702:
--

Labels: lhf  (was: )

 CQL count() needs paging support
 

 Key: CASSANDRA-3702
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3702
 Project: Cassandra
  Issue Type: Sub-task
  Components: Tools
Reporter: Nick Bailey
  Labels: lhf
 Fix For: 1.1.1


 Doing
 {noformat}
 SELECT count(*) from cf;
 {noformat}
 will max out at 10,000 because that is the default limit for cql queries. 
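The paging idea, sketched with an in-memory stand-in for the client call (not the CQL or Thrift API): count in fixed-size pages and resume from the last key seen, so no single query has to exceed the limit.
{code}
import java.util.ArrayList;
import java.util.List;

// Sketch of the paging idea only; the in-memory "store" stands in for a
// hypothetical client fetch and is not Cassandra's API.
public class PagedCount
{
    static final int PAGE_SIZE = 1000;

    // Returns up to pageSize keys strictly greater than 'after' ("" = start).
    static List<String> fetchPage(List<String> store, String after, int pageSize)
    {
        List<String> page = new ArrayList<>();
        for (String key : store)   // store is assumed sorted, as keys are on disk
        {
            if (key.compareTo(after) > 0)
            {
                page.add(key);
                if (page.size() == pageSize)
                    break;
            }
        }
        return page;
    }

    static long countAll(List<String> store)
    {
        long total = 0;
        String last = "";
        while (true)
        {
            List<String> page = fetchPage(store, last, PAGE_SIZE);
            total += page.size();
            if (page.size() < PAGE_SIZE)
                return total;                       // a short page means we reached the end
            last = page.get(page.size() - 1);       // resume after the last key seen
        }
    }

    public static void main(String[] args)
    {
        List<String> store = new ArrayList<>();
        for (int i = 0; i < 25_000; i++)
            store.add(String.format("key%08d", i));
        System.out.println(countAll(store));        // 25000, well past a 10,000 cap
    }
}
{code}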

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3818) disabling m-a-t for fun and profit (and other ant stuff)

2012-04-04 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3818:
--

Reviewer: stephenc

 disabling m-a-t for fun and profit (and other ant stuff)
 

 Key: CASSANDRA-3818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3818
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Packaging
Affects Versions: 1.0.7
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor
  Labels: build
 Fix For: 1.1.1

 Attachments: v1-0001-CASSANDRA-3818-keep-init-in-init-target.txt, 
 v1-0002-clean-up-avro-generation-dependencies-and-dependants.txt, 
 v1-0003-remove-useless-build-subprojects-target.txt, 
 v1-0004-group-test-targets-under-test-all.txt, 
 v1-0005-add-property-to-disable-maven-junk.txt, 
 v1-0006-add-property-to-disable-rat-license-header-writing.txt, 
 v1-0007-don-t-needlessly-regenerate-thrift-code.txt


 It should be possible to disable maven-ant-tasks for environments with more 
 rigid dependency control, or where network access isn't available.
 Patches to follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4090) cqlsh can't handle python being a python3

2012-04-03 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4090:
--

  Component/s: Tools
 Priority: Trivial  (was: Major)
Affects Version/s: (was: 1.0.8)
Fix Version/s: 1.1.0
   1.0.10
   Labels: cqlsh  (was: )

 cqlsh can't handle python being a python3
 -

 Key: CASSANDRA-4090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4090
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: On Archlinux, where Python3 installations are default 
 (most distros currently use Python2 as default now)
 {code}
 $ ls -l `which python` 
 lrwxrwxrwx 1 root root 7 Nov 21 09:05 /usr/bin/python -> python3
 {code}
Reporter: Andrew Ash
Assignee: Andrew Ash
Priority: Trivial
  Labels: cqlsh
 Fix For: 1.0.10, 1.1.0

 Attachments: 4090.patch.txt, python3-fix.patch


 cqlsh fails to run when {{python}} is a Python 3, with this error message:
 {code}
 andrew@spite:~/src/cassandra-trunk/bin $ ./cqlsh 
   File ./cqlsh, line 97
 except ImportError, e:
   ^
 SyntaxError: invalid syntax
 andrew@spite:~/src/cassandra-trunk/bin $ 
 {code}
 The error occurs because the cqlsh script checks for a default installation 
 of python that is older than a certain version, but not for one newer that is 
 incompatible (e.g. Python3).  To fix this, I update the logic to only run 
 {{python}} if its version is at least 2.5 but before 3.0.  If that version of 
 python is in range, roll with it; otherwise try python2.6, 
 python2.7, then python2.5 (no change from before).
 This works on my installation, where {{python}} executes python 3.2.2, 
 and doesn't break backwards compatibility for distributions that haven't made 
 the jump to Python3 as default yet.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4100) Make scrub and cleanup operations throttled

2012-04-03 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4100:
--

 Reviewer: yukim
Affects Version/s: (was: 1.0.8)
   Labels: compaction  (was: )

 Make scrub and cleanup operations throttled
 ---

 Key: CASSANDRA-4100
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4100
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: compaction
 Fix For: 1.0.10

 Attachments: 0001-CASSANDRA-4100.patch


 Scrub and cleanup operations are not throttled; it would be nice to throttle 
 them, otherwise we are likely to run into IO issues while running them on a 
 live cluster.
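A minimal sketch of what throttling could look like here, in the spirit of the existing compaction throughput limit; the class and method names are made up for illustration, not Cassandra's actual throttle:
{code}
// Simple byte-rate limiter: bounds the average rate since construction by
// sleeping whenever we get ahead of the target.  A sketch, not a token bucket.
public class SimpleThrottle
{
    private final long bytesPerSecond;
    private long bytesSoFar = 0;
    private final long startNanos = System.nanoTime();

    public SimpleThrottle(long bytesPerSecond)
    {
        this.bytesPerSecond = bytesPerSecond;
    }

    /** Call after processing 'bytes'; sleeps if we are ahead of the target rate. */
    public synchronized void throttle(long bytes) throws InterruptedException
    {
        bytesSoFar += bytes;
        long elapsedNanos = System.nanoTime() - startNanos;
        long targetNanos = (long) (bytesSoFar / (double) bytesPerSecond * 1_000_000_000L);
        if (targetNanos > elapsedNanos)
            Thread.sleep((targetNanos - elapsedNanos) / 1_000_000L);
    }

    public static void main(String[] args) throws InterruptedException
    {
        SimpleThrottle throttle = new SimpleThrottle(16L * 1024 * 1024);  // 16 MB/s
        long start = System.nanoTime();
        for (int i = 0; i < 64; i++)
            throttle.throttle(1024 * 1024);   // pretend we scrubbed 1 MB
        System.out.printf("64 MB took %.1f s%n", (System.nanoTime() - start) / 1e9);  // ~4 s
    }
}
{code}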

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4103) Add stress tool to binaries

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4103:
--

Reviewer: brandon.williams

 Add stress tool to binaries
 ---

 Key: CASSANDRA-4103
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4103
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.0.8, 1.1.1, 1.2
Reporter: Rick Branson
Assignee: Vijay
Priority: Minor
 Attachments: 0001-CASSANDRA-4103.patch


 It would be great to also get the stress tool packaged along with the 
 binaries. Many people don't even know it exists because it's not distributed 
 with them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3642) Failed to delete any Index.db on Windows

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3642:
--

Fix Version/s: (was: 1.0.10)
 Assignee: (was: Sylvain Lebresne)

I no longer see problems relating to index components after CASSANDRA-3967.  
Instead, I see this:

[junit] ERROR 13:41:33,813 Unable to delete 
build\test\cassandra\data\system\schema_columnfamilies\system-schema_columnfamilies-hc-11-Data.db
 (it will be removed on server restart; we'll also retry after GC)



 Failed to delete any Index.db on Windows
 

 Key: CASSANDRA-3642
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3642
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: Windows Server 2008 R2 64bit
 Java SE 7u1 64bit
Reporter: Viktor Jevdokimov
Priority: Minor
 Attachments: 3642.patch


 After upgrading Cassandra from 0.8.7 to 1.0.6 under Windows Server 2008 R2 64bit 
 with disk access mode mmap_index_only, Cassandra fails to delete any 
 *-Index.db files after compaction or scrub:
 ERROR 13:43:17,490 Fatal exception in thread Thread[NonPeriodicTasks:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Failed to delete 
 D:\cassandra\data\data\system\LocationInfo-g-29-Index.db
 at 
 org.apache.cassandra.utils.FBUtilities.unchecked(FBUtilities.java:689)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
  Source)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
  Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source) Caused by: 
 java.io.IOException: Failed to delete 
 D:\cassandra\data\data\system\LocationInfo-g-29-Index.db
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:54)
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:44)
 at org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:141)
 at 
 org.apache.cassandra.io.sstable.SSTableDeletingTask.runMayThrow(SSTableDeletingTask.java:81)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 ... 8 more
 ERROR 17:20:09,701 Fatal exception in thread Thread[NonPeriodicTasks:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Failed to delete 
 D:\cassandra\data\data\Keyspace1\ColumnFamily1-hc-840-Index.db
 at 
 org.apache.cassandra.utils.FBUtilities.unchecked(FBUtilities.java:689)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
  Source)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
  Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source) Caused by: 
 java.io.IOException: Failed to delete D:\cassandra\data\data\ 
 Keyspace1\ColumnFamily1-hc-840-Index.db
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:54)
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:44)
 at org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:141)
 at 
 org.apache.cassandra.io.sstable.SSTableDeletingTask.runMayThrow(SSTableDeletingTask.java:81)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 ... 8 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3966) KeyCacheKey and RowCacheKey to use raw byte[]

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3966:
--

Affects Version/s: (was: 1.0.8)
   1.0.0
Fix Version/s: (was: 1.0.10)
   1.1.1

bq. Wondering if it will make sense to use FreeableMemory instead of 
ByteBuffer (if SerializingCache is chosen) just for RowCacheKey

I don't think that would work -- the reason we can use FM for SC is that when 
we call cache.get, we deserialize and thus don't need to worry about reference 
counts anymore.  I don't think it's feasible to try to keep track of the 
referents of the key.  (Consider, for instance, a thread looping over the cache 
entries to save it.  Meanwhile, a cache entry gets evicted and we free it.  The 
saving thread will now happily segfault.)

 KeyCacheKey and RowCacheKey to use raw byte[]
 -

 Key: CASSANDRA-3966
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3966
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1


 We can just store the raw byte[] instead of a ByteBuffer.
 After reading the mail
 http://www.mail-archive.com/dev@cassandra.apache.org/msg03725.html
 each ByteBuffer costs 48 bytes of housekeeping overhead, which can be removed 
 by implementing hashCode and equals directly in KeyCacheKey and RowCacheKey:
 http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/nio/ByteBuffer.java#ByteBuffer.hashCode%28%29
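A sketch of the idea (not the actual KeyCacheKey/RowCacheKey code): hold the raw key bytes and implement hashCode/equals over the array, dropping the per-entry ByteBuffer wrapper.
{code}
import java.util.Arrays;

public final class RawBytesCacheKey
{
    private final byte[] key;

    public RawBytesCacheKey(byte[] key)
    {
        // defensive copy so later mutation of the caller's array can't corrupt the cache
        this.key = Arrays.copyOf(key, key.length);
    }

    @Override
    public boolean equals(Object o)
    {
        if (this == o)
            return true;
        if (!(o instanceof RawBytesCacheKey))
            return false;
        return Arrays.equals(key, ((RawBytesCacheKey) o).key);
    }

    @Override
    public int hashCode()
    {
        return Arrays.hashCode(key);
    }

    public static void main(String[] args)
    {
        RawBytesCacheKey a = new RawBytesCacheKey(new byte[]{ 1, 2, 3 });
        RawBytesCacheKey b = new RawBytesCacheKey(new byte[]{ 1, 2, 3 });
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode());  // true
    }
}
{code}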

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3920) tests for cqlsh

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3920:
--

Fix Version/s: (was: 1.0.10)
   1.1.1

 tests for cqlsh
 ---

 Key: CASSANDRA-3920
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3920
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
  Labels: cqlsh
 Fix For: 1.1.1


 Cqlsh has become big enough and tries to cover enough situations that it's 
 time to start acting like a responsible adult and make this bugger some unit 
 tests to guard against regressions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3710) Add a configuration option to disable snapshots

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3710:
--

Fix Version/s: (was: 1.0.10)

I'm fine with adding an autosnapshot configuration variable defaulting to true 
that controls whether to snapshot before truncate and drop.
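Illustrative only, assuming a flag along the lines of auto_snapshot in cassandra.yaml; the helper names below are placeholders, not the real ColumnFamilyStore/DatabaseDescriptor API:
{code}
// Sketch: gate the pre-truncate/drop snapshot behind a config flag that defaults to true.
public class TruncateSketch
{
    static boolean autoSnapshot = true;   // would come from cassandra.yaml in practice

    static void snapshot(String cf, String tag)
    {
        System.out.println("snapshotting " + cf + " as " + tag);
    }

    static void discardSSTables(String cf)
    {
        System.out.println("discarding sstables for " + cf);
    }

    static void truncate(String cf)
    {
        if (autoSnapshot)                 // snapshot unless explicitly disabled
            snapshot(cf, "pre-truncate-" + System.currentTimeMillis());
        discardSSTables(cf);
    }

    public static void main(String[] args)
    {
        truncate("Standard1");            // snapshots first
        autoSnapshot = false;             // e.g. an embedded test cluster without JNA
        truncate("Standard1");            // data is simply dropped
    }
}
{code}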

 Add a configuration option to disable snapshots
 ---

 Key: CASSANDRA-3710
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3710
 Project: Cassandra
  Issue Type: New Feature
Reporter: Brandon Williams
Priority: Minor
 Attachments: Cassandra107Patch_TestModeV1.txt


 Let me first say, I hate this idea.  It gives cassandra the ability to 
 permanently delete data at a large scale without any means of recovery.  
 However, I've seen this requested multiple times, and it is in fact useful in 
 some scenarios, such as when your application is using an embedded cassandra 
 instance for testing and need to truncate, which without JNA will timeout 
 more often than not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3942) ColumnFamilyRecordReader can report progress > 100%

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3942:
--

 Reviewer: jbellis
Affects Version/s: (was: 0.8.10)
   0.6
 Assignee: T Jake Luciani

 ColumnFamilyRecordReader can report progress > 100%
 ---

 Key: CASSANDRA-3942
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3942
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.6
Reporter: T Jake Luciani
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 1.0.10


 CFRR.getProgress() can return a value > 1.0 since the totalRowCount is an 
 estimate.
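One obvious guard, shown in isolation and not claimed to be the committed fix: clamp the reported fraction, since totalRowCount is only an estimate.
{code}
// Sketch only, not the actual ColumnFamilyRecordReader.getProgress() implementation.
public class ProgressClamp
{
    static float getProgress(long rowsRead, long estimatedTotalRows)
    {
        if (estimatedTotalRows <= 0)
            return 0f;
        // cap at 1.0 so an underestimated total never reports > 100%
        return Math.min(1.0f, (float) rowsRead / estimatedTotalRows);
    }

    public static void main(String[] args)
    {
        System.out.println(getProgress(1200, 1000));  // 1.0, not 1.2
        System.out.println(getProgress(500, 1000));   // 0.5
    }
}
{code}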

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4049) Add generic way of adding SSTable components required custom compaction strategy

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4049:
--

Fix Version/s: (was: 1.0.10)
   1.1.1
 Assignee: Piotr Kołaczkowski
   Labels: compaction  (was: )

 Add generic way of adding SSTable components required custom compaction 
 strategy
 

 Key: CASSANDRA-4049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4049
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
Priority: Minor
  Labels: compaction
 Fix For: 1.1.1

 Attachments: compaction_strategy_cleanup.patch, component_patch.diff


 CFS compaction strategy coming up in the next DSE release needs to store some 
 important information in Tombstones.db and RemovedKeys.db files, one per 
 sstable. However, currently Cassandra issues warnings when these files are 
 found in the data directory. Additionally, when switched to 
 SizeTieredCompactionStrategy, the files are left in the data directory after 
 compaction.
 The attached patch adds new components to the Component class so Cassandra 
 knows about those files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4078) StackOverflowError when upgrading to 1.0.8 from 0.8.10

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4078:
--

Attachment: 4078-asserts-v3.txt

alternate patch attached that converts the ordering check in SSTW.beforeAppend 
to an assert

 StackOverflowError when upgrading to 1.0.8 from 0.8.10
 --

 Key: CASSANDRA-4078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4078
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.10
 Environment: OS: Linux xps.openfin 2.6.35.13-91.fc14.i686 #1 SMP Tue 
 May 3 13:36:36 UTC 2011 i686 i686 i386 GNU/Linux
 Java: JVM vendor/version: Java HotSpot(TM) Server VM/1.6.0_31
Reporter: Wenjun
Assignee: paul cannon
Priority: Minor
 Fix For: 1.0.10

 Attachments: 4078-asserts-v3.txt, 4078.add-asserts.txt, 
 4078.patch2.txt, cassandra.yaml.1.0.8, cassandra.yaml.8.10, keycheck.txt, 
 system.log, system.log.0326, system.log.0326-02


 Hello
 I am trying to upgrade our 1-node setup from 0.8.10 to 1.0.8 and seeing the 
 following exception when starting up 1.0.8.  We have been running 0.8.10 
 without any issues.
  
 Attached is the entire log file during startup of 1.0.8.  There are 2 
 exceptions:
 1. StackOverflowError (line 2599)
 2. InstanceAlreadyExistsException (line 3632)
 I tried run scrub under 0.8.10 first, it did not help.  Also, I tried 
 dropping the column family which caused the exception, it just got the same 
 exceptions from another column family.
 Thanks

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4110) Relax path length requirement for non-Windows platforms

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4110:
--

Attachment: 4110.txt

 Relax path length requirement for non-Windows platforms
 ---

 Key: CASSANDRA-4110
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4110
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.1.0
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.0

 Attachments: 4110.txt


 As described at the bottom of CASSANDRA-2749, we only need to worry about 
 total path length on Windows.  For other platforms we only need to check the 
 filename length.
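A sketch of the relaxed check, assuming a 255-character limit on both counts; the exact constants and helper are illustrative, not the ones Cassandra uses:
{code}
import java.io.File;

// Only Windows cares about the full path length; elsewhere only the
// individual file name length matters.
public class PathLengthCheck
{
    static final int WINDOWS_MAX_PATH = 255;   // assumed limit, historical Windows value
    static final int MAX_FILENAME = 255;       // assumed limit, common filesystem value
    static final boolean IS_WINDOWS = System.getProperty("os.name").toLowerCase().contains("windows");

    static boolean isTooLong(File f)
    {
        if (IS_WINDOWS)
            return f.getAbsolutePath().length() > WINDOWS_MAX_PATH;
        return f.getName().length() > MAX_FILENAME;
    }

    public static void main(String[] args)
    {
        File f = new File("/var/lib/cassandra/data/Keyspace1/Standard1-hc-1-Data.db");
        System.out.println(isTooLong(f) ? "too long" : "ok");
    }
}
{code}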

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4112) nodetool cleanup giving exception

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4112:
--

Priority: Major  (was: Blocker)

Are you running with assertions disabled?

 nodetool cleanup giving exception
 -

 Key: CASSANDRA-4112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4112
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.9
 Environment: Ubuntu LTS 10.04, OpenJDK 1.6.0_20
Reporter: Shoaib

 We just recently started using version 1.0.9, previously we were using tiered 
 compaction because of a bug in 1.0.8 (not letting us use leveled compaction) 
 and now since moving to 1.0.9 we have started using leveled compaction.
 Trying to do a cleanup we are getting the following exception:
 root@hk1adsdbp001:~# nodetool -h localhost cleanup 
 Error occured during cleanup
 java.util.concurrent.ExecutionException: java.util.NoSuchElementException
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
 at java.util.concurrent.FutureTask.get(FutureTask.java:111)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:204)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:240)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:988)
 at 
 org.apache.cassandra.service.StorageService.forceTableCleanup(StorageService.java:1639)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:226)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:251)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:857)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:795)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1450)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:90)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1285)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1383)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:807)
 at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 Caused by: java.util.NoSuchElementException
 at java.util.ArrayList$Itr.next(ArrayList.java:757)
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.replace(LeveledManifest.java:196)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:147)
 at 
 org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:495)
 at 
 org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:235)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:1010)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:802)
 

[jira] [Updated] (CASSANDRA-4112) nodetool cleanup giving exception

2012-04-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4112:
--

Attachment: 4112.txt

LCS incorrectly assumed that cleanup would always result in the same number of 
sstables as before, which is not the case (if no keys in an sstable still belong 
to the node post-cleanup, that sstable is left out entirely).

Fix attached.
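A sketch of the failure mode and the safer shape, with a toy manifest; this is not the attached patch, just the "don't pair old and new sstables one-to-one" idea:
{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Replacing N old sstables with M new ones where M may be smaller (even zero)
// after cleanup: pairing the two lists with a shared iterator throws
// NoSuchElementException; tracking levels per sstable avoids the assumption.
public class ManifestReplaceSketch
{
    private final Map<String, Integer> levels = new HashMap<>();

    void add(String sstable, int level)
    {
        levels.put(sstable, level);
    }

    /** Remove the compacted sstables, then add however many replacements exist. */
    void replace(List<String> removed, List<String> added)
    {
        int level = 0;
        for (String sstable : removed)
        {
            Integer l = levels.remove(sstable);
            if (l != null)
                level = l;                // remember where the old data lived
        }
        for (String sstable : added)      // may legitimately be empty after cleanup
            levels.put(sstable, level);
    }

    public static void main(String[] args)
    {
        ManifestReplaceSketch manifest = new ManifestReplaceSketch();
        manifest.add("a-Data.db", 1);
        manifest.add("b-Data.db", 1);
        // cleanup removed every key in b-Data.db, so only one replacement exists
        manifest.replace(List.of("a-Data.db", "b-Data.db"), List.of("a2-Data.db"));
        System.out.println(manifest.levels);   // {a2-Data.db=1}
    }
}
{code}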

 nodetool cleanup giving exception
 -

 Key: CASSANDRA-4112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4112
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.9
 Environment: Ubuntu LTS 10.04, OpenJDK 1.6.0_20
Reporter: Shoaib
  Labels: compaction
 Fix For: 1.0.9

 Attachments: 4112.txt


 We just recently started using version 1.0.9, previously we were using tiered 
 compaction because of a bug in 1.0.8 (not letting us use leveled compaction) 
 and now since moving to 1.0.9 we have started using leveled compaction.
 Trying to do a cleanup we are getting the following exception:
 root@test:~# nodetool -h localhost cleanup 
 Error occured during cleanup
 java.util.concurrent.ExecutionException: java.util.NoSuchElementException
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
 at java.util.concurrent.FutureTask.get(FutureTask.java:111)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:204)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:240)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:988)
 at 
 org.apache.cassandra.service.StorageService.forceTableCleanup(StorageService.java:1639)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:226)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:251)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:857)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:795)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1450)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:90)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1285)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1383)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:807)
 at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 Caused by: java.util.NoSuchElementException
 at java.util.ArrayList$Itr.next(ArrayList.java:757)
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.replace(LeveledManifest.java:196)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:147)
 at 
 org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:495)
 at 
 

[jira] [Updated] (CASSANDRA-4097) Classes in org.apache.cassandra.deps:avro:1.4.0-cassandra-1 clash with core Avro classes

2012-03-29 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4097:
--

 Priority: Minor  (was: Major)
Affects Version/s: (was: 1.0.8)
   0.7.0

ls lib/*avro*

 Classes in org.apache.cassandra.deps:avro:1.4.0-cassandra-1 clash with core 
 Avro classes
 

 Key: CASSANDRA-4097
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4097
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Andrew Swan
Priority: Minor

 Cassandra has this dependency:
 {code:title=build.xml}...
 <dependency groupId="org.apache.cassandra.deps" artifactId="avro" 
 version="1.4.0-cassandra-1"/>
 ...{code}
 Unfortunately this JAR file contains classes in the {{org.apache.avro}} 
 package that are incompatible with classes of the same fully-qualified name 
 in the current release of Avro. For example, the inner class 
 {{org.apache.avro.Schema$Parser}} found in Avro 1.6.1 is missing from the 
 Cassandra version of that class. This makes it impossible to have both 
 Cassandra and the latest Avro version on the classpath (my use case is an 
 application that embeds Cassandra but also uses Avro 1.6.1 for unrelated 
 serialization purposes). A simple and risk-free solution would be to change 
 the package declaration of Cassandra's Avro classes from {{org.apache.avro}} 
 to (say) {{org.apache.cassandra.avro}}, assuming that the above dependency is 
 only used by Cassandra and no other projects (which seems a reasonable 
 assumption given its name).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3911) Basic QoS support for helping reduce OOMing cluster

2012-03-29 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3911:
--

Reviewer: brandon.williams

 Basic QoS support for helping reduce OOMing cluster
 ---

 Key: CASSANDRA-3911
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3911
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Goffinet
Assignee: Harish Doddi
Priority: Minor
 Fix For: 1.2

 Attachments: CASSANDRA-3911-trunk.txt


 We'd like to propose adding some basic QoS features to Cassandra. There can 
 be a lot to be done here but for v1 to keep things less invasive, and still 
 provide basics we would like to contribute the following features and see if 
 the community thinks this is OK.
 We would set these on server (cassandra.yaml). If threshold is crossed, we 
 throw an exception up to the client.
 1) Limit how many rows a client can fetch over RPC through multi-get.
 2) Limit how many columns may be returned (if count > N, throw an exception 
 before processing).
 3) Limit how many rows and columns a client can try to batch mutate.
 This can be added in our Thrift logic, before any processing can be done. The 
 big reason why we want to do this, is so that customers don't shoot 
 themselves in the foot, by making mistakes or not knowing how many columns 
 they might have returned.
 We can build logic like this into a basic client, but I propose one of the 
 features we might want in Cassandra is support for not being able to OOM a 
 node. We've done lots of work around memtable flushing, dropping messages, 
 etc.
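A sketch of the proposed guard rails, rejecting oversized requests before any processing; the threshold names mirror the proposal but are placeholders, not actual cassandra.yaml settings:
{code}
import java.util.List;
import java.util.Map;

// The exception below stands in for whatever would be surfaced to the Thrift client.
public class RequestLimits
{
    static final int MAX_MULTIGET_KEYS = 1_000;       // 1) rows per multiget
    static final int MAX_COLUMNS_PER_SLICE = 10_000;  // 2) columns per read
    static final int MAX_MUTATIONS = 10_000;          // 3) rows x columns per batch

    static class RequestTooLargeException extends RuntimeException
    {
        RequestTooLargeException(String msg) { super(msg); }
    }

    static void validateMultiget(List<byte[]> keys, int requestedColumnCount)
    {
        if (keys.size() > MAX_MULTIGET_KEYS)
            throw new RequestTooLargeException(keys.size() + " keys > " + MAX_MULTIGET_KEYS);
        if (requestedColumnCount > MAX_COLUMNS_PER_SLICE)
            throw new RequestTooLargeException(requestedColumnCount + " columns > " + MAX_COLUMNS_PER_SLICE);
    }

    static void validateBatch(Map<byte[], Integer> columnsPerRow)
    {
        long total = 0;
        for (int columns : columnsPerRow.values())
            total += columns;
        if (total > MAX_MUTATIONS)
            throw new RequestTooLargeException(total + " mutations > " + MAX_MUTATIONS);
    }

    public static void main(String[] args)
    {
        validateMultiget(List.of("k1".getBytes(), "k2".getBytes()), 500);   // fine
        try
        {
            validateMultiget(List.of("k1".getBytes()), 1_000_000);          // rejected up front
        }
        catch (RequestTooLargeException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
{code}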

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4098) Listing wide-rows from CLI crashes Cassandra

2012-03-29 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4098:
--

Component/s: Tools
Description: 
If a user attempts to list a column family from the CLI that contains a 
wide-row (e.g. 10 million columns).  It crashes hangs the CLI and then 
Cassandra eventually crashes with an OoM.  We should introduce a default limit 
on columns when listing a column family.

(patch on its way)

  was:

If a user attempts to list a column family from the CLI that contains a 
wide-row (e.g. 10 million columns).  It crashes hangs the CLI and then 
Cassandra eventually crashes with an OoM.  We should introduce a default limit 
on columns when listing a column family.

(patch on its way)

   Priority: Minor  (was: Major)
   Assignee: Brian ONeill

What version did we add the column limit code in?

 Listing wide-rows from CLI crashes Cassandra 
 -

 Key: CASSANDRA-4098
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4098
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Brian ONeill
Assignee: Brian ONeill
Priority: Minor
 Attachments: test_data.cl, trunk-4098.txt


 If a user attempts to list a column family from the CLI that contains a 
 wide row (e.g. 10 million columns), it crashes/hangs the CLI and then 
 Cassandra eventually crashes with an OoM.  We should introduce a default 
 limit on columns when listing a column family.
 (patch on its way)
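The proposed behaviour, sketched with a lazy iterator standing in for the row (not the CLI code); the limit of 100 is an arbitrary placeholder for whatever default the patch picks:
{code}
import java.util.Iterator;

public class BoundedRowListing
{
    static final int DEFAULT_COLUMN_LIMIT = 100;

    /** Lazily "generates" a huge row; nothing is materialised up front. */
    static Iterator<String> wideRow(long columns)
    {
        return new Iterator<String>()
        {
            long next = 0;
            public boolean hasNext() { return next < columns; }
            public String next() { return "col" + next++; }
        };
    }

    static long list(Iterator<String> columns, int limit)
    {
        long shown = 0;
        while (columns.hasNext() && shown < limit)
        {
            columns.next();   // the CLI would print name/value here
            shown++;
        }
        return shown;         // stop at the limit instead of draining 10M columns
    }

    public static void main(String[] args)
    {
        System.out.println(list(wideRow(10_000_000L), DEFAULT_COLUMN_LIMIT));  // 100
    }
}
{code}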

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4096) mlockall() returned code is ignored w/o assertions

2012-03-29 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4096:
--

Attachment: 4096.txt

The assert is redundant anyway since JNA will check return value and errno for 
us. Patch attached.
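A sketch of checking the return code explicitly (requires JNA on the classpath; MCL_CURRENT = 1 is the Linux value, so treat it as an assumption elsewhere). This is not Cassandra's actual CLibrary class:
{code}
import com.sun.jna.Native;

// Honour the mlockall() result even when assertions are disabled (-ea absent).
public class MlockallCheck
{
    private static final int MCL_CURRENT = 1;   // Linux constant; platform-dependent

    static
    {
        Native.register("c");   // bind the native method below to libc
    }

    private static native int mlockall(int flags);

    public static void lockMemory()
    {
        int rc = mlockall(MCL_CURRENT);
        if (rc == 0)
            System.out.println("JNA mlockall successful");
        else
            // errno tells us why (e.g. ENOMEM when RLIMIT_MEMLOCK is too low)
            System.err.println("mlockall failed, errno=" + Native.getLastError());
    }

    public static void main(String[] args)
    {
        lockMemory();
    }
}
{code}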

 mlockall() returned code is ignored w/o assertions
 --

 Key: CASSANDRA-4096
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4096
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
  Labels: jna
 Attachments: 4096.txt


 We log that mlockall() was successful only based on the lack of an assertion 
 failure, so for anyone running w/o {{-ea}} we are lying about mlockall() 
 succeeding.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4096) mlockall() returned code is ignored w/o assertions

2012-03-29 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4096:
--

Reviewer: scode
Priority: Minor  (was: Major)
Assignee: Jonathan Ellis
  Labels: jna  (was: )

 mlockall() returned code is ignored w/o assertions
 --

 Key: CASSANDRA-4096
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4096
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Jonathan Ellis
Priority: Minor
  Labels: jna
 Attachments: 4096.txt


 We log that mlockall() was successful only based on the lack of an assertion 
 failure, so for anyone running w/o {{-ea}} we are lying about mlockall() 
 succeeding.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4095) Internal error processing get_slice (NullPointerException)

2012-03-29 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4095:
--

Attachment: 4095.txt

Thanks for the report, John.  I think your analysis is spot on.  Patch attached 
that does not assume row.cf is non-null.
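The guard in isolation, as a sketch; Row and ColumnFamily here are stand-in holders, not the real o.a.c.db classes, and this is not the attached patch verbatim:
{code}
// Treat both "no row" and "row with no column family data" as zero live columns.
public class LiveColumnGuard
{
    static final class ColumnFamily
    {
        final int liveColumns;
        ColumnFamily(int liveColumns) { this.liveColumns = liveColumns; }
        int getLiveColumnCount() { return liveColumns; }
    }

    static final class Row
    {
        final ColumnFamily cf;   // may be null when the row exists but has no data
        Row(ColumnFamily cf) { this.cf = cf; }
    }

    static int liveColumnsInRow(Row row)
    {
        return (row == null || row.cf == null) ? 0 : row.cf.getLiveColumnCount();
    }

    public static void main(String[] args)
    {
        System.out.println(liveColumnsInRow(null));                          // 0
        System.out.println(liveColumnsInRow(new Row(null)));                 // 0, the NPE case
        System.out.println(liveColumnsInRow(new Row(new ColumnFamily(3))));  // 3
    }
}
{code}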

 Internal error processing get_slice (NullPointerException)
 --

 Key: CASSANDRA-4095
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4095
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.8
 Environment: Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
Reporter: John Laban
Priority: Minor
 Attachments: 4095.txt


 I get this pretty regularly.  It seems to happen transiently on multiple 
 nodes in my cluster, every so often, and goes away.
 ERROR [Thrift:45] 2012-03-26 19:59:12,024 Cassandra.java (line 3041) Internal 
 error processing get_slice
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.SliceFromReadCommand.maybeGenerateRetryCommand(SliceFromReadCommand.java:76)
 at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:724)
 at 
 org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:564)
 at 
 org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:128)
 at 
 org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:283)
 at 
 org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:365)
 at 
 org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:326)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 The line in question is (I think) the one below, so it looks like the column 
 family reference for a row can sometimes be null?
 int liveColumnsInRow = row != null ? row.cf.getLiveColumnCount() : 0;
 Here is my column family (on 1.0.8):
 ColumnFamily: WorkQueue (Super)
   Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Columns sorted by: 
 org.apache.cassandra.db.marshal.UTF8Type/org.apache.cassandra.db.marshal.UTF8Type
   Row cache size / save period in seconds / keys to save : 0.0/0/all
   Row Cache Provider: 
 org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider
   Key cache size / save period in seconds: 0.0/0
   GC grace seconds: 0
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.0
   Replicate on write: false
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4095) Internal error processing get_slice (NullPointerException)

2012-03-29 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4095:
--

Affects Version/s: (was: 1.0.8)
   1.0.2
Fix Version/s: 1.1.0
   1.0.9

 Internal error processing get_slice (NullPointerException)
 --

 Key: CASSANDRA-4095
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4095
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.2
 Environment: Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
Reporter: John Laban
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.0.9, 1.1.0

 Attachments: 4095.txt


 I get this pretty regularly.  It seems to happen transiently on multiple 
 nodes in my cluster, every so often, and goes away.
 ERROR [Thrift:45] 2012-03-26 19:59:12,024 Cassandra.java (line 3041) Internal 
 error processing get_slice
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.SliceFromReadCommand.maybeGenerateRetryCommand(SliceFromReadCommand.java:76)
 at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:724)
 at 
 org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:564)
 at 
 org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:128)
 at 
 org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:283)
 at 
 org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:365)
 at 
 org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:326)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 The line in question is (I think) the one below, so it looks like the column 
 family reference for a row can sometimes be null?
 int liveColumnsInRow = row != null ? row.cf.getLiveColumnCount() : 0;
 Here is my column family (on 1.0.8):
 ColumnFamily: WorkQueue (Super)
   Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Columns sorted by: 
 org.apache.cassandra.db.marshal.UTF8Type/org.apache.cassandra.db.marshal.UTF8Type
   Row cache size / save period in seconds / keys to save : 0.0/0/all
   Row Cache Provider: 
 org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider
   Key cache size / save period in seconds: 0.0/0
   GC grace seconds: 0
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.0
   Replicate on write: false
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3755) NPE on invalid CQL DELETE command

2012-03-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3755:
--

Reviewer: slebresne  (was: urandom)

Dave, did you want to take a stab at that approach?

 NPE on invalid CQL DELETE command
 -

 Key: CASSANDRA-3755
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3755
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: paul cannon
Assignee: Dave Brosius
Priority: Minor
  Labels: cql
 Fix For: 1.0.9

 Attachments: unknown_cf.diff


 The CQL command {{delete from k where key='bar';}} causes Cassandra to hit a 
 NullPointerException when the k column family does not exist, and it 
 subsequently closes the Thrift connection instead of reporting an IRE or 
 whatever. This is probably wrong.
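A sketch of the expected behaviour, assuming a placeholder schema map and exception type rather than Cassandra's real CFMetaData/InvalidRequestException plumbing: validate the column family up front and report the error instead of letting an NPE escape.
{code}
import java.util.Map;
import java.util.Set;

public class DeleteValidationSketch
{
    static class InvalidRequestException extends Exception
    {
        InvalidRequestException(String why) { super(why); }
    }

    static void validateColumnFamily(Map<String, Set<String>> schema, String keyspace, String cf)
            throws InvalidRequestException
    {
        Set<String> cfs = schema.get(keyspace);
        if (cfs == null)
            throw new InvalidRequestException("unconfigured keyspace " + keyspace);
        if (!cfs.contains(cf))
            throw new InvalidRequestException("unconfigured columnfamily " + cf);
    }

    public static void main(String[] args)
    {
        Map<String, Set<String>> schema = Map.of("Keyspace1", Set.of("Standard1"));
        try
        {
            validateColumnFamily(schema, "Keyspace1", "k");   // the CF from the bug report
        }
        catch (InvalidRequestException e)
        {
            // reported to the client instead of an NPE that drops the connection
            System.out.println("error: " + e.getMessage());
        }
    }
}
{code}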

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4042) add caching to CQL CF options

2012-03-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4042:
--

Reviewer: xedin  (was: jbellis)
Assignee: Sylvain Lebresne  (was: Pavel Yaskevich)

 add caching to CQL CF options
 ---

 Key: CASSANDRA-4042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4042
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Pavel Yaskevich
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.1.0

 Attachments: 4042_v2.txt, CASSANDRA-4042.patch


 Caching option is missing from CQL ColumnFamily options.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3974) Per-CF TTL

2012-03-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3974:
--

Fix Version/s: 1.2

 Per-CF TTL
 --

 Key: CASSANDRA-3974
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3974
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 1.2


 Per-CF TTL would allow compaction optimizations (drop an entire sstable's 
 worth of expired data) that we can't do with per-column.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4079) Check SSTable range before running cleanup

2012-03-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4079:
--

Fix Version/s: (was: 1.1.0)
   1.1.1

 Check SSTable range before running cleanup
 --

 Key: CASSANDRA-4079
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4079
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benjamin Coverston
Assignee: Jonathan Ellis
Priority: Minor
  Labels: compaction
 Fix For: 1.1.1

 Attachments: 4079.txt


 Before running a cleanup compaction on an SSTable we should check the range 
 to see if the SSTable falls into the range we want to remove. If it doesn't 
 we can just mark the SSTable as compacted and be done with it, if it does, we 
 can no-op.
 Will not help with STCS, but for LCS, and perhaps some others we may see a 
 benefit here after topology changes.
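A sketch of the range test with plain long tokens and no ring-wraparound handling (so not the actual patch): if the sstable's token span never intersects the ranges the node keeps, cleanup would remove every key in it; if the span is fully covered, cleanup has nothing to remove.
{code}
import java.util.List;

public class CleanupRangeCheck
{
    static final class Range
    {
        final long left, right;   // half-open (left, right]
        Range(long left, long right) { this.left = left; this.right = right; }
        boolean contains(long token) { return token > left && token <= right; }
    }

    static boolean intersects(long first, long last, List<Range> owned)
    {
        for (Range r : owned)
            if (first <= r.right && last > r.left)
                return true;
        return false;
    }

    static boolean fullyCovered(long first, long last, List<Range> owned)
    {
        // simplification: assumes a single owned range covers the whole span
        for (Range r : owned)
            if (r.contains(first) && r.contains(last))
                return true;
        return false;
    }

    public static void main(String[] args)
    {
        List<Range> owned = List.of(new Range(0, 100));
        System.out.println(intersects(150, 200, owned));   // false: whole sstable is removable
        System.out.println(fullyCovered(10, 90, owned));    // true: nothing to clean, skip it
    }
}
{code}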

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4094) MS.getCommandPendingTasks returns a double

2012-03-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4094:
--

Labels: lhf  (was: )

 MS.getCommandPendingTasks returns a double
 --

 Key: CASSANDRA-4094
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4094
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams
Priority: Trivial
  Labels: lhf
 Fix For: 1.2


 This makes no sense, since you can't have a partial task.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3617) Clean up and optimize Message

2012-03-26 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3617:
--

   Reviewer: jbellis
Component/s: Core
   Assignee: Yuki Morishita  (was: Jonathan Ellis)

I've gotten this to pass unit tests and demonstrate that the approach is sound. 
 Patches are up at https://github.com/jbellis/cassandra/tree/3617-4 with 
hopefully meaningful commit messages.

However, to work in a cluster this requires filling in all the 
IVersionedSerializer serializedSize methods that we left unimplemented in the 
past, which we could get away with because MessagingService would serialize to 
a byte[] (Message.body) before passing to OutboundTCPConnection.  Now we do 
need the serializedSize method to work, since we rely on that to avoid having 
to do that extra copy-to-byte[].

(Where we rely on Thrift for message serialization as in 
RangeSliceCommandSerializer we'll need to do a serializer-internal 
copy-to-byte[] in serializedSize, since TSerializer doesn't expose a size 
method.  We can introduce a new version of those serializers in another ticket 
that does not rely on Thrift, but for this ticket let's keep it simple.)

I've started doing that in the last commit posted, but there are more to do and 
I'm out of time for now, so I'm going to hand this off to Yuki.
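A simplified sketch of why serializedSize matters: the sender can write a length-prefixed frame straight to the stream only if it can compute the size without serializing to an intermediate byte[] first. The interface below mimics the shape of IVersionedSerializer but is not its real signature.
{code}
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SizedSerializerSketch
{
    interface VersionedSerializer<T>
    {
        void serialize(T t, DataOutput out, int version) throws IOException;
        long serializedSize(T t, int version);
    }

    // Example payload: a UTF-8 string framed as (int length, bytes).
    static final VersionedSerializer<String> STRING_SERIALIZER = new VersionedSerializer<String>()
    {
        public void serialize(String s, DataOutput out, int version) throws IOException
        {
            byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
            out.writeInt(bytes.length);
            out.write(bytes);
        }

        public long serializedSize(String s, int version)
        {
            return 4 + s.getBytes(StandardCharsets.UTF_8).length;   // must agree with serialize()
        }
    };

    public static void main(String[] args) throws IOException
    {
        String body = "RangeSliceCommand payload";
        int version = 5;
        ByteArrayOutputStream socketStandIn = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(socketStandIn);
        // Size prefix is written before the body, with no intermediate byte[] copy of the body.
        out.writeLong(STRING_SERIALIZER.serializedSize(body, version));
        STRING_SERIALIZER.serialize(body, out, version);
        out.flush();
        System.out.println(socketStandIn.size());   // 8 (prefix) + 4 + 25 = 37
    }
}
{code}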

 Clean up and optimize Message
 -

 Key: CASSANDRA-3617
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3617
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
 Fix For: 1.2


 The Message class has grown largely by accretion and it shows.  There are 
 several problems:
 - Outbound and inbound messages aren't really the same thing and should not 
 be conflated
 - We pre-serialize message bodies to byte[], then copy those bytes onto the 
 Socket buffer, instead of just keeping a reference to the object being 
 serialized and then writing it out directly to the socket
  - MessagingService versioning is poorly encapsulated, scattering version 
  variables and references to things like CachingMessageProducer across the 
  codebase

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4078) StackOverflowError when upgrading to 1.0.8 from 0.8.10

2012-03-23 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4078:
--

Description: 
Hello

I am trying to upgrade our 1-node setup from 0.8.10 to 1.0.8 and seeing the 
following exception when starting up 1.0.8.  We have been running 0.8.10 
without any issues.
 
Attached is the entire log file during startup of 1.0.8.  There are 2 
exceptions:

1. StackOverflowError (line 2599)
2. InstanceAlreadyExistsException (line 3632)

I tried run scrub under 0.8.10 first, it did not help.  Also, I tried 
dropping the column family which caused the exception, it just got the same 
exceptions from another column family.

Thanks


  was:

Hello

I am trying to upgrade our 1-node setup from 0.8.10 to 1.0.8 and seeing the 
following exception when starting up 1.0.8.  We have been running 0.8.10 
without any issues.
 
Attached is the entire log file during startup of 1.0.8.  There are 2 
exceptions:

1. StackOverflowError (line 2599)
2. InstanceAlreadyExistsException (line 3632)

I tried run scrub under 0.8.10 first, it did not help.  Also, I tried 
dropping the column family which caused the exception, it just got the same 
exceptions from another column family.

Thanks


   Priority: Critical  (was: Major)

 StackOverflowError when upgrading to 1.0.8 from 0.8.10
 --

 Key: CASSANDRA-4078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4078
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.10
 Environment: OS: Linux xps.openfin 2.6.35.13-91.fc14.i686 #1 SMP Tue 
 May 3 13:36:36 UTC 2011 i686 i686 i386 GNU/Linux
 Java: JVM vendor/version: Java HotSpot(TM) Server VM/1.6.0_31
Reporter: Wenjun
Assignee: paul cannon
Priority: Critical
 Fix For: 0.8.10

 Attachments: system.log


 Hello
 I am trying to upgrade our 1-node setup from 0.8.10 to 1.0.8 and seeing the 
 following exception when starting up 1.0.8.  We have been running 0.8.10 
 without any issues.
  
 Attached is the entire log file during startup of 1.0.8.  There are 2 
 exceptions:
 1. StackOverflowError (line 2599)
 2. InstanceAlreadyExistsException (line 3632)
 I tried run scrub under 0.8.10 first, it did not help.  Also, I tried 
 dropping the column family which caused the exception, it just got the same 
 exceptions from another column family.
 Thanks

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4067) Report lifetime compaction throughput

2012-03-23 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4067:
--

Reviewer: nickmbailey  (was: jbellis)

 Report lifetime compaction throughput
 -

 Key: CASSANDRA-4067
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4067
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Brandon Williams
Priority: Trivial
  Labels: compaction
 Fix For: 1.1.0

 Attachments: 0001-Track-and-expose-lifetime-bytes-compacted.txt, 
 0002-Track-and-expose-total-compactions.txt


 Would be useful to be able to monitor total compaction throughput without 
 having to poll frequently enough to make sure we get every CompactionInfo 
 object.
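
 Illustrative only: one way a lifetime counter could be kept and exposed over JMX. The
 class name, MBean name, and the place recordCompaction() would be called are assumptions
 made for this sketch, not the contents of the attached patches.
{code}
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical sketch: accumulate totals per finished compaction so monitoring tools
// do not have to poll fast enough to observe every short-lived CompactionInfo.
public class CompactionTotals implements CompactionTotalsMBean
{
    private final AtomicLong totalBytesCompacted = new AtomicLong();
    private final AtomicLong totalCompactionsCompleted = new AtomicLong();

    // would be called once per completed compaction
    public void recordCompaction(long bytes)
    {
        totalBytesCompacted.addAndGet(bytes);
        totalCompactionsCompleted.incrementAndGet();
    }

    public long getTotalBytesCompacted() { return totalBytesCompacted.get(); }
    public long getTotalCompactionsCompleted() { return totalCompactionsCompleted.get(); }

    public void register() throws Exception
    {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        mbs.registerMBean(this, new ObjectName("org.example:type=CompactionTotals"));
    }
}

interface CompactionTotalsMBean
{
    long getTotalBytesCompacted();
    long getTotalCompactionsCompleted();
}
{code}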

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4080) Cut down on the comparisons needed during shouldPurge and needDeserialize

2012-03-23 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4080:
--

Reviewer: yukim  (was: slebresne)

 Cut down on the comparisons needed during shouldPurge and needDeserialize
 -

 Key: CASSANDRA-4080
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4080
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
  Labels: compaction
 Fix For: 1.1.1


 shouldPurge in particular is still a performance sore point with LCS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4079) Check SSTable range before running cleanup

2012-03-23 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4079:
--

 Reviewer: bcoverston
Affects Version/s: (was: 1.0.8)
Fix Version/s: 1.1.0
 Assignee: Jonathan Ellis
   Labels: compaction  (was: )

 Check SSTable range before running cleanup
 --

 Key: CASSANDRA-4079
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4079
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benjamin Coverston
Assignee: Jonathan Ellis
Priority: Minor
  Labels: compaction
 Fix For: 1.1.0


 Before running a cleanup compaction on an SSTable we should check the range 
 to see if the SSTable falls into the range we want to remove. If it doesn't, 
 we can just mark the SSTable as compacted and be done with it; if it does, we 
 can no-op.
 Will not help with STCS, but for LCS, and perhaps some others we may see a 
 benefit here after topology changes.
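
 Purely as an illustration of the kind of interval check involved (not the attached patch,
 and using plain long tokens instead of Cassandra's Token/Range abstractions), a sketch
 that distinguishes the three possible outcomes:
{code}
import java.util.List;

// Illustrative only: 'first'/'last' are an sstable's min/max tokens; 'owned' are the
// non-wrapping (left, right] intervals the node keeps after the topology change.
public final class CleanupCheck
{
    static final class Interval
    {
        final long left, right;
        Interval(long left, long right) { this.left = left; this.right = right; }
        boolean contains(long token) { return token > left && token <= right; }
        boolean intersects(long first, long last) { return first <= right && last > left; }
    }

    enum Action { DROP_WHOLE_SSTABLE, SKIP, REWRITE }

    static Action actionFor(long first, long last, List<Interval> owned)
    {
        boolean touchesOwned = false, fullyOwned = false;
        for (Interval r : owned)
        {
            if (r.intersects(first, last))
                touchesOwned = true;
            if (r.contains(first) && r.contains(last))
                fullyOwned = true;              // whole sstable inside one kept range
        }
        if (!touchesOwned)
            return Action.DROP_WHOLE_SSTABLE;   // nothing in the sstable is kept
        if (fullyOwned)
            return Action.SKIP;                 // nothing in the sstable is removed
        return Action.REWRITE;                  // mixed: run the cleanup compaction
    }
}
{code}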

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4079) Check SSTable range before running cleanup

2012-03-23 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4079:
--

Attachment: 4079.txt

Good idea, patch attached.

 Check SSTable range before running cleanup
 --

 Key: CASSANDRA-4079
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4079
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benjamin Coverston
Assignee: Jonathan Ellis
Priority: Minor
  Labels: compaction
 Fix For: 1.1.0

 Attachments: 4079.txt


 Before running a cleanup compaction on an SSTable we should check the range 
 to see if the SSTable falls into the range we want to remove. If it doesn't, 
 we can just mark the SSTable as compacted and be done with it; if it does, we 
 can no-op.
 Will not help with STCS, but for LCS, and perhaps some others we may see a 
 benefit here after topology changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-23 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3997:
--

Reviewer: thepaul  (was: jbellis)

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: 0001-CASSANDRA-3997.patch, jna.zip


 The serializing cache uses native malloc and free; by making FM pluggable, users 
 will have a choice of gcc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is that 
 both TCMalloc and JEMalloc are kind of single-threaded (at least, 
 they crash in my tests otherwise).
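
 Purely illustrative, not the attached patch: one possible shape of a pluggable allocation
 boundary. Both the interface and the fallback class are invented names.
{code}
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical allocation seam: the cache would call this interface, and the concrete
// implementation (JNA malloc/free, TCMalloc or JEMalloc via another native library,
// or a pure-Java fallback) would be chosen by configuration.
interface OffHeapAllocator
{
    long allocate(long size);   // returns an address/handle, 0 on failure
    void free(long handle);
}

// A no-native fallback useful for tests: hands out opaque handles backed by
// direct ByteBuffers and lets the JVM reclaim them on free().
final class DirectBufferAllocator implements OffHeapAllocator
{
    private final Map<Long, ByteBuffer> live = new ConcurrentHashMap<Long, ByteBuffer>();
    private final AtomicLong nextHandle = new AtomicLong(1);

    public long allocate(long size)
    {
        long handle = nextHandle.getAndIncrement();
        live.put(handle, ByteBuffer.allocateDirect((int) size));
        return handle;
    }

    public void free(long handle)
    {
        live.remove(handle);    // buffer becomes unreachable; GC reclaims the memory
    }
}
{code}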

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3772) Evaluate Murmur3-based partitioner

2012-03-23 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3772:
--

Reviewer: vijay  (was: yukim)

 Evaluate Murmur3-based partitioner
 --

 Key: CASSANDRA-3772
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3772
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Dave Brosius
 Fix For: 1.2

 Attachments: try_murmur3.diff, try_murmur3_2.diff


 MD5 is a relatively heavyweight hash to use when we don't need cryptographic 
 qualities, just a good output distribution.  Let's see how much overhead we 
 can save by using Murmur3 instead.
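
 For a rough sense of the difference, a small, self-contained comparison (not the attached
 patches) of hashing a key with MD5 versus Murmur3, assuming Guava, which provides
 Hashing.murmur3_128(), is on the classpath. Real token generation involves more than this.
{code}
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

import com.google.common.hash.Hashing;

public class HashComparison
{
    public static void main(String[] args) throws Exception
    {
        byte[] key = "some row key".getBytes(StandardCharsets.UTF_8);

        // MD5: cryptographic, 128-bit digest, noticeably more work per call.
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        BigInteger md5Token = new BigInteger(md5.digest(key)).abs();

        // Murmur3: non-cryptographic, also well distributed, much cheaper.
        long murmurToken = Hashing.murmur3_128().hashBytes(key).asLong();

        System.out.println("MD5 token:     " + md5Token);
        System.out.println("Murmur3 token: " + murmurToken);
    }
}
{code}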

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3943) Too many small size sstables after loading data using sstableloader or BulkOutputFormat increases compaction time.

2012-03-22 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3943:
--

Affects Version/s: (was: 1.1.0)
Fix Version/s: 1.2
 Assignee: Stu Hood

 Too many small size sstables after loading data using sstableloader or 
 BulkOutputFormat increases compaction time.
 --

 Key: CASSANDRA-3943
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3943
 Project: Cassandra
  Issue Type: Wish
  Components: Hadoop, Tools
Affects Versions: 0.8.2, 1.1.0
Reporter: Samarth Gahire
Assignee: Stu Hood
Priority: Minor
  Labels: bulkloader, hadoop, ponies, sstableloader, streaming, 
 tools
 Fix For: 1.2

   Original Estimate: 168h
  Remaining Estimate: 168h

 When we create sstables using SimpleUnsortedWriter or BulkOutputFormat, the 
 size of the sstables created is around the buffer size provided.
 But after loading, the sstables created on the cluster nodes are of size around
 {code}( (sstable_size_before_loading) * replication_factor ) / 
 No_Of_Nodes_In_Cluster{code}
 As the number of nodes in the cluster increases, the size of each sstable loaded onto a 
 Cassandra node decreases. Such small sstables take much more time to 
 compact (minor compaction) compared to relatively large sstables.
 One solution we have tried is to increase the buffer size while 
 generating sstables, but as we increase the buffer size, the time taken to 
 generate sstables increases. Is there any solution to this in existing 
 versions, or are you fixing this in a future version?
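
 To make the formula concrete, a worked example; the buffer size, replication factor, and
 node count below are all made-up numbers, not values from this report.
{code}
// Hypothetical numbers, purely to illustrate the formula above.
public class LoadedSSTableSize
{
    public static void main(String[] args)
    {
        long bufferSizeMB = 256;   // size of each generated sstable
        int replicationFactor = 3;
        int nodes = 12;
        System.out.println((bufferSizeMB * replicationFactor) / nodes + " MB per loaded sstable");
        // prints "64 MB per loaded sstable": bigger clusters mean smaller, more numerous sstables
    }
}
{code}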

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3943) Too many small size sstables after loading data using sstableloader or BulkOutputFormat increases compaction time.

2012-03-22 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3943:
--

Affects Version/s: 1.1.0

 Too many small size sstables after loading data using sstableloader or 
 BulkOutputFormat increases compaction time.
 --

 Key: CASSANDRA-3943
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3943
 Project: Cassandra
  Issue Type: Wish
  Components: Hadoop, Tools
Affects Versions: 0.8.2, 1.1.0
Reporter: Samarth Gahire
Assignee: Stu Hood
Priority: Minor
  Labels: bulkloader, hadoop, ponies, sstableloader, streaming, 
 tools
 Fix For: 1.2

   Original Estimate: 168h
  Remaining Estimate: 168h

 When we create sstables using SimpleUnsortedWriter or BulkOutputFormat, the 
 size of the sstables created is around the buffer size provided.
 But after loading, the sstables created on the cluster nodes are of size around
 {code}( (sstable_size_before_loading) * replication_factor ) / 
 No_Of_Nodes_In_Cluster{code}
 As the number of nodes in the cluster increases, the size of each sstable loaded onto a 
 Cassandra node decreases. Such small sstables take much more time to 
 compact (minor compaction) compared to relatively large sstables.
 One solution we have tried is to increase the buffer size while 
 generating sstables, but as we increase the buffer size, the time taken to 
 generate sstables increases. Is there any solution to this in existing 
 versions, or are you fixing this in a future version?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3912) support incremental repair controlled by external agent

2012-03-22 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3912:
--

Reviewer: stuhood

Stu, could you review?

 support incremental repair controlled by external agent
 ---

 Key: CASSANDRA-3912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3912
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
 Fix For: 1.2

 Attachments: CASSANDRA-3912-trunk-v1.txt, 
 CASSANDRA-3912-v2-001-add-nodetool-commands.txt, 
 CASSANDRA-3912-v2-002-fix-antientropyservice.txt


 As a poor man's precursor to CASSANDRA-2699, exposing the ability to repair 
 small parts of a range is extremely useful because it allows you (with external 
 scripting logic) to slowly repair a node's content over time. Other than 
 avoiding the bulkiness of complete repairs, it means that you can safely do 
 repairs even if you absolutely cannot afford e.g. disk space spikes (see 
 CASSANDRA-2699 for what the issues are).
 Attaching a patch that exposes a repairincremental command to nodetool, 
 where you specify a step and the number of total steps. Incrementally 
 performing a repair in 100 steps, for example, would be done by:
 {code}
 nodetool repairincremental 0 100
 nodetool repairincremental 1 100
 ...
 nodetool repairincremental 99 100
 {code}
 An external script can be used to keep track of what has been repaired and 
 when. This should (1) allow incremental repair to happen now/soon, and 
 (2) allow experimentation and evaluation for an implementation of 
 CASSANDRA-2699, which I still think is a good idea. This patch does nothing to 
 help the average deployment, but at least makes incremental repair possible 
 given sufficient effort spent on external scripting.
 The big no-no about the patch is that it is entirely specific to 
 RandomPartitioner and BigIntegerToken. If someone can suggest a way to 
 implement this command generically using the Range/Token abstractions, I'd be 
 happy to hear suggestions.
 An alternative would be to provide a nodetool command that allows you to 
 simply specify the specific token ranges on the command line. It makes using 
 it a bit more difficult, but would mean that it works for any partitioner and 
 token type.
 Unless someone can suggest a better way to do this, I think I'll provide a 
 patch that does this. I'm still leaning towards supporting the simple step N 
 out of M form though.
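
 Purely for illustration, the "step N of M" arithmetic over RandomPartitioner's 0..2^127
 token space. This is only the interval math; the attached patch presumably intersects such
 steps with the node's own ranges, and none of these class or method names come from it.
{code}
import java.math.BigInteger;

// Hypothetical sketch: compute the token sub-range covered by step N of M
// over RandomPartitioner's token space [0, 2^127).
public class IncrementalRepairSteps
{
    private static final BigInteger TOKEN_SPACE = BigInteger.valueOf(2).pow(127);

    // Returns {start, end}; the caller would repair the interval (start, end].
    static BigInteger[] stepRange(int step, int totalSteps)
    {
        BigInteger total = BigInteger.valueOf(totalSteps);
        BigInteger start = TOKEN_SPACE.multiply(BigInteger.valueOf(step)).divide(total);
        BigInteger end = TOKEN_SPACE.multiply(BigInteger.valueOf(step + 1)).divide(total);
        return new BigInteger[] { start, end };
    }

    public static void main(String[] args)
    {
        // e.g. step 0 of 100 and step 99 of 100, matching "repairincremental 0 100" etc.
        for (int step : new int[] { 0, 99 })
        {
            BigInteger[] r = stepRange(step, 100);
            System.out.println("step " + step + ": (" + r[0] + ", " + r[1] + "]");
        }
    }
}
{code}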

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2942) Dropped columnfamilies can leave orphaned data files that do not get cleared on restart

2012-03-22 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2942:
--

Description: 
* Bring up 3 node cluster
* From node1: Run Stress Tool
{code} stress --num-keys=10 --columns=10 --consistency-level=ALL 
--average-size-values --replication-factor=3 --nodes=node1,node2 {code}
* Shutdown node3
* From node1: drop the Standard1 CF in Keyspace1
* Shutdown node2 and node3
* Bring up node1 and node2. Check that the Standard1 files are gone.
{code}
ls -al /var/lib/cassandra/data/Keyspace1/
{code}
* Bring up node3. The log file shows the drop column family occurs
{code}
 INFO 00:51:25,742 Applying migration 9a76f880-b4c5-11e0--8901a7c5c9ce Drop 
column family: Keyspace1.Standard1
{code}
* Restart node3 to clear out dropped tables from the filesystem
{code}
root@cathy3:~/cass-0.8/bin# ls -al /var/lib/cassandra/data/Keyspace1/
total 36
drwxr-xr-x 3 root root 4096 Jul 23 00:51 .
drwxr-xr-x 6 root root 4096 Jul 23 00:48 ..
-rw-r--r-- 1 root root0 Jul 23 00:51 Standard1-g-1-Compacted
-rw-r--r-- 2 root root 5770 Jul 23 00:51 Standard1-g-1-Data.db
-rw-r--r-- 2 root root   32 Jul 23 00:51 Standard1-g-1-Filter.db
-rw-r--r-- 2 root root  120 Jul 23 00:51 Standard1-g-1-Index.db
-rw-r--r-- 2 root root 4276 Jul 23 00:51 Standard1-g-1-Statistics.db
drwxr-xr-x 3 root root 4096 Jul 23 00:51 snapshots
{code}
*Bug:  The files for Standard1 are orphaned on node3*



  was:

* Bring up 3 node cluster
* From node1: Run Stress Tool
{code} stress --num-keys=10 --columns=10 --consistency-level=ALL 
--average-size-values --replication-factor=3 --nodes=node1,node2 {code}
* Shutdown node3
* From node1: drop the Standard1 CF in Keyspace1
* Shutdown node2 and node3
* Bring up node1 and node2. Check that the Standard1 files are gone.
{code}
ls -al /var/lib/cassandra/data/Keyspace1/
{code}
* Bring up node3. The log file shows the drop column family occurs
{code}
 INFO 00:51:25,742 Applying migration 9a76f880-b4c5-11e0--8901a7c5c9ce Drop 
column family: Keyspace1.Standard1
{code}
* Restart node3 to clear out dropped tables from the filesystem
{code}
root@cathy3:~/cass-0.8/bin# ls -al /var/lib/cassandra/data/Keyspace1/
total 36
drwxr-xr-x 3 root root 4096 Jul 23 00:51 .
drwxr-xr-x 6 root root 4096 Jul 23 00:48 ..
-rw-r--r-- 1 root root0 Jul 23 00:51 Standard1-g-1-Compacted
-rw-r--r-- 2 root root 5770 Jul 23 00:51 Standard1-g-1-Data.db
-rw-r--r-- 2 root root   32 Jul 23 00:51 Standard1-g-1-Filter.db
-rw-r--r-- 2 root root  120 Jul 23 00:51 Standard1-g-1-Index.db
-rw-r--r-- 2 root root 4276 Jul 23 00:51 Standard1-g-1-Statistics.db
drwxr-xr-x 3 root root 4096 Jul 23 00:51 snapshots
{code}
*Bug:  The files for Standard1 are orphaned on node3*



Summary: Dropped columnfamilies can leave orphaned data files that do 
not get cleared on restart  (was: If you drop a CF when one node is down the 
files are orphaned on the downed node)

 Dropped columnfamilies can leave orphaned data files that do not get cleared 
 on restart
 ---

 Key: CASSANDRA-2942
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2942
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.0
Reporter: Cathy Daw
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.0.0

 Attachments: 2942.txt


 * Bring up 3 node cluster
 * From node1: Run Stress Tool
 {code} stress --num-keys=10 --columns=10 --consistency-level=ALL 
 --average-size-values --replication-factor=3 --nodes=node1,node2 {code}
 * Shutdown node3
 * From node1: drop the Standard1 CF in Keyspace1
 * Shutdown node2 and node3
 * Bring up node1 and node2. Check that the Standard1 files are gone.
 {code}
 ls -al /var/lib/cassandra/data/Keyspace1/
 {code}
 * Bring up node3. The log file shows the drop column family occurs
 {code}
  INFO 00:51:25,742 Applying migration 9a76f880-b4c5-11e0--8901a7c5c9ce 
 Drop column family: Keyspace1.Standard1
 {code}
 * Restart node3 to clear out dropped tables from the filesystem
 {code}
 root@cathy3:~/cass-0.8/bin# ls -al /var/lib/cassandra/data/Keyspace1/
 total 36
 drwxr-xr-x 3 root root 4096 Jul 23 00:51 .
 drwxr-xr-x 6 root root 4096 Jul 23 00:48 ..
 -rw-r--r-- 1 root root0 Jul 23 00:51 Standard1-g-1-Compacted
 -rw-r--r-- 2 root root 5770 Jul 23 00:51 Standard1-g-1-Data.db
 -rw-r--r-- 2 root root   32 Jul 23 00:51 Standard1-g-1-Filter.db
 -rw-r--r-- 2 root root  120 Jul 23 00:51 Standard1-g-1-Index.db
 -rw-r--r-- 2 root root 4276 Jul 23 00:51 Standard1-g-1-Statistics.db
 drwxr-xr-x 3 root root 4096 Jul 23 00:51 snapshots
 {code}
 *Bug:  The files for Standard1 are orphaned on node3*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 

[jira] [Updated] (CASSANDRA-4072) Clean up DataOutputBuffer

2012-03-21 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4072:
--

Component/s: Core
   Priority: Minor  (was: Major)
   Assignee: Jonathan Ellis

 Clean up DataOutputBuffer
 -

 Key: CASSANDRA-4072
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4072
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor

 The DataOutputBuffer/OutputBuffer split is unnecessarily baroque.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3468) SStable data corruption in 1.0.x

2012-03-21 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3468:
--

Reviewer:   (was: slebresne)
  Labels:   (was: patch)

 SStable data corruption in 1.0.x
 

 Key: CASSANDRA-3468
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3468
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
 Environment: RHEL 6 running Cassandra 1.0.x.
Reporter: Terry Cumaranatunge
 Attachments: 3468-assert.txt


 We have noticed several instances of sstable corruptions in 1.0.x. This has 
 occurred in 1.0.0-rcx and 1.0.0 and 1.0.1. It has happened on multiple nodes 
 and multiple hosts with different disks, so this is the reason the software 
 is suspected at this time. The file system used is XFS, but no resets or any 
 type of failure scenarios have been run to create the problem. We were 
 basically running under load and every so often, we see that the sstable gets 
 corrupted and compaction stops on that node.
 I will attach the relevant sstable files if it lets me do that when I create 
 this ticket.
 ERROR [CompactionExecutor:23] 2011-10-27 11:14:09,309 PrecompactedRow.java 
 (line 119) Skipping row DecoratedKey(128013852116656632841539411062933532114, 
 37303730303138313533) in 
 /var/lib/cassandra/data/MSA/participants-h-8688-Data.db
 java.io.EOFException
 at java.io.RandomAccessFile.readFully(RandomAccessFile.java:399)
 at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
 at 
 org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:388)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:350)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:96)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
 at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:143)
 at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:231)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:115)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:102)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:127)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:102)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:87)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:116)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:99)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
 at 
 com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:179)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:47)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:131)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:114)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 This was Sylvain's analysis:
 I don't have much better news. Basically it seems the 2 last MB of the file 
 are complete garbage (which also explain the mmap error btw). And given where 
 the corruption actually starts, it suggests that it's either a very low level 
 bug in our file writer code that start writting bad data at some point for 
 some reason, or it's corruption not related to Cassandra. But given that, a 
 Cassandra bug sounds fairly unlikely.
 You said that you saw that corruption more than once. Could you be more 
 precise? In particular, did you get it on different hosts? Also, what 

[jira] [Updated] (CASSANDRA-4072) Clean up DataOutputBuffer

2012-03-21 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4072:
--

Attachment: 4072.txt

Patch inlines the relevant parts of OB into DOB. Also replaces FBOS+getBytes 
with DOB+getData in a couple places that look performance-sensitive-ish.
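
As a rough, illustrative approximation of the DOB+getData idea (not the actual Cassandra
classes or the attached patch): a growable buffer that implements DataOutput and exposes its
backing array directly, so callers can read the bytes without the extra copy that
ByteArrayOutputStream.toByteArray() makes.
{code}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;

// Illustrative only: a growable DataOutput whose backing array can be read directly.
public class SimpleDataOutputBuffer extends DataOutputStream
{
    private static class ExposedByteArrayOutputStream extends ByteArrayOutputStream
    {
        byte[] rawBuffer() { return buf; }      // 'buf' and 'count' are protected in BAOS
        int rawLength()    { return count; }
    }

    public SimpleDataOutputBuffer()
    {
        super(new ExposedByteArrayOutputStream());
    }

    // Valid data is getData()[0..getLength())
    public byte[] getData() { return ((ExposedByteArrayOutputStream) out).rawBuffer(); }
    public int getLength()  { return ((ExposedByteArrayOutputStream) out).rawLength(); }
}
{code}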

 Clean up DataOutputBuffer
 -

 Key: CASSANDRA-4072
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4072
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Attachments: 4072.txt


 The DataOutputBuffer/OutputBuffer split is unnecessarily baroque.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4066) Cassandra cluster stops responding on time change (scheduling not using monotonic time?)

2012-03-20 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4066:
--

  Component/s: Core
 Priority: Minor  (was: Major)
Affects Version/s: (was: 1.0.6)
Fix Version/s: 1.1.1
 Assignee: Brandon Williams
   Labels: gossip  (was: )

We make extensive use of Java's ScheduledExecutorService, which does not deal 
well with the system time being pulled out from under it: 
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7139684

I'm willing to live with this for the majority of scheduled tasks; however, it 
might be worth updating Gossip to use its own thread + sleep calls to avoid 
this.

On the other hand, if you didn't have Gossip dying with UAE, it would be very 
difficult to figure out why the rest of the background tasks stopped executing, 
which would cause things to go bad a lot more gradually.

What do you think, Brandon?
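
For illustration, a minimal sketch of the "own thread + sleep" idea, paced by
System.nanoTime(), which is monotonic; this is not the gossip code itself, and the class
name is invented.
{code}
import java.util.concurrent.TimeUnit;

// Illustrative only: run a task at a fixed interval keyed off the monotonic clock,
// so moving the system clock neither stalls nor bursts the loop.
public class MonotonicLoop implements Runnable
{
    private final Runnable task;
    private final long intervalNanos;

    public MonotonicLoop(Runnable task, long intervalMillis)
    {
        this.task = task;
        this.intervalNanos = TimeUnit.MILLISECONDS.toNanos(intervalMillis);
    }

    public void run()
    {
        long next = System.nanoTime();
        while (!Thread.currentThread().isInterrupted())
        {
            try
            {
                task.run();
                next += intervalNanos;
                long sleepNanos = next - System.nanoTime();
                if (sleepNanos > 0)
                    TimeUnit.NANOSECONDS.sleep(sleepNanos);
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();   // exit the loop cleanly
            }
        }
    }
}
{code}
The loop would run on a dedicated thread instead of the shared ScheduledExecutorService,
so only wall-clock-dependent code is affected by a time change.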

 Cassandra cluster stops responding on time change (scheduling not using 
 monotonic time?) 
 -

 Key: CASSANDRA-4066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4066
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux; CentOS6 2.6.32-220.4.2.el6.x86_64
Reporter: David Daeschler
Assignee: Brandon Williams
Priority: Minor
  Labels: gossip
 Fix For: 1.1.1


 The server installation I set up did not have ntpd installed in the base 
 installation. When I noticed that the clocks were skewing I installed ntp and 
 set the date on all the servers in the cluster. A short time later, I started 
 getting UnavailableExceptions on the clients. 
 Also, one server seemed to be unaffected by the time change. That server 
 happened to have its time pushed forward, not backwards like the other 3 in 
 the cluster. This leads me to believe something is running on a 
 timer/schedule that is not monotonic.
 I'm posting this as a bug, but I suppose it might just be part of the 
 communication protocols etc for the cluster and part of the design. But I 
 think the devs should be aware of what I saw.
 Otherwise, thank you for a fantastic product. Even after restarting 75% of 
 the cluster things seem to have recovered nicely.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4063) Expose nodetool cfhistograms for secondary index CFs

2012-03-19 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4063:
--

Reviewer: nickmbailey

 Expose nodetool cfhistograms for secondary index CFs
 

 Key: CASSANDRA-4063
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4063
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Brandon Williams
Priority: Minor
  Labels: jmx
 Attachments: 4063.txt


 With the ObjectName that NodeProbe uses, the JMX query can only match mbeans 
 with type ColumnFamilies.  Secondary index CFs have a type of 
 IndexColumnFamilies, so the query won't match them.
 The [ObjectName 
 documentation|http://docs.oracle.com/javase/6/docs/api/javax/management/ObjectName.html]
  indicates that you can use wildcards, which would be the perfect solution if 
 it actually worked.  I'm not sure if it's some quoted vs non-quoted pattern 
 issue, or if it's particular to the {{newMBeanProxy()}} method, but I could 
 not get wildcards to match the secondary index CFs.  Explicitly setting the 
 type field to IndexColumnFamilies did work.
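
 A minimal sketch of the workaround described above: query both type values explicitly
 instead of relying on a wildcard. The domain and type strings follow the pattern discussed
 in the ticket, but treat the snippet as illustrative rather than NodeProbe's actual code.
{code}
import java.util.HashSet;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

// Illustrative only: collect CF mbeans of both types instead of only "ColumnFamilies".
public class CfMBeanQuery
{
    public static Set<ObjectName> allColumnFamilyBeans(MBeanServerConnection mbs) throws Exception
    {
        Set<ObjectName> names = new HashSet<ObjectName>();
        names.addAll(mbs.queryNames(
            new ObjectName("org.apache.cassandra.db:type=ColumnFamilies,*"), null));
        names.addAll(mbs.queryNames(
            new ObjectName("org.apache.cassandra.db:type=IndexColumnFamilies,*"), null));
        return names;
    }
}
{code}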

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4060) Track time elapsed for deletes in cassandra-cli

2012-03-17 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4060:
--

 Reviewer: xedin
Affects Version/s: (was: 1.0.0)

 Track time elapsed for deletes in cassandra-cli
 ---

 Key: CASSANDRA-4060
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4060
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Radim Kolar
Assignee: Radim Kolar
Priority: Minor
  Labels: trunk
 Fix For: 1.2

 Attachments: del-elapsed-time.txt


 Track elapsed time for deletes too, like it is tracked for get:
 [default@test] get sipdb[34512];
 => (column=kam, value=34...@customer143.sip.ourdomain.net, 
 timestamp=131694724011)
 Returned 1 results.
 Elapsed time: 79 msec(s).
 [default@test] del sipdb[-3212];
 row removed.
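
 The measurement itself is simple; purely as an illustration of the requested behaviour
 (this is not the attached patch, and the names are invented):
{code}
import java.util.concurrent.TimeUnit;

// Illustrative only: wrap an operation and report elapsed time the way gets already do.
public class ElapsedTimer
{
    static void runTimed(Runnable operation, String resultMessage)
    {
        long start = System.nanoTime();
        operation.run();
        long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        System.out.println(resultMessage);
        System.out.println("Elapsed time: " + elapsedMillis + " msec(s).");
    }

    public static void main(String[] args)
    {
        runTimed(new Runnable() { public void run() { /* the remove call would go here */ } },
                 "row removed.");
    }
}
{code}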

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3811) Empty rpc_address prevents running MapReduce job outside a cluster

2012-03-15 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3811:
--

Priority: Minor  (was: Critical)

Changing to Minor.

I don't like to argue about priorities, but Critical means things are badly 
broken; either it doesn't work AT ALL in the common case, or in edge cases it 
can fail catastrophically (data loss or cascading failure).

This is not the case here; we have a problem with an edge case that we barely 
support (jobs from outside the cluster) that does not affect more normal 
setups.  That's minor for the project as a whole.


 Empty rpc_address prevents running MapReduce job outside a cluster
 --

 Key: CASSANDRA-3811
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3811
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 0.8.9, 0.8.10
 Environment: Debian Stable,
 Cassandra 0.8.9,
 Java(TM) SE Runtime Environment (build 1.6.0_26-b03),
 Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
Reporter: Patrik Modesto
Priority: Minor

 Setting rpc_address to empty to make Cassandra listen on all network 
 interfaces breaks running a MapReduce job from outside the cluster. The jobs 
 won't even start, showing these messages:
 {noformat}
 12/01/26 11:15:21 DEBUG  hadoop.ColumnFamilyInputFormat: failed
 connect to endpoint 0.0.0.0
 java.io.IOException: unable to connect to server
at 
 org.apache.cassandra.hadoop.ConfigHelper.createConnection(ConfigHelper.java:389)
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSubSplits(ColumnFamilyInputFormat.java:224)
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.access$200(ColumnFamilyInputFormat.java:73)
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:193)
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:178)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.thrift.transport.TTransportException:
 java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:183)
at 
 org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
at 
 org.apache.cassandra.hadoop.ConfigHelper.createConnection(ConfigHelper.java:385)
... 9 more
 Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:211)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at org.apache.thrift.transport.TSocket.open(TSocket.java:178)
... 11 more
 ...
 Caused by: java.util.concurrent.ExecutionException:
 java.io.IOException: failed connecting to all endpoints
 10.0.18.129,10.0.18.99,10.0.18.98
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:156)
... 19 more
 Caused by: java.io.IOException: failed connecting to all endpoints
 10.0.18.129,10.0.18.99,10.0.18.98
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSubSplits(ColumnFamilyInputFormat.java:241)
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.access$200(ColumnFamilyInputFormat.java:73)
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:193)
at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:178)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 {noformat}
 Describe ring returns:
 {noformat}
 describe_ring returns:
 endpoints: 

[jira] [Updated] (CASSANDRA-4022) Compaction of hints can get stuck in a loop

2012-03-15 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4022:
--

Affects Version/s: 1.2

 Compaction of hints can get stuck in a loop
 ---

 Key: CASSANDRA-4022
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4022
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2
Reporter: Brandon Williams
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2

 Attachments: 4022.txt


 Not exactly sure how I caused this as I was working on something else in 
 trunk, but:
 {noformat}
  INFO 17:41:35,682 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-339-Data.db')]
  INFO 17:41:36,430 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-340-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes 
 for 1 keys at 5.912220MB/s.  Time: 748ms.
  INFO 17:41:36,431 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-340-Data.db')]
  INFO 17:41:37,238 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-341-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes 
 for 1 keys at 5.479976MB/s.  Time: 807ms.
  INFO 17:41:37,239 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-341-Data.db')]
  INFO 17:41:38,163 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-342-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes 
 for 1 keys at 4.786083MB/s.  Time: 924ms.
  INFO 17:41:38,164 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-342-Data.db')]
  INFO 17:41:39,014 GC for ParNew: 274 ms for 1 collections, 541261288 used; 
 max is 1024458752
  INFO 17:41:39,151 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-343-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes 
 for 1 keys at 4.485132MB/s.  Time: 986ms.
  INFO 17:41:39,151 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-343-Data.db')]
  INFO 17:41:40,016 GC for ParNew: 308 ms for 1 collections, 585582200 used; 
 max is 1024458752
  INFO 17:41:40,200 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-344-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes 
 for 1 keys at 4.223821MB/s.  Time: 1,047ms.
  INFO 17:41:40,201 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-344-Data.db')]
  INFO 17:41:41,017 GC for ParNew: 252 ms for 1 collections, 617877904 used; 
 max is 1024458752
  INFO 17:41:41,178 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-345-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes 
 for 1 keys at 4.526449MB/s.  Time: 977ms.
  INFO 17:41:41,179 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-345-Data.db')]
  INFO 17:41:41,885 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-346-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes 
 for 1 keys at 6.263938MB/s.  Time: 706ms.
  INFO 17:41:41,887 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-346-Data.db')]
  INFO 17:41:42,617 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-347-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes for 1 keys at 
 6.066311MB/s.  Time: 729ms.
  INFO 17:41:42,618 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-347-Data.db')]
  INFO 17:41:43,376 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-348-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes for 1 keys at 
 5.834222MB/s.  Time: 758ms.
  INFO 17:41:43,377 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-348-Data.db')]
  INFO 17:41:44,307 Compacted to 
 [/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-349-Data.db,].
   4,637,160 to 4,637,160 (~100% of original) bytes for 1 keys at 
 4.760323MB/s.  Time: 929ms.
  INFO 17:41:44,308 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hd-349-Data.db')]
  INFO 17:41:45,021 GC for ParNew: 

[jira] [Updated] (CASSANDRA-4054) SStableImport and SStableExport does not serialize row level deletion

2012-03-15 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4054:
--

Affects Version/s: (was: 1.0.8)
   0.5
Fix Version/s: 1.1.0

Changing fix version to 1.1.0 since this would be backwards-incompatible.  If 
we miss 1.1.0 we can push to 1.2.

 SStableImport and SStableExport does not serialize row level deletion
 -

 Key: CASSANDRA-4054
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4054
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.5
Reporter: Zhu Han
 Fix For: 1.1.0


 SSTableImport and SSTableExport do not serialize/deserialize the row-level 
 deletion info to/from the JSON file. This brings back the deleted data after 
 restoring from the JSON file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4050) cassandra unnecessarily holds file locks on snapshot files

2012-03-14 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4050:
--

Priority: Minor  (was: Major)

Currently we take the snapshot using mklink /H, but I've experimented with the 
Java7 Files.createLink and see the same behavior.  It may simply be normal 
behavior for Windows that links are considered open until their creator is 
closed.
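
For reference, the Java 7 API mentioned above; this standalone snippet only demonstrates the
call and is not Cassandra's snapshot code. The paths are supplied by the user.
{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Demonstrates java.nio.file.Files.createLink only.
public class HardLinkDemo
{
    public static void main(String[] args) throws Exception
    {
        Path existing = Paths.get(args[0]);   // e.g. an sstable component
        Path link = Paths.get(args[1]);       // e.g. a name under snapshots/<tag>/
        Files.createLink(link, existing);     // hard link: both names point at the same data
        System.out.println("linked " + link + " -> " + existing);
    }
}
{code}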



 cassandra unnecessarily holds file locks on snapshot files
 --

 Key: CASSANDRA-4050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4050
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.8
 Environment: Windows 7
Reporter: Jim Newsham
Priority: Minor

 I'm using Cassandra 1.0.8, on Windows 7.  When I take a snapshot of the 
 database, I find that I am unable to delete the snapshot directory (i.e., dir 
 named {datadir}\{keyspacename}\snapshots\{snapshottag}) while Cassandra is 
 running:  The action can't be completed because the folder or a file in it 
 is open in another program.  Close the folder or file and try again [in 
 Windows Explorer].  If I terminate Cassandra, then I can delete the directory 
 with no problem.
 I expect to be able to move or delete the snapshotted files while Cassandra 
 is running, as this should not affect the runtime operation of Cassandra.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3781) CQL support for changing row key type in ALTER TABLE

2012-03-13 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3781:
--

 Labels: cql  (was: )
Summary: CQL support for changing row key type in ALTER TABLE  (was: CQL 
support for altering key_validation_class in ALTER TABLE)

 CQL support for changing row key type in ALTER TABLE
 

 Key: CASSANDRA-3781
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3781
 Project: Cassandra
  Issue Type: Improvement
Reporter: Rick Branson
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 1.1.0

 Attachments: 3781.patch


 There is currently no way to alter the key_validation_class from CQL. jbellis 
 suggested that this could be done by being able to ALTER the type of the KEY 
 alias.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-2388) ColumnFamilyRecordReader fails for a given split because a host is down, even if records could reasonably be read from other replica.

2012-03-08 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2388:
--

Priority: Minor  (was: Major)

Marking as minor since the job should get re-submitted, and it's very difficult 
to reproduce when the tasktrackers are colocated with cassandra nodes (the 
recommended configuration).

 ColumnFamilyRecordReader fails for a given split because a host is down, even 
 if records could reasonably be read from other replica.
 -

 Key: CASSANDRA-2388
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2388
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 0.6
Reporter: Eldon Stegall
Assignee: Mck SembWever
Priority: Minor
  Labels: hadoop, inputformat
 Fix For: 1.1.1

 Attachments: 0002_On_TException_try_next_split.patch, 
 CASSANDRA-2388-addition1.patch, CASSANDRA-2388-extended.patch, 
 CASSANDRA-2388.patch, CASSANDRA-2388.patch, CASSANDRA-2388.patch, 
 CASSANDRA-2388.patch


 ColumnFamilyRecordReader only tries the first location for a given split. We 
 should try multiple locations for a given split.
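
 A minimal sketch of the suggested behaviour: iterate over a split's replica locations and
 fall back on failure instead of giving up on the first endpoint. The Connector interface is
 a placeholder, not the real Hadoop/Thrift plumbing.
{code}
import java.io.IOException;
import java.util.List;

// Illustrative only: fall back to the next replica for a split instead of failing
// the whole task because the first endpoint is down.
public class ReplicaFallback
{
    interface Connector<T> { T connect(String endpoint) throws IOException; }

    static <T> T connectToAny(List<String> endpoints, Connector<T> connector) throws IOException
    {
        IOException last = null;
        for (String endpoint : endpoints)
        {
            try
            {
                return connector.connect(endpoint);
            }
            catch (IOException e)
            {
                last = e;   // remember the failure, try the next replica
            }
        }
        throw new IOException("failed connecting to all endpoints " + endpoints, last);
    }
}
{code}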

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3926) all column validator options are not represented in cli help

2012-03-07 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3926:
--

Reviewer: xedin

 all column validator options are not represented in cli help
 

 Key: CASSANDRA-3926
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3926
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation  website
Affects Versions: 0.8.10, 1.0.7
Reporter: Jeremy Hanna
Assignee: Kirk True
Priority: Minor
  Labels: cli, lhf
 Fix For: 1.1.0

 Attachments: trunk-2530.txt


 The options added to column validators from CASSANDRA-2530 are not shown as 
 options in the CLI help.  I was going to create a column family with a float 
 validator and double checked the help and it wasn't shown.  So I just had to 
 double check that I could.  Would be nice to have those added to those docs, 
 even though CQL is the way forward.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3792) add type information to new schema_ columnfamilies

2012-03-07 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3792:
--

Reviewer: jbellis  (was: xedin)
Assignee: (was: Jonathan Ellis)

 add type information to new schema_ columnfamilies
 --

 Key: CASSANDRA-3792
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3792
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Jonathan Ellis
 Fix For: 1.1.0


 Should also fix the quotes that the current Thrift-based serialization embeds 
 in string schema data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3997) Make SerializingCache Memory Pluggable

2012-03-06 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3997:
--

 Reviewer: jbellis
Affects Version/s: (was: 1.2)

 Make SerializingCache Memory Pluggable
 --

 Key: CASSANDRA-3997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3997
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
  Labels: cache
 Fix For: 1.2

 Attachments: jna.zip


 The serializing cache uses native malloc and free; by making FM pluggable, users 
 will have a choice of gcc malloc, TCMalloc, or JEMalloc as needed. 
 Initial tests show less fragmentation with JEMalloc, but the only issue is that 
 both TCMalloc and JEMalloc are kind of single-threaded (at least, 
 they crash in my tests otherwise).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter

2012-03-05 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3885:
--

Description: 
This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow 
sub-sub-tasks.

We need to support multiple ranges in a SliceQueryFilter, and we want querying 
them to be efficient, i.e., one pass through the row to get all of the ranges, 
rather than one pass per range.

Supercolumns are irrelevant since the goal is to replace them anyway.  Ignore 
supercolumn-related code or rip it out, whichever is easier.

This is ONLY dealing with the storage engine part, not the StorageProxy and 
Command intra-node messages or the Thrift or CQL client APIs.  Thus, a unit 
test should be added to ColumnFamilyStoreTest to demonstrate that it works.

  was:
This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow 
sub-sub-tasks.

We need to support multiple ranges in a SQF, and we want querying them to be 
efficient, i.e., one pass through the row to get all of the ranges, rather than 
one pass per range.

Supercolumns are irrelevant since the goal is to replace them anyway.  Ignore 
supercolumn-related code or rip it out, whichever is easier.

This is ONLY dealing with the storage engine part, not the StorageProxy and 
Command intra-node messages or the Thrift or CQL client APIs.  Thus, a unit 
test should be added to ColumnFamilyStoreTest to demonstrate that it works.


 Support multiple ranges in SliceQueryFilter
 ---

 Key: CASSANDRA-3885
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3885
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Todd Nine
 Fix For: 1.2


 This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow 
 sub-sub-tasks.
 We need to support multiple ranges in a SliceQueryFilter, and we want 
 querying them to be efficient, i.e., one pass through the row to get all of 
 the ranges, rather than one pass per range.
 Supercolumns are irrelevant since the goal is to replace them anyway.  Ignore 
 supercolumn-related code or rip it out, whichever is easier.
 This is ONLY dealing with the storage engine part, not the StorageProxy and 
 Command intra-node messages or the Thrift or CQL client APIs.  Thus, a unit 
 test should be added to ColumnFamilyStoreTest to demonstrate that it works.
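
 Purely as an illustration of the "one pass for all ranges" idea (names invented, Strings in
 place of real column names and comparators): with both the columns and the ranges sorted, a
 single merge-like walk over the row collects the matches for every range at once.
{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: single-pass scan of sorted column names against sorted,
// non-overlapping ranges.
public class MultiRangeScan
{
    static class Range
    {
        final String start, end;                       // inclusive bounds
        Range(String start, String end) { this.start = start; this.end = end; }
    }

    static List<String> scan(List<String> sortedColumnNames, List<Range> sortedRanges)
    {
        List<String> hits = new ArrayList<String>();
        int r = 0;
        for (String name : sortedColumnNames)
        {
            while (r < sortedRanges.size() && sortedRanges.get(r).end.compareTo(name) < 0)
                r++;                                   // advance past ranges that ended already
            if (r == sortedRanges.size())
                break;                                 // no ranges left; still only one pass
            if (sortedRanges.get(r).start.compareTo(name) <= 0)
                hits.add(name);                        // inside the current range
        }
        return hits;
    }
}
{code}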

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3985) Ensure a directory is selected for Compaction

2012-03-05 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3985:
--

Reviewer: xedin

 Ensure a directory is selected for Compaction
 -

 Key: CASSANDRA-3985
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3985
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Minor
 Attachments: cassandra-1.0-3985.txt


 From http://www.mail-archive.com/user@cassandra.apache.org/msg20757.html
 CompactionTask.execute() checks if there is a valid compactionFileLocation 
 only if partialCompactionsAcceptable(). upgradesstables results in a 
 CompactionTask with userdefined set, so the valid location check is not 
 performed. 
 The result is an NPE; partial stack: 
 {code:java}
 $ nodetool -h localhost upgradesstables
 Error occured while upgrading the sstables for keyspace MyKeySpace
 java.util.concurrent.ExecutionException: java.lang.NullPointerException
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:203)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:219)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:995)
 at 
 org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:1648)
 snip
 Caused by: java.lang.NullPointerException
 at java.io.File.<init>(File.java:222)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTempSSTablePath(ColumnFamilyStore.java:641)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTempSSTablePath(ColumnFamilyStore.java:652)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.createCompactionWriter(ColumnFamilyStore.java:1888)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:151)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$4.perform(CompactionManager.java:229)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:182)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 {code}
 (night time here, will fix tomorrow, anyone else feel free to fix it.)
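
 A minimal sketch of the kind of guard implied above, not the eventual fix: validate the
 chosen location unconditionally, even for user-defined compactions, so a missing data
 directory fails with a clear message instead of an NPE. Names are invented.
{code}
import java.io.File;
import java.io.IOException;

// Illustrative only: always validate the chosen compaction location before
// building any file paths, including for upgradesstables-style compactions.
public class CompactionLocationGuard
{
    static File requireLocation(File chosenLocation, long estimatedSize) throws IOException
    {
        if (chosenLocation == null)
            throw new IOException("insufficient disk space to write " + estimatedSize
                                  + " bytes: no data directory has enough free space");
        return chosenLocation;
    }
}
{code}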

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3996) Keys index skips results

2012-03-04 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3996:
--

Reviewer: tjake

 Keys index skips results
 

 Key: CASSANDRA-3996
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3996
 Project: Cassandra
  Issue Type: Bug
Reporter: Dmitry Petrashko
 Attachments: KeysSearcher_fix_and_refactor.patch


 While scanning a results page, if the range index meets a result already seen in 
 the previous result set it decreases columnsRead, which causes the next iteration to 
 treat columnsRead < rowsPerQuery as if the last page was not full, so the scan is done.
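
 To make the failure mode concrete, a small self-contained simulation of the described flaw;
 all names and numbers are invented and this is not the attached patch.
{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative only: decrementing the per-page count when a duplicate from the previous
// page is skipped makes the "last page" test (columnsRead < pageSize) fire early, so
// later rows are never returned.
public class PagingBugDemo
{
    public static void main(String[] args)
    {
        int total = 10, pageSize = 4;
        Set<Integer> seen = new HashSet<Integer>();
        List<Integer> results = new ArrayList<Integer>();

        int startKey = 0;
        while (true)
        {
            int columnsRead = 0;
            int last = startKey;
            // each page starts at the previous page's last key, which produces one duplicate
            for (int key = startKey; key < Math.min(startKey + pageSize, total); key++)
            {
                columnsRead++;
                last = key;
                if (!seen.add(key))
                    columnsRead--;          // the problematic adjustment for the duplicate
                else
                    results.add(key);
            }
            if (columnsRead < pageSize)
                break;                      // treats a full-but-deduplicated page as the last one
            startKey = last;
        }
        System.out.println(results);        // stops early: not all of 0..9 are returned
    }
}
{code}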

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3952) avoid quadratic startup time in LeveledManifest

2012-03-03 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3952:
--

Reviewer: scode
Assignee: Dave Brosius

 avoid quadratic startup time in LeveledManifest
 ---

 Key: CASSANDRA-3952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3952
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Dave Brosius
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1

 Attachments: speed_up_level_of.diff


 Checking that each sstable is in the manifest on startup is O(N**2) in the 
 number of sstables:
 {code}
 // ensure all SSTables are in the manifest
 for (SSTableReader ssTableReader : cfs.getSSTables())
 {
     if (manifest.levelOf(ssTableReader) < 0)
         manifest.add(ssTableReader);
 }
 {code}
 {code}
 private int levelOf(SSTableReader sstable)
 {
     for (int level = 0; level < generations.length; level++)
     {
         if (generations[level].contains(sstable))
             return level;
     }
     return -1;
 }
 {code}
 Note that the contains call is a linear List.contains.
 We need to switch to a sorted list and bsearch, or a tree, to support 
 TB-levels of data in LeveledCompactionStrategy.
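 One possible shape of the fix, sketched with a hash-based reverse index rather 
 than the sorted-list/tree options mentioned above (illustrative names, not the 
 committed patch): keep a map from sstable to level so the startup membership 
 check is constant time per sstable instead of a scan over every generation.
 {code:java}
 import java.util.HashMap;
 import java.util.Map;

 // Sketch only: reverse index from sstable to level, maintained alongside the
 // per-level generations lists.
 final class LevelIndexSketch<T>
 {
     private final Map<T, Integer> levelBySSTable = new HashMap<>();

     void add(T sstable, int level)
     {
         levelBySSTable.put(sstable, level);
     }

     // O(1) lookup instead of a linear List.contains per generation
     int levelOf(T sstable)
     {
         Integer level = levelBySSTable.get(sstable);
         return level == null ? -1 : level;
     }
 }
 {code}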

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3989) nodetool cleanup/scrub/upgradesstables promotes all sstables to next level (LeveledCompaction)

2012-03-02 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3989:
--

Affects Version/s: (was: 1.0.7)
   1.0.0
Fix Version/s: 1.1.0
   1.0.9

 nodetool cleanup/scrub/upgradesstables promotes all sstables to next level 
 (LeveledCompaction)
 --

 Key: CASSANDRA-3989
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3989
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
 Environment: RHEL6
Reporter: Maki Watanabe
Priority: Minor
 Fix For: 1.0.9, 1.1.0

 Attachments: 
 0001-Fix-promote-not-to-promote-files-at-cleanup-compacti.patch


 1.0.7 + LeveledCompactionStrategy
 If you run nodetool cleanup, scrub, or upgradesstables, Cassandra executes a 
 compaction for each sstable. During that compaction it puts the new sstable at 
 the level above the original sstable. If you run cleanup enough times, sstables 
 reach the highest level, and CASSANDRA-3608 occurs at the next cleanup.
 Reproduce procedure:
 # create column family CF1 with compaction_strategy=LeveledCompactionStrategy 
 and compaction_strategy_options={sstable_size_in_mb: 5};
 # Insert some data into CF1.
 # nodetool flush
 # Verify the sstable is created at L1 in CF1.json
 # nodetool cleanup
 # Verify sstable in L1 is removed and new sstable is created at L2 in CF1.json
 # repeat nodetool cleanup some times
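 A minimal sketch of the behaviour the fix is after (illustrative, not the 
 attached patch): a single-sstable rewrite produced by cleanup/scrub/ 
 upgradesstables should keep its output at the original level instead of being 
 treated like a normal leveled promotion.
 {code:java}
 // Sketch only: pick the output level when a compaction replaces sstables
 // in the leveled manifest.
 static int outputLevel(int originalLevel, int inputSSTableCount)
 {
     // cleanup/scrub/upgradesstables rewrite one sstable in place; promoting
     // that output walks the data up a level on every run for no benefit.
     if (inputSSTableCount == 1)
         return originalLevel;
     // a genuine leveled compaction merges several sstables and promotes the result
     return originalLevel + 1;
 }
 {code}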

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3983) Change order of directory searching for c*.in.sh

2012-03-01 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3983:
--

Priority: Minor  (was: Major)
Assignee: paul cannon

 Change order of directory searching for c*.in.sh
 

 Key: CASSANDRA-3983
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3983
 Project: Cassandra
  Issue Type: Improvement
Reporter: Nick Bailey
Assignee: paul cannon
Priority: Minor

 When you have a c* package installed but attempt to run from a source build, 
 'bin/cassandra' will search the packaged dirs for 'cassandra.in.sh' before 
 searching the dirs in your source build. We should reverse the order of that 
 search so it checks locally first. Also, the init scripts for a package should 
 set the environment variables correctly so that no searching needs to be done 
 and there is no risk of the init scripts loading the wrong file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3976) [patch] don't compare byte arrays with ==

2012-03-01 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3976:
--

 Reviewer: jbellis
Affects Version/s: 1.1.0
Fix Version/s: 1.1.0
 Assignee: Dave Brosius

 [patch] don't compare byte arrays with ==
 -

 Key: CASSANDRA-3976
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3976
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.1.0
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1.0

 Attachments: cmp_bytearrays_w_equals.diff, 
 cmp_bytearrays_w_equals_2.diff


 The code compares byte arrays with ==; use Arrays.equals instead.
 The patch is against trunk.
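 A minimal standalone illustration of why this matters (not the attached diff): 
 == on arrays compares references, so two arrays with identical contents still 
 compare unequal.
 {code:java}
 import java.util.Arrays;

 public class ByteArrayEqualityDemo
 {
     public static void main(String[] args)
     {
         byte[] a = "row-key".getBytes();
         byte[] b = "row-key".getBytes();

         System.out.println(a == b);              // false: different array objects
         System.out.println(Arrays.equals(a, b)); // true: same contents
     }
 }
 {code}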

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3988) NullPointerException in org.apache.cassandra.service.AntiEntropyService when repair finds a keyspace with no CFs

2012-03-01 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3988:
--

 Priority: Minor  (was: Major)
Fix Version/s: 1.0.9
 Assignee: Sylvain Lebresne

 NullPointerException in org.apache.cassandra.service.AntiEntropyService when 
 repair finds a keyspace with no CFs
 

 Key: CASSANDRA-3988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3988
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Bill Hathaway
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.0.9


 2012-03-01 21:38:09,039 [RMI TCP Connection(142)-10.253.106.21] INFO  
 StorageService - Starting repair command #15, repairing 3 ranges.
 2012-03-01 21:38:09,039 [AntiEntropySessions:14] INFO  AntiEntropyService - 
 [repair #d68369f0-63e6-11e1--8add8b9398fd] new session: will sync 
 /10.253.106.21, /10.253.106.248, /10.253.106.247 on range 
 (85070591730234615865843651857942052864,106338239662793269832304564822427566080]
  for PersonalizationDataService2.[]
 2012-03-01 21:38:09,039 [AntiEntropySessions:14] ERROR 
 AbstractCassandraDaemon - Fatal exception in thread 
 Thread[AntiEntropySessions:14,5,RMI Runtime]
 java.lang.NullPointerException
 at 
 org.apache.cassandra.service.AntiEntropyService$RepairSession.runMayThrow(AntiEntropyService.java:691)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
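 The session above is created for PersonalizationDataService2 with an empty 
 column family list ([]), which is what the repair code then trips over. A 
 minimal sketch of the obvious guard (illustrative, not the actual fix): skip a 
 repair session that has no column families to sync.
 {code:java}
 // Sketch only: bail out before building per-CF repair state.
 static boolean nothingToRepair(String keyspace, String[] columnFamilies)
 {
     if (columnFamilies == null || columnFamilies.length == 0)
     {
         System.out.println("No column families to repair for keyspace " + keyspace + ", skipping session");
         return true;
     }
     return false;
 }
 {code}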

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3442) TTL histogram for sstable metadata

2012-02-28 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3442:
--

Attachment: 3442-v3.txt

v3 attached.  It's a bit of two steps forward, one step back:

- renames RowStats -> ColumnStats; separates row size computation from 
column/tombstone counts, moves ColumnStats computation out of the serializer and 
into AbstractCompactedRow
- I switched from checking instanceof ExpiringColumn to instanceof 
DeletedColumn, since an ExpiringColumn just means it will expire eventually (at 
which point it turns into a DeletedColumn), whereas a DeletedColumn is a 
tombstone that will be eligible for dropping after gc_grace_seconds.  A common 
use case for TTL is to expire all data in a row after N days; if we're just 
going by "this column has a TTL" we'll compact these sstables daily even if 
none of the data has actually expired yet.  Switching to checking for 
DeletedColumn instead mitigates this a little.

However, the more I think about it, the more I think what we *really* want to 
track is a histogram of *when tombstones are eligible to be dropped*, relative 
to the sstable creation time.  So, if I had a column that expired after 30 
days, and a gc_grace_seconds of 10 days, I'd add an entry for 40 days to the 
histogram.  If I had a new manual delete, I'd add an entry for 10 days.

This would allow us to have a good estimate of *how much of the sstable could 
actually be cleaned out by compaction*, and we could drop the 
single_compaction_interval code entirely.
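
A minimal sketch of that bookkeeping (illustrative names, not part of v3; it 
assumes the column was written at roughly the sstable's creation time): compute, 
relative to sstable creation, when each tombstone becomes droppable, and feed 
that offset into the per-sstable histogram.
{code:java}
// Sketch only: offset (seconds after sstable creation) at which this entry's
// tombstone becomes eligible for purging.
static long droppableAtOffset(long sstableCreatedAtSec,
                              boolean expiring, long ttlSec,     // TTL'd column
                              long deletedAtSec,                 // manual delete timestamp
                              long gcGraceSec)
{
    long droppableAt = expiring
                     ? sstableCreatedAtSec + ttlSec + gcGraceSec // e.g. 30d TTL + 10d grace = 40d
                     : deletedAtSec + gcGraceSec;                // e.g. fresh delete = 10d
    return droppableAt - sstableCreatedAtSec;                    // value added to the histogram
}
{code}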

What do you think?

Minor note: the new test seems fairly involved -- what would we lose by just 
testing compaction of a single sstable w/ tombstones? 

 TTL histogram for sstable metadata
 --

 Key: CASSANDRA-3442
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3442
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
Priority: Minor
  Labels: compaction
 Fix For: 1.2

 Attachments: 3442-v3.txt, cassandra-1.1-3442.txt


 Under size-tiered compaction, you can generate large sstables that compact 
 infrequently.  With expiring columns mixed in, we could waste a lot of space 
 in this situation.
 If we kept a TTL EstimatedHistogram in the sstable metadata, we could do a 
 single-sstable compaction against sstables with over 20% (?) expired data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3862) RowCache misses Updates

2012-02-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3862:
--

Attachment: 3862-v8.txt

v8 attached w/ long sentinel and IRowCacheEntry.

 RowCache misses Updates
 ---

 Key: CASSANDRA-3862
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3862
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6
Reporter: Daniel Doubleday
Assignee: Sylvain Lebresne
 Fix For: 1.1.0

 Attachments: 3862-7.txt, 3862-cleanup.txt, 3862-v2.patch, 
 3862-v4.patch, 3862-v5.txt, 3862-v6.txt, 3862-v8.txt, 3862.patch, 
 3862_v3.patch, include_memtables_in_rowcache_read.patch


 While performing stress tests to find any race problems for CASSANDRA-2864 I 
 guess I (re-)found one for the standard on-heap row cache.
 During my stress test I have lots of threads running, some of them only 
 reading, others writing and re-reading the value.
 This seems to happen:
 - Reader tries to read row A for the first time doing a getTopLevelColumns
 - Row A which is not in the cache yet is updated by Writer. The row is not 
 eagerly read during write (because we want fast writes) so the writer cannot 
 perform a cache update
 - Reader puts the row in the cache which is now missing the update
 I already asked about this some time ago on the mailing list but unfortunately 
 didn't dig further after I got no answer, since I assumed that I had just 
 missed something. In a way I still do, but I haven't found any locking 
 mechanism that makes sure this cannot happen.
 The problem can be reproduced with every run of my stress test. When I 
 restart the server the expected column is there. It's just missing from the 
 cache.
 To test I have created a patch that merges memtables with the row cache. With 
 the patch the problem is gone.
 I can also reproduce it in 0.8. I haven't checked 1.1, but I haven't found any 
 relevant change there either, so I assume the same applies there.
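 A minimal, self-contained sketch of the race described above (a generic map in 
 place of Cassandra's row cache; names are illustrative): the writer only 
 refreshes rows already cached, so a reader can publish a copy it loaded before 
 the write.
 {code:java}
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;

 // Sketch only: a stale value gets cached if a write lands between the reader's
 // load and its cache insert.
 final class RowCacheRaceSketch
 {
     private final ConcurrentMap<String, String> rowCache = new ConcurrentHashMap<>();
     private volatile String storedRow = "v1"; // stands in for memtable/sstable state

     String read(String key)
     {
         String cached = rowCache.get(key);
         if (cached != null)
             return cached;
         String loaded = storedRow;            // (1) reader loads the row
         // ... writer may update storedRow here; seeing no cache entry, it skips the cache ...
         rowCache.putIfAbsent(key, loaded);    // (2) reader publishes the now-stale copy
         return rowCache.get(key);
     }

     void write(String key, String value)
     {
         storedRow = value;                                 // fast write: no eager read
         rowCache.computeIfPresent(key, (k, old) -> value); // only refreshes rows already cached
     }
 }
 {code}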

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3959) [patch] report bad meta data field in cli instead of silently ignoring

2012-02-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3959:
--

Fix Version/s: 1.1.0
 Assignee: Pavel Yaskevich

 [patch] report bad meta data field in cli instead of silently ignoring
 --

 Key: CASSANDRA-3959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3959
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Dave Brosius
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 1.1.0

 Attachments: better_cli_errors.diff


 If the cli is parsing an
 ^(ARRAY ^(HASH ^(PAIR .. ..) ^(PAIR .. ..)) ^(HASH ...))
 and a hash pair has a key that is unrecognized, it just ignores it and 
 continues; it would be better to report it. The patch does this.
 For instance, in
 update column family cf with column_metadata = 
 [{comparator_type:UTF8Type,column_name:idx,validation_class:IntegerType,index_type:0,index_name:idxname}];
 comparator_type is not processed, just ignored.
 (patch against trunk)
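 A minimal sketch of the behaviour being asked for (illustrative names, not the 
 attached diff): check each key in a metadata hash against the set of supported 
 fields and report anything unrecognized instead of dropping it silently.
 {code:java}
 import java.util.Map;
 import java.util.Set;

 // Sketch only: validate the keys of one column_metadata hash.
 final class MetadataKeyCheck
 {
     static void validate(Map<String, String> pair, Set<String> supportedKeys)
     {
         for (String key : pair.keySet())
         {
             if (!supportedKeys.contains(key))
                 throw new IllegalArgumentException("Unsupported column_metadata field: " + key);
         }
     }
 }
 {code}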

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3959) [patch] report bad meta data field in cli instead of silently ignoring

2012-02-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3959:
--

Reviewer: xedin
Assignee: Dave Brosius  (was: Pavel Yaskevich)

 [patch] report bad meta data field in cli instead of silently ignoring
 --

 Key: CASSANDRA-3959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3959
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1.0

 Attachments: better_cli_errors.diff


 If the cli is parsing an
 ^(ARRAY ^(HASH ^(PAIR .. ..) ^(PAIR .. ..)) ^(HASH ...))
 and a hash pair has a key that is unrecognized, it just ignores it and 
 continues; it would be better to report it. The patch does this.
 For instance, in
 update column family cf with column_metadata = 
 [{comparator_type:UTF8Type,column_name:idx,validation_class:IntegerType,index_type:0,index_name:idxname}];
 comparator_type is not processed, just ignored.
 (patch against trunk)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3953) Replace deprecated and removed CfDef and KsDef attributes in thrift spec

2012-02-27 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3953:
--

Reviewer: xedin

 Replace deprecated and removed CfDef and KsDef attributes in thrift spec
 

 Key: CASSANDRA-3953
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3953
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.0.0
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
  Labels: thrift_protocol
 Fix For: 1.1.0


 In a discussion on irc this morning around the interface backwards 
 compatibility topic (as explained in CASSANDRA-3951), the opinion was 
 expressed that it might not hurt to provide backwards compat for c* servers 
 as well as clients.
 This could be done by adding back all CfDef and KsDef attributes that were 
 removed since thrift spec 19.0.0 (0.7.0-beta2). Namely:
 * bool CfDef.preload_row_cache (only in 0.7.0 betas; probably not necessary)
 * double CfDef.row_cache_size
 * double CfDef.key_cache_size
 * i32 CfDef.row_cache_save_period_in_seconds
 * i32 CfDef.key_cache_save_period_in_seconds
 * i32 CfDef.memtable_flush_after_mins
 * i32 CfDef.memtable_throughput_in_mb
 * double CfDef.memtable_operations_in_millions
 * string CfDef.row_cache_provider
 * i32 CfDef.row_cache_keys_to_save
 * double CfDef.merge_shards_chance
 * i32 KsDef.replication_factor
 Obviously these attributes should not be expected to have any effect when 
 used with the current version of Cassandra; they may be marked "ignored", 
 "unused", or "deprecated", or whatever, as appropriate.
 This should allow library software to be built against one thrift spec (the 
 latest) and be then expected to work (keeping all necessary attributes 
 available and usable) against any Cassandra version back to 0.7.0-beta2.
 (To really achieve this goal 100%, we should reinstate the 
 system_rename_column_family() and system_rename_keyspace() calls too, and 
 just have them raise InvalidRequestException, but they never really worked 
 anyway, so it's probably better to leave them out.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



