[jira] [Resolved] (CASSANDRA-4167) nodetool compactionstats displays the compaction's remaining time

2012-04-18 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4167.
---

Resolution: Fixed
  Reviewer: jbellis
  Assignee: Fabien Rousseau

committed, thanks!

 nodetool compactionstats displays the compaction's remaining time
 -

 Key: CASSANDRA-4167
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4167
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Fabien Rousseau
Assignee: Fabien Rousseau
Priority: Trivial
 Fix For: 1.1.0

 Attachments: 
 4167-nodetool-compaction-displays-remaining-time-v2.patch, 
 4167-nodetool-compactions-displays-remaining-time.patch


 nodetool compactionstats displays active compactions with their progress (a 
 percentage of their completion).
 For big compactions (up to a few hundred GB), it is sometimes difficult to 
 know how much time is left before the compaction ends.
 The attached patch also displays the remaining time of active compactions, 
 based on the compaction_throughput parameter.
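 As a rough illustration of the idea (a sketch only, not the attached patch; 
 the method and parameter names below are assumptions), the remaining time can 
 be derived from the compaction's remaining bytes and the configured throughput:
 {code}
 // Illustrative sketch: estimate seconds left for one active compaction.
 public static long remainingTimeInSeconds(long totalBytes, long completedBytes,
                                           int throughputMbPerSec)
 {
     if (throughputMbPerSec <= 0)
         return -1; // throttling disabled: no meaningful estimate
     long remainingBytes = totalBytes - completedBytes;
     long throughputBytesPerSec = (long) throughputMbPerSec * 1024L * 1024L;
     return remainingBytes / throughputBytesPerSec;
 }
 {code}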

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4171) cql3 ALTER TABLE foo WITH default_validation=int has no effect

2012-04-18 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4171.
---

Resolution: Fixed

We should raise an error for trying to use default_validation under cql3.  The 
right way to model this would be something like:

{noformat}
CREATE TABLE test (
foo text,
i   int,
PRIMARY KEY (foo, i)
) WITH COMPACT STORAGE;
{noformat}


 cql3 ALTER TABLE foo WITH default_validation=int has no effect
 --

 Key: CASSANDRA-4171
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4171
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
Affects Versions: 1.1.0
Reporter: paul cannon
Assignee: paul cannon
  Labels: cql3
 Fix For: 1.1.0


 running the following with cql3:
 {noformat}
 CREATE TABLE test (foo text PRIMARY KEY) WITH default_validation=timestamp;
 ALTER TABLE test WITH default_validation=int;
 {noformat}
 does not actually change the default validation type of the CF. It does under 
 cql2.
 No error is thrown. Some properties *can* be successfully changed using ALTER 
 WITH, such as comment and gc_grace_seconds, but I haven't tested all of them. 
 It seems probable that default_validation is the only problematic one, since 
 it's the only (changeable) property which accepts CQL typenames.





[jira] [Resolved] (CASSANDRA-4065) Bogus MemoryMeter liveRatio calculations

2012-04-18 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4065.
---

Resolution: Fixed

sounds reasonable to me.  committed, thanks!

 Bogus MemoryMeter liveRatio calculations
 

 Key: CASSANDRA-4065
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4065
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
Reporter: Daniel Doubleday
Assignee: Daniel Doubleday
Priority: Minor
 Fix For: 1.1.0


 I get strange cfs.liveRatios.
 A couple of mem meter runs seem to calculate bogus results: 
 {noformat}
 Tue 09:14:48 dd@blnrzh045:~$ grep 'setting live ratio to maximum of 64 
 instead of' /var/log/cassandra/system.log
  WARN [MemoryMeter:1] 2012-03-20 08:08:07,253 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of Infinity
  WARN [MemoryMeter:1] 2012-03-20 08:08:09,160 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of Infinity
  WARN [MemoryMeter:1] 2012-03-20 08:08:13,274 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of Infinity
  WARN [MemoryMeter:1] 2012-03-20 08:08:22,032 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of Infinity
  WARN [MemoryMeter:1] 2012-03-20 08:12:41,057 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of 67.11787351054079
  WARN [MemoryMeter:1] 2012-03-20 08:13:50,877 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of 112.58547951925435
  WARN [MemoryMeter:1] 2012-03-20 08:15:29,021 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of 193.36945063589877
  WARN [MemoryMeter:1] 2012-03-20 08:17:50,716 Memtable.java (line 193) 
 setting live ratio to maximum of 64 instead of 348.45008340969434
 {noformat}
 Because meter runs never decrease liveRatio in Memtable (which seems strange 
 to me; if past calculations should be included for any reason, wouldn't 
 averaging make more sense?):
 {noformat}
 cfs.liveRatio = Math.max(cfs.liveRatio, newRatio);
 {noformat}
 Memtables are flushed every couple of secs:
 {noformat}
 ColumnFamilyStore.java (line 712) Enqueuing flush of 
 Memtable-BlobStore@935814661(1874540/149963200 serialized/live bytes, 202 ops)
 {noformat}
 Even though a saner liveRatio has been calculated after the bogus runs:
 {noformat}
 INFO [MemoryMeter:1] 2012-03-20 08:19:55,934 Memtable.java (line 198) 
 CFS(Keyspace='SmeetBlob', ColumnFamily='BlobStore') 
liveRatio is 64.0 (just-counted was 2.97165811895841).  calculation took 
 124ms for 58 columns
 {noformat}
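 A minimal sketch of the averaging idea suggested above (an assumption for 
 illustration, not the committed fix): blend each new measurement with the 
 previous ratio so a single bogus run cannot pin liveRatio at the cap.
 {code}
 // Hypothetical alternative to cfs.liveRatio = Math.max(cfs.liveRatio, newRatio):
 double blended = (cfs.liveRatio + newRatio) / 2;
 cfs.liveRatio = Math.min(64.0, Math.max(1.0, blended)); // keep the result within sane bounds
 {code}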





[jira] [Resolved] (CASSANDRA-4158) CLI is missing newer column family attributes

2012-04-17 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4158.
---

Resolution: Won't Fix

We're making attributes that only CQL cares about, only accessible from CQL.  
This will result in less foot-shooting.

 CLI is missing newer column family attributes
 -

 Key: CASSANDRA-4158
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4158
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.0.9, 1.1.0
Reporter: Tyler Hobbs
Priority: Minor
  Labels: cli

 The CLI doesn't support setting some of the newer column family attributes 
 when creating or updating.  For example, key_alias cannot be set.





[jira] [Resolved] (CASSANDRA-4162) nodetool disablegossip does not prevent gossip delivery of writes via already-initiated hinted handoff

2012-04-17 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4162.
---

Resolution: Invalid

Hint delivery does not depend on gossip, so I would not expect disabling gossip 
to stop an already-started delivery, nor should it.

(It *should* however stop subsequent handoff runs.)

 nodetool disablegossip does not prevent gossip delivery of writes via 
 already-initiated hinted handoff
 --

 Key: CASSANDRA-4162
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4162
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.9
 Environment: reported on IRC, believe it was a linux environment, 
 nick rhone, cassandra 1.0.8
Reporter: Robert Coli
Priority: Minor
  Labels: gossip

 This ticket derives from #cassandra, where aaron_morton and I assisted a user 
 who had run disablethrift and disablegossip and was confused as to why he was 
 still seeing writes to his node.
 Aaron and I went through a series of debugging questions, user verified that 
 there was traffic on the gossip port. His node was showing as down from the 
 perspective of other nodes, and nodetool also showed that gossip was not 
 active.
 Aaron read the code and had the user turn debug logging on. The user saw 
 Hinted Handoff messages being delivered and Aaron confirmed in the code that 
 a hinted handoff delivery session only checks gossip state when it first 
 starts. As a result, it will continue to deliver hints and disregard gossip 
 state on the target node.
 per nodetool docs
 
 disablegossip  - Disable gossip (effectively marking the node dead)
 
 I believe most people will be using disablegossip and disablethrift for 
 operational reasons, and propose that they do not expect HH delivery to 
 continue, via gossip, when they have run disablegossip.
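 For illustration, this is roughly the behaviour the reporter expected (a 
 sketch under that assumption, not current Cassandra code): re-check the 
 target's liveness before each batch of hints, so that disablegossip eventually 
 stops an in-flight delivery. deliverBatch() is a hypothetical helper.
 {code}
 while (hints.hasNext())
 {
     // target marked down (e.g. after nodetool disablegossip on that node): stop delivering
     if (!FailureDetector.instance.isAlive(target))
         break;
     deliverBatch(hints, target);
 }
 {code}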





[jira] [Resolved] (CASSANDRA-4151) Apache project branding requirements: DOAP file [PATCH]

2012-04-17 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4151.
---

Resolution: Fixed

Added the maintainer section to our existing DOAP file and left the rest alone.

(Unsure if I can just change SVNRepository tag to GitRepository. If so, that 
would be better.)

 Apache project branding requirements: DOAP file [PATCH]
 ---

 Key: CASSANDRA-4151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4151
 Project: Cassandra
  Issue Type: Improvement
Reporter: Shane Curcuru
  Labels: branding
 Attachments: doap_Cassandra.rdf


 Attached.  Re: http://www.apache.org/foundation/marks/pmcs
 See Also: http://projects.apache.org/create.html





[jira] [Resolved] (CASSANDRA-4153) Optimize truncate when snapshots are disabled or keyspace not durable

2012-04-16 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4153.
---

   Resolution: Fixed
Fix Version/s: 1.1.1
 Reviewer: jbellis
 Assignee: Christian Spriegel

Looks good to me, committed.

(We do want the lock: we're not concerned about writes-in-progress per se 
(either keeping them or discarding them is fine), but we definitely want to 
keep them consistent with their indexes, and taking out the writeLock here is 
the only way I can see to do that.)

 Optimize truncate when snapshots are disabled or keyspace not durable
 -

 Key: CASSANDRA-4153
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4153
 Project: Cassandra
  Issue Type: Improvement
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
 Fix For: 1.1.1

 Attachments: OptimizeTruncate_v1.diff


 My goal is to make truncate less IO-intensive so that my junit tests run 
 faster (as already explained in CASSANDRA-3710). I think I now have a 
 solution that does not change too much:
 I created a patch that optimizes three things within truncate:
 - Skip the whole Commitlog.forceNewSegment/discardCompletedSegments step if 
 durable_writes is disabled for the keyspace.
 - With CASSANDRA-3710 implemented, truncate does not need to flush memtables 
 to disk when snapshots are disabled.
 - Reduce the sleep interval.
 The patch works nicely for me. Applying it and disabling 
 durable_writes/autoSnapshot sped up my test suite vastly. I hope I did not 
 overlook something.
 Let me know if my patch needs cleanup. I'd be glad to change it if that means 
 the patch will get accepted.
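 A rough sketch of the optimization described above (not the attached patch; 
 the call and accessor names here are taken from the description or are 
 assumptions for illustration):
 {code}
 if (keyspaceMetadata.durableWrites)
 {
     // only worth recycling commitlog segments when writes actually hit the commitlog
     CommitLog.instance.forceNewSegment();
 }
 if (DatabaseDescriptor.isAutoSnapshot())
 {
     cfs.forceBlockingFlush();      // a snapshot needs the data on disk first
     cfs.snapshot(snapshotName);
 }
 // otherwise the memtable contents can simply be discarded along with the sstables
 {code}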





[jira] [Resolved] (CASSANDRA-4145) NullPointerException when using sstableloader with PropertyFileSnitch configured

2012-04-13 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4145.
---

Resolution: Fixed

lgtm, committed

 NullPointerException when using sstableloader with PropertyFileSnitch 
 configured
 

 Key: CASSANDRA-4145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4145
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
Affects Versions: 0.8.1
Reporter: Ji Cheng
Assignee: Ji Cheng
Priority: Minor
  Labels: bulkloader
 Fix For: 1.0.10, 1.1.0

 Attachments: 4145.txt


 I got a NullPointerException when using sstableloader on 1.0.6. The cluster 
 is using PropertyFileSnitch, and the same configuration file is used for 
 sstableloader.
 The problem is that if StorageService is initialized before 
 DatabaseDescriptor, PropertyFileSnitch will try to access 
 StorageService.instance before it finishes initialization.
 {code}
  ERROR 01:14:05,601 Fatal configuration error
 org.apache.cassandra.config.ConfigurationException: Error instantiating 
 snitch class 'org.apache.cassandra.locator.PropertyFileSnitch'.
 at 
 org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:607)
 at 
 org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:454)
 at 
 org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:306)
 at 
 org.apache.cassandra.service.StorageService.<init>(StorageService.java:187)
 at 
 org.apache.cassandra.service.StorageService.<clinit>(StorageService.java:190)
 at 
 org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:183)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:106)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:62)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown 
 Source)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
 Source)
 at java.lang.reflect.Constructor.newInstance(Unknown Source)
 at 
 org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:589)
 ... 7 more
 Caused by: java.lang.NullPointerException
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.reloadConfiguration(PropertyFileSnitch.java:170)
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.<init>(PropertyFileSnitch.java:60)
 ... 12 more
 Error instantiating snitch class 
 'org.apache.cassandra.locator.PropertyFileSnitch'.
 Fatal configuration error; unable to start server.  See log for stacktrace.
 {code}
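 One possible shape of a guard (an illustrative sketch, not necessarily what 
 the attached 4145.txt does): skip the parts of reloadConfiguration() that need 
 StorageService while its static initialization has not completed yet, as 
 happens when the snitch is constructed from sstableloader.
 {code}
 if (StorageService.instance == null)
     return; // configuration will be applied once StorageService finishes initializing
 {code}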





[jira] [Resolved] (CASSANDRA-3283) node is sending streams to itself during move operation

2012-04-13 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3283.
---

   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.1.1)
 Reviewer:   (was: thepaul)

 node is sending streams to itself during move operation
 ---

 Key: CASSANDRA-3283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: linux debian stable (squeeze), cassandra 0.8.6, java 6 
 Update 26
Reporter: Zenek Kraweznik
Priority: Minor

 I'm moving node 10.10.10.231 from 113427455640312821154458202477256070485 to 
 0.
 ring:
 Address       DC          Rack   Status  State    Load        Owns     Token
                                                                         127605887595351923798765477786913079296
 10.10.10.232  datacenter1 rack1  Up      Normal   148.7 GB    50.00%   42535295865117307932921825928971026432
 10.10.10.233  datacenter1 rack1  Up      Normal   156.77 GB   25.00%   85070591730234615865843651857942052864
 10.10.10.231  datacenter1 rack1  Up      Moving   188.75 GB   16.67%   113427455640312821154458202477256070485
 10.10.10.234  datacenter1 rack1  Up      Normal   94.89 GB    8.33%    127605887595351923798765477786913079296
 netstats from node1:
 Streaming to: /10.10.10.234
/var/lib/cassandra/data/testdb2/ChangeLog-g-30-Data.db sections=2 
 progress=0/6859992288 - 0%
/var/lib/cassandra/data/testdb2/ChangeLogIndex-g-10-Data.db sections=1 
 progress=0/15286 - 0%
/var/lib/cassandra/data/testdb/ChangeLogIndex-g-48-Data.db sections=1 
 progress=0/90 - 0%
/var/lib/cassandra/data/testdb/Testcoll-g-107-Data.db sections=2 
 progress=0/5276 - 0%
/var/lib/cassandra/data/testdb/Testcoll2-g-74-Data.db sections=2 
 progress=0/470 - 0%
/var/lib/cassandra/data/testdb/ChangeLogIndex-g-47-Data.db sections=1 
 progress=0/1156 - 0%
/var/lib/cassandra/data/testdb/Testcoll-g-106-Data.db sections=2 
 progress=0/329027714 - 0%
/var/lib/cassandra/data/testdb/ChangeLog-g-61-Data.db sections=2 
 progress=0/30212596 - 0%
/var/lib/cassandra/data/testdb3/Testcoll2-g-42-Data.db sections=2 
 progress=0/774117 - 0%
/var/lib/cassandra/data/testdb3/ChangeLogIndex-g-10-Data.db sections=1 
 progress=0/90 - 0%
 Streaming to: /10.10.10.231
/var/lib/cassandra/data/testdb2/Testcoll-g-30-Data.db sections=2 
 progress=39059456000/87950308260 - 44%
/var/lib/cassandra/data/testdb2/ChangeLog-g-30-Data.db sections=2 
 progress=0/7806077255 - 0%
/var/lib/cassandra/data/testdb2/ChangeLogIndex-g-10-Data.db sections=1 
 progress=0/15286 - 0%
/var/lib/cassandra/data/testdb3/Testcoll2-g-42-Data.db sections=2 
 progress=0/784033 - 0%
/var/lib/cassandra/data/testdb3/ChangeLogIndex-g-10-Data.db sections=1 
 progress=0/90 - 0%
/var/lib/cassandra/data/testdb/ChangeLogIndex-g-48-Data.db sections=1 
 progress=0/90 - 0%
/var/lib/cassandra/data/testdb/Testcoll-g-107-Data.db sections=2 
 progress=0/10499 - 0%
/var/lib/cassandra/data/testdb/Testcoll2-g-74-Data.db sections=2 
 progress=0/1042 - 0%
/var/lib/cassandra/data/testdb/ChangeLogIndex-g-47-Data.db sections=1 
 progress=0/1156 - 0%
/var/lib/cassandra/data/testdb/Testcoll-g-106-Data.db sections=2 
 progress=0/329965993 - 0%
/var/lib/cassandra/data/testdb/ChangeLog-g-61-Data.db sections=2 
 progress=0/24633913 - 0%
 Streaming from: /10.10.10.231
testdb: /var/lib/cassandra/data/testdb/ChangeLogIndex-g-47-Data.db 
 sections=1 progress=0/1156 - 0%
testdb: /var/lib/cassandra/data/testdb/ChangeLogIndex-g-48-Data.db 
 sections=1 progress=0/90 - 0%
testdb: /var/lib/cassandra/data/testdb/ChangeLog-g-61-Data.db sections=2 
 progress=0/24633913 - 0%
testdb3: /var/lib/cassandra/data/testdb3/Testcoll2-g-42-Data.db sections=2 
 progress=0/784033 - 0%
testdb: /var/lib/cassandra/data/testdb/Testcoll2-g-74-Data.db sections=2 
 progress=0/1042 - 0%
testdb3: /var/lib/cassandra/data/testdb3/ChangeLogIndex-g-10-Data.db 
 sections=1 progress=0/90 - 0%
testdb2: /var/lib/cassandra/data/testdb2/ChangeLog-g-30-Data.db sections=2 
 progress=0/7806077255 - 0%
testdb2: /var/lib/cassandra/data/testdb2/ChangeLogIndex-g-10-Data.db 
 sections=1 progress=0/15286 - 0%
testdb: /var/lib/cassandra/data/testdb/Testcoll-g-106-Data.db sections=2 
 progress=0/329965993 - 0%
testdb2: /var/lib/cassandra/data/testdb2/Testcoll-g-30-Data.db sections=2 
 progress=39059456000/87950308260 - 44%
testdb: /var/lib/cassandra/data/testdb/Testcoll-g-107-Data.db sections=2 
 progress=0/10499 - 0%
 Pool NameActive   Pending  Completed
 Commandsn/a 0 23
 

[jira] [Resolved] (CASSANDRA-4137) QUORUM Multiget RangeSliceQuery causes unnecessary writes to read entries

2012-04-11 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4137.
---

   Resolution: Duplicate
Fix Version/s: (was: 0.8.11)

fixed in 1.0.8 for CASSANDRA-3843

 QUORUM Multiget RangeSliceQuery causes unnecessary writes to read entries
 -

 Key: CASSANDRA-4137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.9
Reporter: Thibaut

 From the mailing list:
 I created a new test keyspace and added 10 000 keys to it. The cluster has 3 
 machines, RF=3, read repair disabled (enabling it didn't change anything). 
 The keyspace doesn't contain any tombstones. No keys were deleted.
 When I fetch a rangeslice through hector and set the consistency level to 
 quorum, according to cfstats (and also to the output files on the hd), 
 cassandra seems to execute a write request for each read I execute. The write 
 count in cfstats is increased when I execute the rangeslice function over the 
 same range again and again (without saving anything at all).
 If I set the consistency level to ONE or ALL, no writes are executed.
 I checked the writes on one machine. They increased by 2300 for each 
 iteration over the 10 000 keys. I didn't check, but this probably corresponds 
 to the number of keys for which the machine is responsible.
 Code:
 {code}
 Keyspace ks = getConnection(cluster, consistencylevel);
 RangeSlicesQuery<String, String, V> rangeSlicesQuery =
     HFactory.createRangeSlicesQuery(ks, StringSerializer.get(), StringSerializer.get(), s);
 rangeSlicesQuery.setColumnFamily(columnFamily);
 rangeSlicesQuery.setColumnNames(column);
 rangeSlicesQuery.setKeys(start, end);
 rangeSlicesQuery.setRowCount(maxrows);
 QueryResult<OrderedRows<String, String, V>> result = rangeSlicesQuery.execute();
 return result.get();
 {code}





[jira] [Resolved] (CASSANDRA-3883) CFIF WideRowIterator only returns batch size columns

2012-04-11 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3883.
---

Resolution: Fixed

committed

 CFIF WideRowIterator only returns batch size columns
 

 Key: CASSANDRA-3883
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3883
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
Reporter: Brandon Williams
Assignee: Jonathan Ellis
 Fix For: 1.1.0

 Attachments: 3883-v1.txt, 3883-v2.txt, 3883-v3.txt


 Most evident with the word count, where there are 1250 'word1' items in two 
 rows (1000 in one, 250 in another) and it counts 198 with the batch size set 
 to 99.
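 For context, a sketch of the paging behaviour the iterator needs (illustrative 
 only, not the committed patch): keep slicing as long as a full batch comes 
 back, resuming from the last column seen. fetchSlice() and process() are 
 hypothetical helpers, and a real implementation must avoid double-counting the 
 boundary column it resumes from.
 {code}
 ByteBuffer start = ByteBufferUtil.EMPTY_BYTE_BUFFER;
 List<IColumn> page;
 do
 {
     page = fetchSlice(rowKey, start, batchSize);  // next slice of at most batchSize columns
     process(page);
     if (!page.isEmpty())
         start = page.get(page.size() - 1).name(); // resume from the last column seen
 }
 while (page.size() == batchSize);                 // a short page means the row is exhausted
 {code}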





[jira] [Resolved] (CASSANDRA-4118) ConcurrentModificationException in ColumnFamily.updateDigest(ColumnFamily.java:294) (cassandra 1.0.8)

2012-04-10 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4118.
---

   Resolution: Invalid
Fix Version/s: (was: 1.0.10)
   (was: 1.1.0)
 Assignee: (was: Vijay)

That makes sense to me.

Resolving as invalid, unless someone can reproduce with the Thrift API.

 ConcurrentModificationException in 
 ColumnFamily.updateDigest(ColumnFamily.java:294)  (cassandra 1.0.8)
 --

 Key: CASSANDRA-4118
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4118
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.8
 Environment: two nodes, replication factor=2
Reporter: Zklanu Ryś

 Sometimes when reading data I receive it without any exception, but I can 
 see in the Cassandra logs that there is an error:
 ERROR [ReadRepairStage:58] 2012-04-05 12:04:35,732 
 AbstractCassandraDaemon.java (line 139) Fatal exception in thread 
 Thread[ReadRepairStage:58,5,main]
 java.util.ConcurrentModificationException
 at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
 at java.util.AbstractList$Itr.next(AbstractList.java:343)
 at 
 org.apache.cassandra.db.ColumnFamily.updateDigest(ColumnFamily.java:294)
 at org.apache.cassandra.db.ColumnFamily.digest(ColumnFamily.java:288)
 at 
 org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:102)
 at 
 org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:30)
 at 
 org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.runMayThrow(ReadCallback.java:227)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)





[jira] [Resolved] (CASSANDRA-3690) Streaming CommitLog backup

2012-04-09 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3690.
---

Resolution: Fixed

committed w/ some final improvements to yaml comments

 Streaming CommitLog backup
 --

 Key: CASSANDRA-3690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3690
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-CASSANDRA-3690-v2.patch, 
 0001-CASSANDRA-3690-v4.patch, 0001-CASSANDRA-3690-v5.patch, 
 0001-Make-commitlog-recycle-configurable.patch, 
 0002-support-commit-log-listener.patch, 0003-helper-jmx-methods.patch, 
 0004-external-commitlog-with-sockets.patch, 
 0005-cmmiting-comments-to-yaml.patch, 3690-v6.txt


 Problems with the current SSTable backups:
 1) The current backup doesn't allow us to restore to a point in time (within 
 an SSTable).
 2) The current SSTable implementation needs the backup to read from the 
 filesystem, and hence adds IO on the disks during normal operation.
 3) In 1.0 we removed the per-CF flush interval and size settings that 
 determined when a flush would be triggered.
   For some use cases with fewer writes it becomes 
 increasingly difficult to time the backup right.
 4) Use cases that need the data externally (non-Cassandra BI) need it at 
 regular intervals rather than waiting for longer or unpredictable intervals.
 Disadvantages of the new solution:
 1) Overhead in processing the mutations during the recovery phase.
 2) More complicated solution than just copying the file to the archive.
 Additional advantages:
 Online and offline restore.
 Close-to-live incremental backup.
 Note: If the listener agent gets restarted, it is the agent's responsibility 
 to stream the files that were missed or incomplete.
 There are 3 options in the initial implementation:
 1) Backup - once a socket is connected we will switch the commit log and 
 send new updates via the socket.
 2) Stream - takes the absolute path of a file, reads the file, and sends the 
 updates via the socket.
 3) Restore - receives the serialized bytes and applies the mutations.
 Side note (not related to this patch as such): the agent which will take the 
 incremental backups is planned to be open-sourced soon (name: Priam).





[jira] [Resolved] (CASSANDRA-4129) Cannot create keyspace with specific keywords through cli

2012-04-06 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4129.
---

Resolution: Invalid

cli keywords must be quoted

 Cannot create keyspace with specific keywords through cli
 -

 Key: CASSANDRA-4129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4129
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.8
Reporter: Manoj Kanta Mainali
Priority: Minor

 Keyspaces cannot be created through the CLI when the keyspace name is a CLI 
 keyword such as 'keyspace', 'family', etc. Even surrounding the keyspace name 
 with quotation marks does not solve the problem. However, such keyspaces can 
 be created through other clients such as Hector.
 This is similar to CASSANDRA-3195, in which the column families could not be 
 created. Similar to the solution of CASSANDRA-3195, using String keyspaceName 
 = CliUtil.unescapeSQLString(statement.getChild(0).getText()) in 
 executeAddKeySpace would solve the problem.





[jira] [Resolved] (CASSANDRA-3932) schema IAE and read path NPE after cluster re-deploy

2012-04-04 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3932.
---

   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.1.0)
 Assignee: (was: Pavel Yaskevich)

I'm pretty sure that whatever was causing this is not relevant after the 
CASSANDRA-3792 rewrite of schema serialization.

 schema IAE and read path NPE after cluster re-deploy
 

 Key: CASSANDRA-3932
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3932
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
Reporter: Peter Schuller

 On the same cluster (but later) as the one where we observed CASSANDRA-3931, 
 we were running some performance/latency testing: ycsb reads, plus a separate 
 little python client. All was fine.
 I then did a fast re-deploy for changed GC settings, which would have led to 
 a complete cluster restart almost simultaneously (triggering races?). When I 
 re-ran my Python client, I suddenly got an error saying Keyspace1 did not 
 exist. On re-run I started getting timeouts. Looking at the endpoints of the 
 key that I was getting a timeout for, the first error ever seen is:
 {code}
 java.lang.IllegalArgumentException: Unknown ColumnFamily Standard1 in 
 keyspace Keyspace1
 at org.apache.cassandra.config.Schema.getComparator(Schema.java:234)
 at 
 org.apache.cassandra.db.ColumnFamily.getComparatorFor(ColumnFamily.java:312)
 at 
 org.apache.cassandra.db.ReadCommand.getComparator(ReadCommand.java:94)
 at 
 org.apache.cassandra.db.SliceByNamesReadCommand.<init>(SliceByNamesReadCommand.java:44)
 at 
 org.apache.cassandra.db.SliceByNamesReadCommandSerializer.deserialize(SliceByNamesReadCommand.java:113)
 at 
 org.apache.cassandra.db.SliceByNamesReadCommandSerializer.deserialize(SliceByNamesReadCommand.java:81)
 at 
 org.apache.cassandra.db.ReadCommandSerializer.deserialize(ReadCommand.java:134)
 at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:53)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 And later in the read path NPE:s like these:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.Table.createReplicationStrategy(Table.java:321)
 at org.apache.cassandra.db.Table.<init>(Table.java:277)
 at org.apache.cassandra.db.Table.open(Table.java:120)
 at org.apache.cassandra.db.Table.open(Table.java:103)
 at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:54)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}





[jira] [Resolved] (CASSANDRA-3968) LoadNewSSTables can conflict with old incremental backups

2012-04-04 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3968.
---

   Resolution: Duplicate
Fix Version/s: (was: 1.1.0)

done in CASSANDRA-3967

 LoadNewSSTables can conflict with old incremental backups
 -

 Key: CASSANDRA-3968
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3968
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Priority: Minor

 If we load a new sstable from the filesystem with a generation in our past, 
 the incremental backup hard link may conflict with an existing one.





[jira] [Resolved] (CASSANDRA-3770) Timestamp datatype in CQL gets weird value?

2012-04-04 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3770.
---

   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.2)

 Timestamp datatype in CQL gets weird value?
 ---

 Key: CASSANDRA-3770
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3770
 Project: Cassandra
  Issue Type: New Feature
 Environment: MacOS, Java
Reporter: Jawahar Prasad JP
Priority: Minor
  Labels: cql, java

 Hi..
 I have created a columnfamily through CQL, having datatype as timestamp,
 I generate timestamp like this in Java:
 System.currentTimeMillis()
 (or)
 System.currentTimeMillis()*1000
 When I see the output through CQL, I get the data like below:
 1.32725062505e+12
 Also, I am not able to use any operators against this (like > '01 January 
 2012', etc.); I get the below error:
 No indexed columns present in by-columns clause with equals operator
 But I have created an index for the timestamp column.
 Any help ?
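 For context: a CQL timestamp column stores milliseconds since the epoch, so 
 the value shown above is simply the System.currentTimeMillis() result rendered 
 in scientific notation, while multiplying by 1000 yields microseconds, which a 
 timestamp column would interpret as a date far in the future.
 {code}
 long millis = System.currentTimeMillis();         // ~1.327e12 in January 2012, matching the value shown
 long micros = System.currentTimeMillis() * 1000;  // microseconds: suitable for a write timestamp, not a timestamp column value
 {code}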





[jira] [Resolved] (CASSANDRA-3424) Selecting just the row_key returns nil instead of just the row_key

2012-04-02 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3424.
---

Resolution: Incomplete

Re-resolving as incomplete since CASSANDRA-3982 is open for the more general 
problem of range ghosts.

 Selecting just the row_key returns nil instead of just the row_key
 --

 Key: CASSANDRA-3424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: Kelley Reynolds
Assignee: Jonathan Ellis
Priority: Minor
  Labels: cql
 Fix For: 1.0.3

 Attachments: 3424-v2.txt, CASSANDRA-3424.patch


 CREATE KEYSPACE CassandraCQLTestKeyspace WITH 
 strategy_class='org.apache.cassandra.locator.SimpleStrategy' AND 
 strategy_options:replication_factor=1
 USE CassandraCQLTestKeyspace
 CREATE COLUMNFAMILY row_key_validation_cf_ascii (id ascii PRIMARY KEY, 
 test_column text)
 INSERT INTO row_key_validation_cf_ascii (id, test_column) VALUES ('test 
 string', 'test')
 # Works as expected
 SELECT * FROM row_key_validation_cf_ascii WHERE id = 'test string'
 # Returns an empty result, unexpected
 SELECT id FROM row_key_validation_cf_ascii WHERE id = 'test string'





[jira] [Resolved] (CASSANDRA-3819) Cannot restart server after making schema changes to composite CFs

2012-04-02 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3819.
---

   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.0.10)
 Assignee: (was: Sylvain Lebresne)

Resolving as cantrepro.  Carlo/Huy, if you or anyone else can give us a test 
case to work on, please re-open.

 Cannot restart server after making schema changes to composite CFs
 --

 Key: CASSANDRA-3819
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3819
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.6, 1.0.7
 Environment: Ubuntu 11.0.4
Reporter: Huy Le

 This JIRA is for the issue discussed in this thread: 
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cannot-start-cassandra-node-anymore-tp7150978p7150978.html.
 We were using version 1.0.6.  We added a new keyspace using the built-in 
 composite data type.  We then decided to change the schema, specifically just 
 the CF names, so we dropped the keyspace.  We recreated the keyspace with 
 different CF names.
 There was a lot of uncommitted data in the commit logs, dating back to before 
 the original keyspace was created.  When we restarted the server, it failed 
 while reading the commit logs and stopped.  Here is a snippet of the stack 
 trace:
 {code}
 -3881-11e1-ac7f-12313d23ead3:true:4@1326223353559001,])}
 DEBUG 18:02:01,057 Reading mutation at 66336992
 DEBUG 18:02:01,058 replaying mutation for 
 Springpad.696d6167652d7363616c65722d6d657461: 
 {ColumnFamily(CassandraOrderedQueue 
 [0,eb321490-3881-11e1-ac7f-12313d23ead3:true:4@132622335356,])}
 DEBUG 18:02:01,058 Reading mutation at 66337118
 DEBUG 18:02:01,058 replaying mutation for 
 Springpad.737072696e674d6f64656c44617461626173652d6d657461: 
 {ColumnFamily(CassandraOrderedQueue 
 [0,80dc0cd0-3bc0-11e1-83a8-12313d23ead3:false:8@1326223386668000,])}
 DEBUG 18:02:01,058 Reading mutation at 66337255
 DEBUG 18:02:01,058 replaying mutation for 
 system.38363233616337302d336263302d313165312d303030302d323366623834646463346633:
  {ColumnFamily(Schema 
 [Avro/Schema:false:2725@1326223386807,Backups:false:431@1326223386807,Springpad:false:10814@1326223386807,SpringpadGraph:false:2931@1326223386807,])}
 DEBUG 18:02:01,059 Reading mutation at 66354352
 DEBUG 18:02:01,059 replaying mutation for 
 system.4d6967726174696f6e73204b6579: {ColumnFamily(Migrations 
 [8623ac70-3bc0-11e1--23fb84ddc4f3:false:23728@1326223386812,])}
 DEBUG 18:02:01,059 Reading mutation at 66378184
 DEBUG 18:02:01,059 replaying mutation for 
 system.4c617374204d6967726174696f6e: {ColumnFamily(Schema [Last 
 Migration:false:16@1326223386812,])}
 DEBUG 18:02:01,059 Reading mutation at 66378302
  INFO 18:02:01,060 Finished reading 
 /mnt/cassandra/commitlog/CommitLog-1325861435420.log
 ERROR 18:02:01,061 Exception encountered during startup
 java.lang.IllegalArgumentException
 at java.nio.Buffer.limit(Buffer.java:247)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:57)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:66)
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.getString(AbstractCompositeType.java:129)
 at org.apache.cassandra.db.Column.getString(Column.java:250)
 at 
 org.apache.cassandra.db.marshal.AbstractType.getColumnsString(AbstractType.java:137)
 at 
 org.apache.cassandra.db.ColumnFamily.toString(ColumnFamily.java:280)
 at org.apache.commons.lang.ObjectUtils.toString(ObjectUtils.java:241)
 at org.apache.commons.lang.StringUtils.join(StringUtils.java:3073)
 at org.apache.commons.lang.StringUtils.join(StringUtils.java:3133)
 at 
 org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:301)
 at 
 org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:172)
 at 
 org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:215)
 at 
 org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:356)
 at 
 org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:107)
 Exception encountered during startup: null 
 {code}
 Sample original CF schema:
 {code}
 create column family InEdges
   with column_type = 'Standard'
   and comparator = 
 'CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'UTF8Type'
   and rows_cached = 0.0
   and row_cache_save_period = 0
   and 

[jira] [Resolved] (CASSANDRA-4064) cfstats should include the cfId

2012-04-02 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4064.
---

   Resolution: Won't Fix
Fix Version/s: (was: 1.0.10)
 Assignee: (was: Brandon Williams)

Wontfixing this since 1.1 should be out soon and hopefully make UCFE obsolete.

 cfstats should include the cfId
 ---

 Key: CASSANDRA-4064
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4064
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams
Priority: Minor

 Specifically, when troubleshooting the dreaded 
 "UnserializableColumnFamilyException: Couldn't find cfId=1001" type of error 
 (and the schema is in agreement), it would be really useful to easily see the 
 cfIds so you know where the problem is (or is not).





[jira] [Resolved] (CASSANDRA-4107) fix broken link in cassandra-env.sh

2012-04-02 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4107.
---

   Resolution: Fixed
Fix Version/s: 1.1.0
   1.0.10
   0.8.11
 Assignee: Ilya Shipitsin

committed, thanks!

 fix broken link in cassandra-env.sh
 ---

 Key: CASSANDRA-4107
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4107
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ilya Shipitsin
Assignee: Ilya Shipitsin
 Fix For: 0.8.11, 1.0.10, 1.1.0


 link at blogs.sun.com leads to 404
 {noformat}
 --- conf/cassandra-env.sh.orig  Mon Apr  2 20:26:42 2012
 +++ conf/cassandra-env.sh   Mon Apr  2 20:27:30 2012
 @@ -186,7 +186,7 @@
  # JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<public name>"
  #
  # see
 -# http://blogs.sun.com/jmxetc/entry/troubleshooting_connection_problems_in_jconsole
 +# https://blogs.oracle.com/jmxetc/entry/troubleshooting_connection_problems_in_jconsole
  # for more on configuring JMX through firewalls, etc. (Short version:
  # get it working with no firewall first.)
  JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
 {noformat}





[jira] [Resolved] (CASSANDRA-3635) Throttle validation separately from other compaction

2012-04-02 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3635.
---

   Resolution: Incomplete
Fix Version/s: (was: 1.0.10)
 Assignee: (was: Sylvain Lebresne)

bq. I think we should get some feedback of the "here's what my workload is 
like and this diminishes my repair pain" nature before committing this

Resolving as incomplete in the meantime.

For the record, I think incremental repair as proposed in CASSANDRA-3912 is a 
more promising approach overall.


 Throttle validation separately from other compaction
 

 Key: CASSANDRA-3635
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3635
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Sylvain Lebresne
Priority: Minor
  Labels: repair
 Attachments: 0001-separate-validation-throttling.patch


 Validation compaction is fairly resource-intensive. It is possible to 
 throttle it along with other compaction, but there are cases where you really 
 want to throttle it rather aggressively without necessarily having minor 
 compactions throttled that much. The goal is to (optionally) allow setting a 
 separate throttling value for validation.
 PS: I'm not pretending this will solve every repair problem or anything.





[jira] [Resolved] (CASSANDRA-3776) Streaming task hangs forever during repair after unexpected connection reset by peer

2012-03-29 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3776.
---

   Resolution: Duplicate
Fix Version/s: (was: 1.0.9)
 Assignee: (was: Yuki Morishita)

WFM, marking duplicate.

 Streaming task hangs forever during repair after unexpected connection reset 
 by peer
 

 Key: CASSANDRA-3776
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3776
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
 Environment: Windows Server 2008 R2
 Sun Java 7u2 64bit
Reporter: Viktor Jevdokimov
Priority: Minor

 During streaming (repair) the stream-receiving node threw exceptions:
 ERROR [Streaming:1] 2012-01-24 10:17:03,828 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[Streaming:1,1,main]
 java.lang.RuntimeException: java.net.SocketException: Connection reset by 
 peer: socket write error
   at 
 org.apache.cassandra.utils.FBUtilities.unchecked(FBUtilities.java:689)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   at java.lang.Thread.run(Unknown Source)
 Caused by: java.net.SocketException: Connection reset by peer: socket write 
 error
   at java.net.SocketOutputStream.socketWrite0(Native Method)
   at java.net.SocketOutputStream.socketWrite(Unknown Source)
   at java.net.SocketOutputStream.write(Unknown Source)
   at 
 com.ning.compress.lzf.LZFChunk.writeCompressedHeader(LZFChunk.java:77)
   at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:132)
   at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
   at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
   at 
 org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:181)
   at 
 org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:145)
   at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   ... 3 more
 ERROR [Streaming:1] 2012-01-24 10:17:03,891 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[Streaming:1,1,main]
 java.lang.RuntimeException: java.net.SocketException: Connection reset by 
 peer: socket write error
   at 
 org.apache.cassandra.utils.FBUtilities.unchecked(FBUtilities.java:689)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   at java.lang.Thread.run(Unknown Source)
 Caused by: java.net.SocketException: Connection reset by peer: socket write 
 error
   at java.net.SocketOutputStream.socketWrite0(Native Method)
   at java.net.SocketOutputStream.socketWrite(Unknown Source)
   at java.net.SocketOutputStream.write(Unknown Source)
   at 
 com.ning.compress.lzf.LZFChunk.writeCompressedHeader(LZFChunk.java:77)
   at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:132)
   at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
   at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
   at 
 org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:181)
   at 
 org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:145)
   at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   ... 3 more
 After which streaming hung forever.
 A few seconds later the sending node had an exception (may not be related):
 ERROR [Thread-17224] 2012-01-24 10:17:07,817 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[Thread-17224,5,main]
 java.lang.ArrayIndexOutOfBoundsException
 Other than that, the nodes behave normally, communicating with each other.





[jira] [Resolved] (CASSANDRA-3402) Runtime exception thrown periodically under load

2012-03-27 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3402.
---

   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.0.9)
 Assignee: (was: Sylvain Lebresne)

I haven't seen anyone else hit this either. I'm going to guess it was fixed in 
one of the 1.0 maintenance releases. Please feel free to re-open if you hit 
this on 1.0.8+.

 Runtime exception thrown periodically under load
 

 Key: CASSANDRA-3402
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3402
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
 Environment: Cassandra 1.0
 Red Hat Enterprise Linux Server 6.0
Reporter: Andy Stec
 Attachments: system.log.gz


 The exception listed below is thrown periodically.  We're using the Thrift 
 interface for C++.  Jonathan Ellis requested that we open a bug for this.
 ERROR [ReadStage:1761] 2011-10-25 12:17:16,088 AbstractCassandraDaemon.java 
 (line 133) Fatal exception in thread Thread[ReadStage:1761,5,main]
 java.lang.RuntimeException: error reading 5 of 5
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:40)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:107)
 at 
 org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:145)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:124)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:116)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:144)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:225)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:61)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1297)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.cacheRow(ColumnFamilyStore.java:1128)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1157)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1114)
 at org.apache.cassandra.db.Table.getRow(Table.java:388)
 at 
 org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:58)
 at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:62)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: 
 invalid column name length 0
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:89)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:82)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:72)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:79)
 ... 24 more





[jira] [Resolved] (CASSANDRA-3077) Support TTL option to be set for column family

2012-03-27 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3077.
---

Resolution: Duplicate

duplicate of CASSANDRA-3974

 Support TTL option to be set for column family
 --

 Key: CASSANDRA-3077
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3077
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Affects Versions: 0.8.4
Reporter: Aleksey Vorona
Priority: Minor

 Use case: I want one of my CFs not to store any data older than two months. 
 It is a notifications CF which is of no interest to the user.
 Currently I am setting TTL with each insert in the CF, but since it is a 
 constant it makes sense to me to have it configured in the CF definition so it 
 applies automatically to all rows in the CF.
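
 For reference, the per-insert workaround mentioned above looks roughly like 
 this in CQL. This is only a sketch: the column family and column names are 
 made up, it assumes a CQL version that supports USING TTL, and 5184000 
 seconds is about two months.
 {code}
 INSERT INTO notifications (KEY, body)
   VALUES ('user42', 'some notification text')
   USING TTL 5184000;
 {code}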

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4072) Clean up DataOutputBuffer

2012-03-27 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4072.
---

   Resolution: Fixed
Fix Version/s: 1.1.1

I was thinking we should avoid the cast since that is not free at runtime, but 
getData and getLength are typically only called once so you're right, that's 
premature optimization.

Committed w/ that change and the comment reformat.

 Clean up DataOutputBuffer
 -

 Key: CASSANDRA-4072
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4072
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.1

 Attachments: 4072.txt


 The DataOutputBuffer/OutputBuffer split is unnecessarily baroque.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3423) Log configuration on startup for troubleshooting

2012-03-27 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3423.
---

   Resolution: Won't Fix
Fix Version/s: (was: 1.0.9)

This doesn't add a whole lot of value over "please attach your cassandra.yaml".

(Schema is not available there, but is less frequently needed, and is available 
via cli or cqlsh.  Finally, including 100s or 1000s of CF schema in the log on 
startup is too noisy.)

 Log configuration on startup for troubleshooting
 

 Key: CASSANDRA-3423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3423
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
  Labels: lhf

 It would help troubleshooting if we logged pertinent details about server and 
 CF configuration on startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4091) multiget_slice thrift interface always returns empty list in erlang

2012-03-27 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4091.
---

Resolution: Not A Problem

I hate to say it's not our problem, but this is a Thrift bug (unless, I note 
for completeness, it's a bug in your client code).  Our Thrift bindings are 
100% autogenerated, so either the Thrift erlang support library or (less 
likely) IDL compiler must be doing something wrong.

On the bright side, I believe there is an active erlang maintainer for Thrift.

 multiget_slice thrift interface always returns empty list in erlang
 ---

 Key: CASSANDRA-4091
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4091
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.6, 1.0.8
 Environment: OS: tried on os x lion and fedora 16
 thrift: 0.8.0
 cassandra: tried apache cassandra 1.0.6 and datastax 1.0.8
 erlang: R14B04
Reporter: varnitk

 multiget_slice doesn't work in erlang and always returns an empty list, 
 however multiget_count does for the same set of keys. Sample code:
 Keys = [key1, key2],
 ColumnParent = #columnParent{column_family=ColumnFamily}, 
   
 SliceRange = #sliceRange{start=<<>>, finish=<<>>, reversed=false, 
 count=2147483647},
 SlicePredicate = #slicePredicate{slice_range=SliceRange, 
 column_names=undefined},
 {ok, Conn} = thrift_client_util:new(Host, Port, cassandra_thrift, [{framed, 
 true}]), ok,
 {Conn2, {ok, ok}} = thrift_client:call(Conn, set_keyspace, [Keyspace]),
 {NewCon, Response} = thrift_client:call(Conn2, multiget_slice, [Keys, 
 ColumnParent, SlicePredicate, 1]),
 Response: {ok, []}
 Please fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4023) Improve BloomFilter deserialization performance

2012-03-26 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4023.
---

Resolution: Fixed

committed.

(I note, for the record, that my original 4023.txt patch is unnecessary because 
we're already using a buffered inputstream in loadBloomFilter.)

 Improve BloomFilter deserialization performance
 ---

 Key: CASSANDRA-4023
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4023
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.1
Reporter: Joaquin Casares
Assignee: Yuki Morishita
Priority: Minor
  Labels: datastax_qa
 Fix For: 1.0.9, 1.1.0

 Attachments: 4023.txt, cassandra-1.0-4023-v2.txt, 
 cassandra-1.0-4023-v3.txt, trunk-4023.txt


 The difference of startup times between a 0.8.7 cluster and 1.0.7 cluster 
 with the same amount of data is 4x greater in 1.0.7.
 It seems as though 1.0.7 loads the BloomFilter through a series of reading 
 longs out in a multithreaded process while 0.8.7 reads the entire object.
 Perhaps we should update the new BloomFilter to do reading in batch as well?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3469) More fine-grained request statistics

2012-03-22 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3469.
---

   Resolution: Won't Fix
Fix Version/s: (was: 1.2)
 Assignee: (was: Yuki Morishita)

resolving as wontfix for now.  will revise if necessary depending on how 1123 
goes.

 More fine-grained request statistics
 

 Key: CASSANDRA-3469
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3469
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor

 It would be useful to split the CFS stats up by query type.  slice vs named 
 vs range vs index, to start with (right now we don't track range scans at 
 all), but also at the prepared statement level as it were:
 {{SELECT x FROM foo WHERE key = ?}} would be one query no matter what the ? 
 is, but {{SELECT y FROM foo WHERE key = ?}} would be different.  {{SELECT 
 x..y FROM foo WHERE key = ?}} would be another, as would {{SELECT x FROM foo 
 WHERE key = ? AND bar= ?}}.  (But {{SELECT x FROM foo WHERE bar = ? AND key = 
 ?}} would be identical to the former, of course.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4069) hinted-handoff doesn't work all right at some time .

2012-03-21 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4069.
---

Resolution: Duplicate

Hints are handled independently by each node. It sounds like A wasn't down long 
enough for the other nodes to mark it down; this was fixed in CASSANDRA-3554

 hinted-handoff  doesn't work all right at some time .
 -

 Key: CASSANDRA-4069
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4069
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: MaHaiyang

 4 nodes (A, B, C, D) in the cluster.
 A is down, then B, C, and D write hints for A.
 When A comes back, not all nodes (e.g., only B) deliver their hints to A.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4023) Improve BloomFilter deserialization performance

2012-03-21 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4023.
---

Resolution: Fixed
  Reviewer: jbellis  (was: j.casares)

committed, thanks!

 Improve BloomFilter deserialization performance
 ---

 Key: CASSANDRA-4023
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4023
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.1
Reporter: Joaquin Casares
Assignee: Yuki Morishita
Priority: Minor
  Labels: datastax_qa
 Fix For: 1.0.9, 1.1.0

 Attachments: 4023.txt, cassandra-1.0-4023-v2.txt, 
 cassandra-1.0-4023-v3.txt


 The difference of startup times between a 0.8.7 cluster and 1.0.7 cluster 
 with the same amount of data is 4x greater in 1.0.7.
 It seems as though 1.0.7 loads the BloomFilter through a series of reading 
 longs out in a multithreaded process while 0.8.7 reads the entire object.
 Perhaps we should update the new BloomFilter to do reading in batch as well?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3468) SStable data corruption in 1.0.x

2012-03-21 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3468.
---

Resolution: Duplicate

Terry's team traced this to buggy behavior of posix_fadvise in their 
environment.  CASSANDRA-3878 is open to make posix_fadvise optional.

 SStable data corruption in 1.0.x
 

 Key: CASSANDRA-3468
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3468
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
 Environment: RHEL 6 running Cassandra 1.0.x.
Reporter: Terry Cumaranatunge
 Attachments: 3468-assert.txt


 We have noticed several instances of sstable corruptions in 1.0.x. This has 
 occurred in 1.0.0-rcx and 1.0.0 and 1.0.1. It has happened on multiple nodes 
 and multiple hosts with different disks, so this is the reason the software 
 is suspected at this time. The file system used is XFS, but no resets or any 
 type of failure scenarios have been run to create the problem. We were 
 basically running under load and every so often, we see that the sstable gets 
 corrupted and compaction stops on that node.
 I will attach the relevant sstable files if it lets me do that when I create 
 this ticket.
 ERROR [CompactionExecutor:23] 2011-10-27 11:14:09,309 PrecompactedRow.java 
 (line 119) Skipping row DecoratedKey(128013852116656632841539411062933532114, 
 37303730303138313533) in 
 /var/lib/cassandra/data/MSA/participants-h-8688-Data.db
 java.io.EOFException
 at java.io.RandomAccessFile.readFully(RandomAccessFile.java:399)
 at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
 at 
 org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:388)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:350)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:96)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
 at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:143)
 at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:231)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:115)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:102)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:127)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:102)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:87)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:116)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:99)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
 at 
 com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:179)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:47)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:131)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:114)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 This was Sylvain's analysis:
 I don't have much better news. Basically it seems the last 2 MB of the file 
 are complete garbage (which also explains the mmap error, btw). And given where 
 the corruption actually starts, it suggests that it's either a very low-level 
 bug in our file writer code that starts writing bad data at some point for 
 some reason, or it's corruption not related to Cassandra. But given that, a 
 Cassandra bug sounds fairly unlikely.
 You said that you saw that 

[jira] [Resolved] (CASSANDRA-3994) Allow indexes for comparisons other than equal

2012-03-19 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3994.
---

Resolution: Not A Problem

bq. ThriftValidation.validateKeyRange uses 
ThriftValidation.validateFilterClauses (Line 507) to validate range.row_filter

But that call ignores the return value, which is what indicates that the filter 
involves an indexed column with an EQ clause.  The return value is only 
checked for the path used by get_indexed_slices, as I said.  For 
get_range_slices we only make use of validateFilterClause to make sure that the 
column names and values are appropriate for their declared types.


 Allow indexes for comparisons other than equal
 --

 Key: CASSANDRA-3994
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3994
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dmitry Petrashko
 Attachments: Validation_fix_for_filters_other_than_EQuals-v2.patch, 
 Validation_fix_for_filters_other_than_EQuals.patch


 As of now, validation marks filters with operations other than equal as 
 invalid.
 This also gives initial support for indexes other than KEYS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4055) Hector RetryService drop host

2012-03-15 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4055.
---

Resolution: Fixed

The Hector client is maintained at https://github.com/rantav/hector.

 Hector RetryService drop host
 -

 Key: CASSANDRA-4055
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4055
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
 Environment: This bug is in Hector code.
 If there is exception in addCassandraHost() before adding host to hostPools, 
 since addCassandraHost does not throw exception, the host will be removed 
 from downedHostQueue, and the host will be gone forever.
 if (downedHostQueue.contains(cassandraHost) && 
 verifyConnection(cassandraHost)) {
   connectionManager.addCassandraHost(cassandraHost);
   downedHostQueue.remove(cassandraHost);
   return;
 }
Reporter: Danny Wang
Priority: Critical



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4039) CQL3 ALTER should deal with columns, not old thrift metadata

2012-03-13 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4039.
---

   Resolution: Not A Problem
Fix Version/s: (was: 1.1.0)
 Reviewer:   (was: slebresne)
 Assignee: (was: paul cannon)

 CQL3 ALTER should deal with columns, not old thrift metadata
 

 Key: CASSANDRA-4039
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4039
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.1.0
Reporter: Jonathan Ellis
  Labels: cql, cql3

 key alias, default validator, column metadata should not be modified as such; 
 rather, we should alter by column name.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3994) Allow indexes for comparisons other than equal

2012-03-12 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3994.
---

Resolution: Not A Problem

 Allow indexes for comparisons other than equal
 --

 Key: CASSANDRA-3994
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3994
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dmitry Petrashko
 Attachments: Validation_fix_for_filters_other_than_EQuals-v2.patch, 
 Validation_fix_for_filters_other_than_EQuals.patch


 As of now, validation marks filters with operations other than equal as 
 invalid.
 This also gives initial support for indexes other than KEYS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3970) (100% reproducible) JVM crash in streamingTransferTest on Windows

2012-03-08 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3970.
---

   Resolution: Invalid
Fix Version/s: (was: 1.1.0)
 Assignee: (was: Sylvain Lebresne)

Looks like you're right, with my second patch on CASSANDRA-3967 this doesn't 
happen.

 (100% reproducible) JVM crash in streamingTransferTest on Windows
 -

 Key: CASSANDRA-3970
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3970
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Jonathan Ellis
 Attachments: hs_err_pid95744.log


 {noformat}
 $ ant test -Dtest.name=StreamingTransferTest
 ...
 [junit] Testsuite: org.apache.cassandra.streaming.StreamingTransferTest
 [junit] #
 [junit] # A fatal error has been detected by the Java Runtime Environment:
 [junit] #
 [junit] #  EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x6da5ccca, 
 pid=95744, tid=94924
 [junit] #
 [junit] # JRE version: 6.0_27-b07
 [junit] # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.2-b06 mixed mode 
 windows-amd64 compressed oops)
 [junit] # Problematic frame:
 [junit] # V  [jvm.dll+0x1a]
 [junit] #
 [junit] # An error report file with more information is saved as:
 [junit] # c:\Users\Jonathan\projects\cassandra\git\hs_err_pid95744.log
 [junit] #
 [junit] # If you would like to submit a bug report, please visit:
 [junit] #   http://java.sun.com/webapps/bugreport/crash.jsp
 [junit] #
 [junit] Testsuite: org.apache.cassandra.streaming.StreamingTransferTest
 [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
 [junit]
 [junit] Testcase: 
 org.apache.cassandra.streaming.StreamingTransferTest:testTransferTable:   
 Caused an ERROR
 [junit] Forked Java VM exited abnormally. Please note the time in the report 
 does not reflect the time until the VM exit.
 [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
 abnormally. Please note the time in the report does not reflect the time 
 until the VM exit.
 [junit]
 [junit]
 [junit] Test org.apache.cassandra.streaming.StreamingTransferTest FAILED 
 (crashed)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4002) cqlsh: error when selecting on certain column families

2012-03-07 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4002.
---

Resolution: Duplicate
  Assignee: (was: paul cannon)

closing as duplicate then

 cqlsh: error when selecting on certain column families
 --

 Key: CASSANDRA-4002
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4002
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu and osx
Reporter: Tyler Patterson

 Here are two examples that produce the error:
 {code}
 CREATE COLUMNFAMILY cf8 (KEY text PRIMARY KEY) WITH default_validation=uuid 
 AND comparator=uuid;
 INSERT INTO cf8 (KEY, '2097887f-53b2-4b89-bfb4-09fea6980d40', 
 'c3e990b5-238c-46f1-8c88-0267f5a5c446') VALUES ('76616c7565305f30', 
 '2097887f-53b2-4b89-bfb4-09fea6980d40', 
 'c3e990b5-238c-46f1-8c88-0267f5a5c446');
 select * from cf8 where KEY='76616c7565305f30';
 {code}
 produces the error: cannot concatenate 'str' and 'bool' objects
 {code}
 CREATE COLUMNFAMILY cf_blob_bool (KEY blob PRIMARY KEY) WITH 
 default_validation=boolean AND comparator=boolean;
 INSERT INTO cf_blob_bool (KEY, 'True', 'False') VALUES ('76616c7565305f30', 
 'True', 'False');
 select * from cf_blob_bool where KEY='76616c7565305f30';
 {code}
 produces the error: cannot concatenate 'str' and 'bool' objects

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4016) WARN No appenders could be found for logger (org.apache.cassandra.confi

2012-03-07 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4016.
---

   Resolution: Duplicate
Fix Version/s: (was: 1.1.0)

dupe of CASSANDRA-4013

 WARN No appenders could be found for logger (org.apache.cassandra.confi
 ---

 Key: CASSANDRA-4016
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4016
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website, Hadoop
Affects Versions: 1.1.0
 Environment: win7 
 apacheTomcat6.0
 JDK1.6
Reporter: Emotion
  Labels: patch
   Original Estimate: 72h
  Remaining Estimate: 72h

 I installed the apache-cassandra-1.1.0-beta1 version,
 modified the Windows 7 config to match the basic Linux config,
 opened a cmd prompt,
 and ran cassandra-cli.
 Then I entered create keyspace keyspace1;
 but it was not created; the output is an error.
 Please help.
 Also, where is the keyspace config?
 It is different between 0.6.8 and 1.1.0:
 XML to YAML?
  please send to mail ..
 xyz...@nate.com...
 C:\apache-cassandra-1.1.0-beta1\bin>cassandra-cli
 Starting Cassandra Client
 Connected to: Test Cluster on 127.0.0.1/9160
 Welcome to Cassandra CLI version 1.1.0-beta1
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] create keyspace keyspace1;
 log4j:WARN No appenders could be found for logger 
 (org.apache.cassandra.config.DatabaseDescriptor).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
 info.
 Cannot locate cassandra.yaml
 Fatal configuration error; unable to start serv

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4013) WARN No appenders could be found for logger (org.apache.cassandra.confi

2012-03-07 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4013.
---

   Resolution: Fixed
Fix Version/s: (was: 1.1.0)

"Cannot locate cassandra.yaml" was fixed in CASSANDRA-3986.  In the meantime, if 
you run it from one directory up ({{bin\cassandra-cli}}) that should work.

The user mailing list is the right place to address questions like "where is 
the keyspace config".
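
Concretely, with the install path from the report above, the suggested 
workaround is just to start the CLI from the install root, e.g.:

{noformat}
C:\apache-cassandra-1.1.0-beta1>bin\cassandra-cli
{noformat}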

 WARN No appenders could be found for logger (org.apache.cassandra.confi
 ---

 Key: CASSANDRA-4013
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4013
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website, Hadoop
Affects Versions: 1.1.0
 Environment: win7 
 apacheTomcat6.0
 JDK1.6
Reporter: Emotion
  Labels: patch
   Original Estimate: 72h
  Remaining Estimate: 72h

 I installed the apache-cassandra-1.1.0-beta1 version,
 modified the Windows 7 config to match the basic Linux config,
 opened a cmd prompt,
 and ran cassandra-cli.
 Then I entered create keyspace keyspace1;
 but it was not created; the output is an error.
 Please help.
 Also, where is the keyspace config?
 It is different between 0.6.8 and 1.1.0:
 XML to YAML?
  please send to mail ..
 xyz...@nate.com...
 C:\apache-cassandra-1.1.0-beta1\bin>cassandra-cli
 Starting Cassandra Client
 Connected to: Test Cluster on 127.0.0.1/9160
 Welcome to Cassandra CLI version 1.1.0-beta1
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] create keyspace keyspace1;
 log4j:WARN No appenders could be found for logger 
 (org.apache.cassandra.config.DatabaseDescriptor).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
 info.
 Cannot locate cassandra.yaml
 Fatal configuration error; unable to start serv

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4005) After running scrub , while major compaction getting exception FileNotFound with (too many open files)

2012-03-06 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4005.
---

   Resolution: Cannot Reproduce
Fix Version/s: (was: 0.8.11)

we had some FD leaks in early 0.8 releases; please upgrade to 0.8.10 or 1.0.8.

 After running scrub , while major compaction getting exception FileNotFound 
 with (too many open files)
 --

 Key: CASSANDRA-4005
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4005
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.2
Reporter: Samarth Gahire
Priority: Minor
  Labels: compaction, exception, scrub
   Original Estimate: 48h
  Remaining Estimate: 48h

 I was unable to trigger compaction on one of the column families, so I ran 
 scrub on that CF.
 After the scrub, when I tried to run compaction I got the following error.
 {code}
 Error occured during compaction
 java.util.concurrent.ExecutionException: java.io.IOError: 
 java.io.FileNotFoundException: 
 /mnt2/var/lib/cassandra/data/AudienceNetwork/Audience-g-9898-Data.db (Too 
 many open files)
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performMajor(CompactionManager.java:277)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:1762)
 at 
 org.apache.cassandra.service.StorageService.forceTableCompaction(StorageService.java:1358)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
 at sun.rmi.transport.Transport$1.run(Transport.java:159)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOError: java.io.FileNotFoundException: 
 /mnt2/var/lib/cassandra/data/AudienceNetwork/Audience-g-9898-Data.db (Too 
 many open files)
 at 
 org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:61)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getDirectScanner(SSTableReader.java:660)
 at 
 org.apache.cassandra.db.compaction.CompactionIterator.getCollatingIterator(CompactionIterator.java:92)
 at 
 

[jira] [Resolved] (CASSANDRA-3957) Supercolumn serialization assertion failure

2012-03-06 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3957.
---

   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.0.9)
 Assignee: (was: Yuki Morishita)

Jackson's stacktrace involving sblocks was caused by a bug in DSE's CFS code 
re-using a ByteBuffer it had handed off to Thrift.

Could be that the original report had a similar problem.

 Supercolumn serialization assertion failure
 ---

 Key: CASSANDRA-3957
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3957
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
  Labels: datastax_qa

 As reported at 
 http://mail-archives.apache.org/mod_mbox/cassandra-user/201202.mbox/%3CCADJL=w5kH5TEQXOwhTn5Jm3cmR4Rj=nfjcqlryxv7plyasi...@mail.gmail.com%3E,
 {noformat}
 ERROR 10:51:44,282 Fatal exception in thread
 Thread[COMMIT-LOG-WRITER,5,main]
 java.lang.AssertionError: Final buffer length 4690 to accomodate data size
 of 2347 (predicted 2344) for RowMutation(keyspace='Player',
 key='36336138643338652d366162302d343334392d383466302d356166643863353133356465',
 modifications=[ColumnFamily(PlayerCity [SuperColumn(owneditem_1019
 []),SuperColumn(owneditem_1024 []),SuperColumn(owneditem_1026
 []),SuperColumn(owneditem_1074 []),SuperColumn(owneditem_1077
 []),SuperColumn(owneditem_1084 []),SuperColumn(owneditem_1094
 []),SuperColumn(owneditem_1130 []),SuperColumn(owneditem_1136
 []),SuperColumn(owneditem_1141 []),SuperColumn(owneditem_1142
 []),SuperColumn(owneditem_1145 []),SuperColumn(owneditem_1218
 [636f6e6e6563746564:false:5@1329648704269002
 ,63757272656e744865616c7468:false:3@1329648704269006
 ,656e64436f6e737472756374696f6e54696d65:false:13@1329648704269007
 ,6964:false:4@1329648704269000,6974656d4964:false:15@1329648704269001
 ,6c61737444657374726f79656454696d65:false:1@1329648704269008
 ,6c61737454696d65436f6c6c6563746564:false:13@1329648704269005
 ,736b696e4964:false:7@1329648704269009,78:false:4@1329648704269003
 ,79:false:3@1329648704269004,]),SuperColumn(owneditem_133
 []),SuperColumn(owneditem_134 []),SuperColumn(owneditem_135
 []),SuperColumn(owneditem_141 []),SuperColumn(owneditem_147
 []),SuperColumn(owneditem_154 []),SuperColumn(owneditem_159
 []),SuperColumn(owneditem_171 []),SuperColumn(owneditem_253
 []),SuperColumn(owneditem_422 []),SuperColumn(owneditem_438
 []),SuperColumn(owneditem_515 []),SuperColumn(owneditem_521
 []),SuperColumn(owneditem_523 []),SuperColumn(owneditem_525
 []),SuperColumn(owneditem_562 []),SuperColumn(owneditem_61
 []),SuperColumn(owneditem_634 []),SuperColumn(owneditem_636
 []),SuperColumn(owneditem_71 []),SuperColumn(owneditem_712
 []),SuperColumn(owneditem_720 []),SuperColumn(owneditem_728
 []),SuperColumn(owneditem_787 []),SuperColumn(owneditem_797
 []),SuperColumn(owneditem_798 []),SuperColumn(owneditem_838
 []),SuperColumn(owneditem_842 []),SuperColumn(owneditem_847
 []),SuperColumn(owneditem_849 []),SuperColumn(owneditem_851
 []),SuperColumn(owneditem_852 []),SuperColumn(owneditem_853
 []),SuperColumn(owneditem_854 []),SuperColumn(owneditem_857
 []),SuperColumn(owneditem_858 []),SuperColumn(owneditem_874
 []),SuperColumn(owneditem_884 []),SuperColumn(owneditem_886
 []),SuperColumn(owneditem_908 []),SuperColumn(owneditem_91
 []),SuperColumn(owneditem_911 []),SuperColumn(owneditem_930
 []),SuperColumn(owneditem_934 []),SuperColumn(owneditem_937
 []),SuperColumn(owneditem_944 []),SuperColumn(owneditem_945
 []),SuperColumn(owneditem_962 []),SuperColumn(owneditem_963
 []),SuperColumn(owneditem_964 []),])])
 at 
 org.apache.cassandra.utils.FBUtilities.serialize(FBUtilities.java:682)
 at 
 org.apache.cassandra.db.RowMutation.getSerializedBuffer(RowMutation.java:279)
 at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:122)
 at 
 org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:599)
 at 
 org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:49)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at java.lang.Thread.run(Thread.java:662)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3993) JavaDoc fix for org.apache.cassandra.db.filter.QueryFilter

2012-03-06 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3993.
---

   Resolution: Fixed
Fix Version/s: 1.1.1
 Assignee: Dmitry Petrashko

committed, but for future reference it would be a good idea to address multiple 
trivial changes like this in a single ticket

 JavaDoc fix for org.apache.cassandra.db.filter.QueryFilter
 --

 Key: CASSANDRA-3993
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3993
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
Reporter: Dmitry Petrashko
Assignee: Dmitry Petrashko
Priority: Trivial
 Fix For: 1.1.1

 Attachments: JavaDoc_fix_in_QueryFilter.patch

   Original Estimate: 0.05h
  Remaining Estimate: 0.05h

 @param should be on separate line

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3575) java.lang.ArrayIndexOutOfBoundsException during scrub

2012-03-05 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3575.
---

Resolution: Duplicate

Closing as duplicate of the above-mentioned issue.

 java.lang.ArrayIndexOutOfBoundsException during scrub
 -

 Key: CASSANDRA-3575
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3575
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.3
 Environment: Centos 5.7
 16GB Ram
 4GB Java Heap
 Sun JVM
Reporter: Tim McLennan

  INFO [CompactionExecutor:6] 2011-12-05 22:19:28,788 CompactionManager.java 
 (line 477) Scrubbing 
 SSTableReader(path='/var/lib/cassandra/data/cf/Data-hb-256385-Data.db')
 ERROR [CompactionExecutor:6] 2011-12-05 22:19:30,195 
 AbstractCassandraDaemon.java (line 133) Fatal exception in thread 
 Thread[CompactionExecutor:6,1,RMI Runtime]
 java.lang.ArrayIndexOutOfBoundsException: 8
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.add(LeveledManifest.java:293)
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:184)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:141)
 at 
 org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:481)
 at org.apache.cassandra.db.DataTracker.replace(DataTracker.java:275)
 at 
 org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:232)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:979)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:654)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doScrub(CompactionManager.java:472)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:63)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$3.call(CompactionManager.java:224)
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3994) Allow indexes for comparisons other than equal

2012-03-04 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3994.
---

Resolution: Duplicate

Applying filters to seq scan was done in CASSANDRA-1600.

 Allow indexes for comparisons other than equal
 --

 Key: CASSANDRA-3994
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3994
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dmitry Petrashko
 Attachments: Validation_fix_for_filters_other_than_EQuals.patch


 As of now, validation marks filters with operations other than equal as 
 invalid.
 This also gives initial support for indexes other than KEYS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3608) nodetool cleanup fails on LeveledCompactionStrategy tables with ArrayIndexOutOfBoundsException

2012-03-02 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3608.
---

Resolution: Duplicate

 nodetool cleanup fails on LeveledCompactionStrategy tables with 
 ArrayIndexOutOfBoundsException
 --

 Key: CASSANDRA-3608
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3608
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.6
 Environment: Linux
Reporter: Joe Siegrist

 Error occured during cleanup
 java.util.concurrent.ExecutionException: 
 java.lang.ArrayIndexOutOfBoundsException: 7
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:204)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:238)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:958)
 at 
 org.apache.cassandra.service.StorageService.forceTableCleanup(StorageService.java:1527)
 at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
 at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
 at sun.rmi.transport.Transport$1.run(Transport.java:159)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 7
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.add(LeveledManifest.java:294)
 at 
 org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:185)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:141)
 at 
 org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:488)
 at 
 org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:234)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:980)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:788)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:64)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$5.perform(CompactionManager.java:242)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:183)
 

[jira] [Resolved] (CASSANDRA-3987) Cannot reuse row key after deletion.

2012-03-01 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3987.
---

Resolution: Not A Problem

If you delete a row at time X and want to re-insert it, you need to use a 
timestamp > X.
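
As an illustration, here is a CQL sketch with a hypothetical column family and 
made-up timestamps (the same rule applies to writes done through Thrift 
batch_mutate):

{code}
-- row deleted at timestamp 1000
DELETE FROM users USING TIMESTAMP 1000 WHERE KEY = 'A';
-- the re-insert must carry a strictly higher timestamp to take effect
INSERT INTO users (KEY, name) VALUES ('A', 'alice') USING TIMESTAMP 1001;
{code}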

 Cannot reuse row key after deletion.
 

 Key: CASSANDRA-3987
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3987
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.0.8
 Environment: Mac OSX
Reporter: Kristoffer Carlson
Priority: Minor

 If a row with columns is inserted using key A and then the same key A is 
 deleted, the key A cannot be used again. 
 This only happens through the Thrift interface. It does work, however, 
 through the cli.
 The writes were done using a batch_mutate.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3964) Columns isn`t updated, although insert operation ends with success

2012-02-27 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3964.
---

Resolution: Not A Problem

The timestamp already in the row is higher than the timestamp you are trying to 
write with, so the new write loses to the existing value.
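
To force the update through you can supply an explicit timestamp higher than 
the one already stored (1330338712820056 in the cli output below). A CQL 
sketch, reusing the column family and key from the report:

{code}
UPDATE TestCF USING TIMESTAMP 1330338712820057
  SET 'test' = 'test3'
  WHERE KEY = '505fd8b270b0873c4d7e7606c9d54fdf3f13b435';
{code}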

 Columns isn`t updated, although insert operation ends with success
 --

 Key: CASSANDRA-3964
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3964
 Project: Cassandra
  Issue Type: Bug
Reporter: Mariusz

 Hi,
 I'm adding several rows into a CF (I'm using hadoop mapreduce jobs and the 
 pycassa python client to do that). I'm using the default 
 write_consistency_level, and it's a one-node cluster (but it happens also on a 
 2-node cluster). The timestamps for all inserts are returned, and there is no 
 exception in system.log or output.log in the cassandra log directory.
 After that I cannot update these rows (logs from cassandra-cli):
 {code}
 [default@test_keyspace] get TestCF[505fd8b270b0873c4d7e7606c9d54fdf3f13b435];
 => (column=test, value=test2, timestamp=1330338712820056)
 Returned 1 results.
 Elapsed time: 2 msec(s).
 [default@test_keyspace] set 
 TestCF[505fd8b270b0873c4d7e7606c9d54fdf3f13b435][test]=test3;
 Value inserted.
 Elapsed time: 4 msec(s).
 [default@test_keyspace] get TestCF[505fd8b270b0873c4d7e7606c9d54fdf3f13b435]; 

 => (column=test, value=test2, timestamp=1330338712820056)
 Returned 1 results.
 Elapsed time: 2 msec(s).
 {code}
 I'm running the cluster on snapshots of the cassandra-1.1 branch from about a 
 week ago; however, it happens on several snapshots I've taken. I can find out 
 the exact commit, if needed.
 Definition of TestCF:
 {code}
 ColumnFamily: TestCF
   Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 1.0
   DC Local Read repair chance: 0.0
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
   Compression Options:
 sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3705) Don't default the datacenter name in replication_strategies when the datacenter does not exist

2012-02-23 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3705.
---

Resolution: Won't Fix

This was our attempt to push people towards using the Right replication 
strategy (NTS) without losing them in the weeds of DCs and replica counts.

We learned from our mistake and cqlsh requires a fully-specified replication 
strategy, but I don't think it's worth breaking people's expectations with the 
CLI this late in the game (i.e. when we're already trying to push people from 
the CLI to cqlsh).
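
For reference, the fully-specified form that cqlsh expects looks roughly like 
this (a CQL 2 sketch; the datacenter name DC1 and the replica count are 
placeholders):

{code}
CREATE KEYSPACE test
  WITH strategy_class = 'NetworkTopologyStrategy'
  AND strategy_options:DC1 = 3;
{code}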

 Don't default the datacenter name in replication_strategies when the 
 datacenter does not exist
 --

 Key: CASSANDRA-3705
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3705
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.5
Reporter: Joaquin Casares

 When using the AMI, which is currently set to use the EC2 snitch and the 
 NetworkTopologyStrategy is set to default by the cli, all keyspaces default 
 to datacenter1 being the datacenter name.
 So when running: 
 {noformat}
 create keyspace test;
 {noformat}
 we get this created:
 {noformat}
 Keyspace: test:
   Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
   Durable Writes: true
 Options: [datacenter1:1]
 {noformat}
 This should error out immediately rather than letting the user go on to 
 discover the error later:
 {noformat}
 [default@test] set User['jsmith']['first'] = 'John';
 null
 UnavailableException()
 at 
 org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:15206)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:858)
 at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:830)
 at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:902)
 at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:216)
 at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:220)
 at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
 {noformat}
 Related link:
 http://www.datastax.com/support-forums/topic/new-datastax-ami-ami-fd23ec94-is-not-functionnal

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3511) Supercolumn key caches are not saved

2012-02-18 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3511.
---

Resolution: Cannot Reproduce

 Supercolumn key caches are not saved
 

 Key: CASSANDRA-3511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3511
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.2, 1.0.3
Reporter: Radim Kolar
Priority: Minor
  Labels: supercolumns
 Attachments: failed-to-save-after-load-KeyCache, 
 rapidshare-resultcache-KeyCache


 Cache saving seems to be broken in 1.0.2 and 1.0.3. I have 2 CFs in a keyspace 
 with cache saving enabled and only one gets its key cache saved. It worked 
 perfectly in 0.8; both were saved.
 This one works:
 create column family query2
   with column_type = 'Standard'
   and comparator = 'AsciiType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'UTF8Type'
   and rows_cached = 500.0
   and row_cache_save_period = 0
   and row_cache_keys_to_save = 2147483647
   and keys_cached = 20.0
   and key_cache_save_period = 14400
   and read_repair_chance = 1.0
   and gc_grace = 864000
   and min_compaction_threshold = 5
   and max_compaction_threshold = 10
   and replicate_on_write = false
   and row_cache_provider = 'ConcurrentLinkedHashCacheProvider'
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
 This does not
 create column family dkb13
   with column_type = 'Super'
   and comparator = 'LongType'
   and subcomparator = 'AsciiType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'UTF8Type'
   and rows_cached = 600.0
   and row_cache_save_period = 0
   and row_cache_keys_to_save = 2147483647
   and keys_cached = 20.0
   and key_cache_save_period = 14400
   and read_repair_chance = 1.0
   and gc_grace = 864000
   and min_compaction_threshold = 5
   and max_compaction_threshold = 10
   and replicate_on_write = false
   and row_cache_provider = 'ConcurrentLinkedHashCacheProvider'
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
 On a second test system I created these two column families and neither got a 
 single cache key saved. Both have a save period of 30 seconds, so their caches 
 should be saved often. So it is not simply that standard column families work 
 while super ones do not.
 create column family test1
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and rows_cached = 0.0
   and row_cache_save_period = 0
   and row_cache_keys_to_save = 2147483647
   and keys_cached = 20.0
   and key_cache_save_period = 30
   and read_repair_chance = 1.0
   and gc_grace = 864000
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and row_cache_provider = 'SerializingCacheProvider'
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';
 create column family test2
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and rows_cached = 0.0
   and row_cache_save_period = 0
   and row_cache_keys_to_save = 2147483647
   and keys_cached = 20.0
   and key_cache_save_period = 30
   and read_repair_chance = 1.0
   and gc_grace = 864000
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and row_cache_provider = 'SerializingCacheProvider'
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';
 If this is done on purpose, for example if Cassandra 1.0 makes some heuristic 
 decision about whether a cache should be saved, then that heuristic should be 
 removed. Saving the cache is fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3921) Compaction doesn't clear out expired tombstones from SerializingCache

2012-02-16 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3921.
---

Resolution: Fixed
  Reviewer: slebresne
  Assignee: Jonathan Ellis

bq. we may want to make removeDeletedInCache be a noop for copying caches just 
to avoid the useless deserialization

done in 94860c6c3713cda4f17dabb2ac2ce30cfe92f6e2

 Compaction doesn't clear out expired tombstones from SerializingCache
 -

 Key: CASSANDRA-3921
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3921
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.0


 Compaction calls removeDeletedInCache, which looks like this:
 {code}
 public void removeDeletedInCache(DecoratedKey key)
 {
     ColumnFamily cachedRow = cfs.getRawCachedRow(key);
     if (cachedRow != null)
         ColumnFamilyStore.removeDeleted(cachedRow, gcBefore);
 }
 {code}
 For the SerializingCache, this means it calls removeDeleted on a temporary, 
 deserialized copy, which leaves the cache contents unaffected.
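 A minimal, self-contained sketch of the effect described above (illustrative 
 only, not Cassandra code): a cache that hands back copies of its entries, much 
 as a serializing cache effectively does after deserialization, is unaffected 
 when the caller mutates the returned value, so an in-place purge is a no-op.
 {code}
 import java.util.*;

 // Hypothetical demo: the cache below returns defensive copies, so "removing
 // deleted columns" from the returned row cannot change what is cached.
 public class CopyingCacheDemo
 {
     static final Map<String, List<String>> cache = new HashMap<String, List<String>>();

     // Returns a defensive copy, like a cache that stores rows in serialized form.
     static List<String> getCopy(String key)
     {
         List<String> row = cache.get(key);
         return row == null ? null : new ArrayList<String>(row);
     }

     public static void main(String[] args)
     {
         cache.put("row1", new ArrayList<String>(Arrays.asList("col", "tombstone")));

         List<String> copy = getCopy("row1");
         copy.remove("tombstone");              // the purge only touches the copy

         System.out.println(cache.get("row1")); // still [col, tombstone]
     }
 }
 {code}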

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-598) SuperColumns need to be indexed

2012-02-16 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-598.
--

Resolution: Won't Fix

We're planning to index composite columns (see subtasks of CASSANDRA-3761) 
instead.

 SuperColumns need to be indexed
 ---

 Key: CASSANDRA-598
 URL: https://issues.apache.org/jira/browse/CASSANDRA-598
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Ryan King

 Currently, loading a SuperColumn requires reading and de-serializing the entire 
 thing. This limits the number of subcolumns you can efficiently use per 
 SuperColumn -- typically on the order of a few thousand. We should add an 
 index and whatever else it takes to have much larger SuperColumns.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2791) Redhat spec file needs some enhancements for 0.8 and beyond

2012-02-09 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2791.
---

   Resolution: Won't Fix
Fix Version/s: (was: 0.8.11)

dropped rpm packaging in CASSANDRA-3567

 Redhat spec file needs some enhancements for 0.8 and beyond
 ---

 Key: CASSANDRA-2791
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2791
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Affects Versions: 0.8.0
Reporter: Nate McCall

 Version and Release need to be brought up to date. Also need to account for 
 multiple 'apache-cassandra' jars. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2814) Don't create data/commitlog/saved_caches directories in rpm package

2012-02-09 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2814.
---

   Resolution: Won't Fix
Fix Version/s: (was: 0.8.11)
 Assignee: (was: Norman Maurer)

dropped RPM packages in CASSANDRA-3567

 Don't create data/commitlog/saved_caches directories in rpm package
 ---

 Key: CASSANDRA-2814
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2814
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Affects Versions: 0.8.1
Reporter: Nick Bailey
Priority: Minor
  Labels: lhf

 There is no need to create these directories, since Cassandra will create them 
 if they don't exist. If you install the package and these directories already 
 exist as symlinks, the package will replace them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3843) Unnecessary ReadRepair request during RangeScan

2012-02-09 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3843.
---

Resolution: Fixed

 Unnecessary  ReadRepair request during RangeScan
 

 Key: CASSANDRA-3843
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3843
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: Philip Andronov
Assignee: Jonathan Ellis
 Fix For: 1.0.8

 Attachments: 3843-v2.txt, 3843.txt


 During a read at QUORUM with a replication factor greater than 2, Cassandra 
 sends at least one ReadRepair even when there is no need to. Because read 
 requests wait until the ReadRepair finishes, this slows requests down a lot, 
 up to the point of timing out :(
 The problem seems to have been introduced by CASSANDRA-2494; unfortunately I 
 don't know the Cassandra internals well enough to fix it without breaking the 
 CASSANDRA-2494 functionality, so this report comes without a patch.
 Code explanations:
 {code:title=RangeSliceResponseResolver.java|borderStyle=solid}
 class RangeSliceResponseResolver {
     // ...
     private class Reducer extends MergeIterator.Reducer<Pair<Row,InetAddress>, Row>
     {
         // ...
         protected Row getReduced()
         {
             ColumnFamily resolved = versions.size() > 1
                                   ? RowRepairResolver.resolveSuperset(versions)
                                   : versions.get(0);
             if (versions.size() < sources.size())
             {
                 for (InetAddress source : sources)
                 {
                     if (!versionSources.contains(source))
                     {
                         // [PA] Here we are adding a null ColumnFamily.
                         // Later it will be compared with the desired
                         // version and will give us a fake difference, which
                         // forces Cassandra to send a ReadRepair to the given source
                         versions.add(null);
                         versionSources.add(source);
                     }
                 }
             }
             // ...
             if (resolved != null)
                 repairResults.addAll(RowRepairResolver.scheduleRepairs(resolved, table, key, versions, versionSources));
             // ...
         }
     }
 }
 {code}
 {code:title=RowRepairResolver.java|borderStyle=solid}
 public class RowRepairResolver extends AbstractRowResolver {
     // ...
     public static List<IAsyncResult> scheduleRepairs(ColumnFamily resolved, String table,
                                                      DecoratedKey<?> key, List<ColumnFamily> versions,
                                                      List<InetAddress> endpoints)
     {
         List<IAsyncResult> results = new ArrayList<IAsyncResult>(versions.size());
         for (int i = 0; i < versions.size(); i++)
         {
             // On some iterations we have to compare null and resolved, which are obviously
             // not equal, so it will fire a ReadRepair; however it is not needed here
             ColumnFamily diffCf = ColumnFamily.diff(versions.get(i), resolved);
             if (diffCf == null)
                 continue;
             // ...
 {code}
 Imagine the following situation:
 NodeA has X.1 // row X with version 1
 NodeB has X.2
 NodeC has X.? // unknown version, but because the write was at QUORUM it is 1 or 2
 During a QUORUM read from nodes A and B, Cassandra creates version 12 and sends 
 a ReadRepair, so the nodes now have the following content:
 NodeA has X.12
 NodeB has X.12
 which is correct. However, Cassandra will also fire a ReadRepair to NodeC. There 
 is no need to do that: the next consistent read has a chance to be served by 
 nodes {A, B} (no ReadRepair) or by the pair {?, C}, and in that case a 
 ReadRepair will be fired and bring NodeC to a consistent state.
 Right now we are reading from the index a lot, and starting from some point in 
 time we get TimeoutExceptions because the cluster is overloaded by the 
 ReadRepair requests *even* if all nodes have the same data :(
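 A hedged, standalone model of the mechanism described above (types and method 
 names are made up, not the Cassandra classes): a replica that returned no 
 version is represented as null, and diffing the resolved row against null is 
 always non-empty, so a repair is scheduled for that replica even though it may 
 already be in sync.
 {code}
 import java.util.*;

 public class NullDiffDemo
 {
     // diff(version, resolved): columns present in resolved but missing from version,
     // or null if there is nothing to repair.
     static Set<String> diff(Set<String> version, Set<String> resolved)
     {
         if (version == null)
             return new HashSet<String>(resolved); // null version: everything "differs"
         Set<String> missing = new HashSet<String>(resolved);
         missing.removeAll(version);
         return missing.isEmpty() ? null : missing;
     }

     public static void main(String[] args)
     {
         Set<String> resolved = new HashSet<String>(Arrays.asList("c1", "c2"));
         // The first replica answered with the resolved data; the second did not answer.
         List<Set<String>> versions = Arrays.asList(resolved, null);

         for (Set<String> v : versions)
             System.out.println(diff(v, resolved)); // null (no repair), then [c1, c2] (repair fired)
     }
 }
 {code}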

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3816) Improve visibility of Cassandra packages

2012-02-09 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3816.
---

Resolution: Fixed
  Reviewer: gdusbabek

 Improve visibility of Cassandra packages
 

 Key: CASSANDRA-3816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3816
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation  website
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
 Attachments: 3816-v2.txt, 3816.txt


 The preferred way to install (and upgrade) Cassandra is through Debian and 
 RPM packages, but the current Take Action download box on the front page 
 only links the binary tarball.  Even clicking through to other options 
 (http://cassandra.apache.org/download/) doesn't mention anything but tarballs 
 for the core server.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3869) [patch] don't duplicate ByteBuffers when hashing

2012-02-08 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3869.
---

   Resolution: Fixed
Fix Version/s: 1.1
 Reviewer: jbellis

committed v2; I think your first intuition was right that this is a hot path 
worth optimizing.

 [patch] don't duplicate ByteBuffers when hashing
 

 Key: CASSANDRA-3869
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3869
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
 Fix For: 1.1

 Attachments: dont_dup_bb_for_hash.diff, dont_dup_bb_for_hash2.diff


 Given how often ByteBuffers are hashed, don't duplicate the ByteBuffer when 
 hashing. (Trivial -- the byte array was never copied, just the BB wrapper -- 
 but still worthwhile.)
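 As a standalone illustration of the idea (not the actual patch, and not the 
 hash function Cassandra uses -- the FNV-1a constants below are just a 
 stand-in), the buffer's remaining bytes can be hashed with absolute get(int) 
 reads, so the position never changes and no duplicate() wrapper is needed:
 {code}
 import java.nio.ByteBuffer;

 public class ByteBufferHashDemo
 {
     // Hash bb's remaining bytes without touching its position and without
     // allocating a duplicate wrapper object.
     static int hash(ByteBuffer bb)
     {
         int h = 0x811c9dc5;                  // FNV-1a offset basis (illustrative)
         for (int i = bb.position(); i < bb.limit(); i++)
         {
             h ^= bb.get(i) & 0xff;           // absolute read: position unchanged
             h *= 0x01000193;                 // FNV prime
         }
         return h;
     }

     public static void main(String[] args)
     {
         ByteBuffer bb = ByteBuffer.wrap("hello".getBytes());
         System.out.println(hash(bb));
         System.out.println(bb.position());   // still 0: no duplicate() was required
     }
 }
 {code}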

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3865) Cassandra-cli returns 'command not found' instead of syntax error

2012-02-07 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3865.
---

Resolution: Duplicate
  Reviewer:   (was: brandon.williams)
  Assignee: (was: Yuki Morishita)

 Cassandra-cli returns 'command not found' instead of syntax error
 -

 Key: CASSANDRA-3865
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3865
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 1.0.5
Reporter: Eric Lubow
Priority: Trivial
  Labels: cassandra-cli

 When creating a column family from the output of 'show schema' with an index, 
 there is a trailing comma after index_type: 0. The return from this is 
 'command not found'. This is misleading because the command is found; there 
 is just a syntax error.
 'Command not found: `create column family $cfname ...`'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3823) [patch] remove bogus assert - never false

2012-02-01 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3823.
---

   Resolution: Fixed
Fix Version/s: 1.1
 Reviewer: jbellis
 Assignee: Dave Brosius

committed, thanks

 [patch] remove bogus assert - never false
 -

 Key: CASSANDRA-3823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3823
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1

 Attachments: remove_bogus_assert.diff


 The code asserts that SSTableScanner is Closeable:
  final SSTableScanner scanner = sstable.getScanner(filter);
  scanner.seekTo(startWith);
 -assert scanner instanceof Closeable; // otherwise we leak FDs
 It always is, unless scanner is null, in which case the preceding line would 
 already have thrown an NPE. The assert is just confusing.
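 A tiny standalone illustration of why the removed assert could never fail (the 
 Scanner interface below is a stand-in, not the real SSTableScanner):
 {code}
 import java.io.Closeable;

 public class DeadAssertDemo
 {
     interface Scanner extends Closeable { void seekTo(String key); }

     static void scan(Scanner scanner)
     {
         scanner.seekTo("row1");              // a null scanner already throws NPE here
         assert scanner instanceof Closeable; // statically always true for a non-null Scanner
     }

     public static void main(String[] args)
     {
         scan(new Scanner()
         {
             public void seekTo(String key) { }
             public void close() { }
         });
         System.out.println("the assert can never trip");
     }
 }
 {code}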

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3807) bootstrap can silently fail if sources are down for one or more ranges

2012-01-28 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3807.
---

Resolution: Duplicate
  Assignee: (was: Peter Schuller)

 bootstrap can silently fail if sources are down for one or more ranges
 --

 Key: CASSANDRA-3807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3807
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Schuller
Priority: Critical

 Assigning to me; I will submit a patch after CASSANDRA-3483.
 The silent failure is not new; it has been like this forever (well, at least 
 since 0.8). The result is that the node goes up in the ring and starts serving 
 inconsistent reads.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3769) Allow for comments in a cassandra-cli file

2012-01-25 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3769.
---

Resolution: Not A Problem

 Allow for comments in a cassandra-cli file
 --

 Key: CASSANDRA-3769
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3769
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.0.6
 Environment: Amazon Linux w/ Apache Cassandra 1.0.6
Reporter: Andrew Halloran
Priority: Trivial
  Labels: cassandra-cli

 I use the cassandra-cli to create schemas, update schemas, and make other 
 calls to interrogate keyspaces and columns. I load pre-written statements 
 from files using the -f option, example: bin/cassandra-cli -host localhost 
 -port 9160 -f mystatements.txt. It would be useful if I could comment my 
 statement files with comments similar to how you can comment script and 
 C++/Java code.
 Example contents of mystatements.txt file:
 update column family users// This is my column which 
 holds all my user information
 with comparator = UTF8Type// My column names are all 
 strings so I will use UTF8Type
 and key_validation_class = UTF8Type   // My row key values are also 
 UTF8Type
 and default_validation_class = UTF8Type // Column values will be 
 UTF8Type
 and column_metadata = [
 {column_name: username, validation_class: UTF8Type}, // This 
 column stores the login name of the user
 {column_name: realname, validation_class: UTF8Type}];// This 
 column stores the real world name of user

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3728) Better error message when a column family creation fails

2012-01-25 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3728.
---

   Resolution: Cannot Reproduce
Fix Version/s: (was: 1.0.8)
 Reviewer:   (was: yukim)
 Assignee: (was: Pavel Yaskevich)

 Better error message when a column family creation fails
 

 Key: CASSANDRA-3728
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3728
 Project: Cassandra
  Issue Type: Bug
Reporter: Eric Lubow
Priority: Minor
  Labels: cli

 Since '-' characters are not allowed in column family names, there should be 
 an error thrown on column family name validation.
 [default@linkcurrent] create column family foo-bar;
 null

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3786) [patch] fix bad comparison of IColumn to ByteBuffer

2012-01-25 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3786.
---

Resolution: Fixed
  Reviewer: jbellis

 [patch] fix bad comparison of IColumn to ByteBuffer
 ---

 Key: CASSANDRA-3786
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3786
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1

 Attachments: bad_compare.diff


 Code does
 firstColumn.equals(startKey)
 changed to 
 firstColumn.name().equals(startKey)
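 A self-contained illustration of this class of bug (the Column class below is 
 a stand-in, not Cassandra's IColumn): the original line compares objects of 
 unrelated types, so it can never be true, whereas comparing the name bytes 
 does what was intended.
 {code}
 import java.nio.ByteBuffer;

 public class WrongEqualsDemo
 {
     static class Column
     {
         final ByteBuffer name;
         Column(ByteBuffer name) { this.name = name; }
     }

     public static void main(String[] args)
     {
         ByteBuffer startKey = ByteBuffer.wrap("k1".getBytes());
         Column firstColumn = new Column(ByteBuffer.wrap("k1".getBytes()));

         System.out.println(firstColumn.equals(startKey));      // false: Column vs ByteBuffer
         System.out.println(firstColumn.name.equals(startKey)); // true: compares the bytes
     }
 }
 {code}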

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3788) [patch] fix bad comparison in hadoop cf recorder reader

2012-01-25 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3788.
---

Resolution: Fixed
  Reviewer: jbellis

committed, thanks!

 [patch] fix bad comparison in hadoop cf recorder reader
 ---

 Key: CASSANDRA-3788
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3788
 Project: Cassandra
  Issue Type: Bug
  Components: Contrib
Affects Versions: 1.1
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
  Labels: hadoop
 Fix For: 1.1

 Attachments: bad_column_name_compare.diff


 code does
 rows.get(0).columns.get(0).column.equals(startColumn)
 which is a Column against a ByteBuffer
 changed to 
 rows.get(0).columns.get(0).column.name.equals(startColumn)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3790) [patch] allow compaction score to be a floating pt value

2012-01-25 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3790.
---

   Resolution: Fixed
Fix Version/s: 1.0.8
 Reviewer: jbellis
 Assignee: Dave Brosius

committed, thanks!

 [patch] allow compaction score to be a floating pt value
 

 Key: CASSANDRA-3790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3790
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.7
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.0.8

 Attachments: use_double_score.diff


 The compaction score is computed with integer math, making the need for 
 compaction under-reported. Use floating-point math instead.
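 A minimal illustration of the rounding effect (the ratio and the numbers are 
 made up for the example, not the actual scoring formula): integer division 
 truncates the score, so a bucket can look less overdue for compaction than it 
 really is.
 {code}
 public class CompactionScoreDemo
 {
     public static void main(String[] args)
     {
         int sstables = 7, threshold = 4;

         int intScore = sstables / threshold;                 // 1 (truncated)
         double doubleScore = (double) sstables / threshold;  // 1.75

         System.out.println("int score:    " + intScore);
         System.out.println("double score: " + doubleScore);
     }
 }
 {code}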

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3759) [patch] don't allow dropping the system keyspace

2012-01-24 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3759.
---

Resolution: Fixed

committed the patch attached to 3755.

 [patch] don't allow dropping the system keyspace
 

 Key: CASSANDRA-3759
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3759
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.0.8

 Attachments: no_drop_system.diff


 throw an IRE if user attempts to drop system keyspace
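 A hypothetical sketch of such a guard (the exception type and message are 
 illustrative, not taken from the attached diff):
 {code}
 public class DropKeyspaceGuard
 {
     static void validateDrop(String keyspace)
     {
         // reject any attempt to drop the system keyspace
         if ("system".equalsIgnoreCase(keyspace))
             throw new IllegalArgumentException("the system keyspace cannot be dropped");
     }

     public static void main(String[] args)
     {
         validateDrop("ks1");                 // fine
         try
         {
             validateDrop("system");          // rejected
         }
         catch (IllegalArgumentException expected)
         {
             System.out.println(expected.getMessage());
         }
     }
 }
 {code}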

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3743) Lower memory consumption used by index sampling

2012-01-24 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3743.
---

Resolution: Fixed

committed, thanks!

 Lower memory consumption used by index sampling
 ---

 Key: CASSANDRA-3743
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3743
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.0
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: optimization
 Fix For: 1.1

 Attachments: 3743-trunk-trim.txt, 3743-trunk-v2.txt, 3743-trunk.txt, 
 cassandra-3743-codestyle.txt


 Currently o.a.c.io.sstable.IndexSummary is implemented as an ArrayList of 
 KeyPosition (RowPosition key, long offset). I propose to change it to:
 RowPosition keys[]
 long offsets[]
 and use standard binary search on it. This lowers the number of Java objects 
 used per entry from 2 (KeyPosition + RowPosition) to 1 (RowPosition).
 For building these arrays the convenient ArrayList class can be used, followed 
 by a call to .toArray() on it.
 This is very important because index sampling uses a lot of memory on nodes 
 with billions of rows.
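 A hedged sketch of the proposed layout (names and types are illustrative, not 
 the actual Cassandra classes): build the sample in a list, copy it into two 
 parallel arrays, and binary-search the key array, so each sampled entry costs 
 one object instead of a KeyPosition wrapper plus its key.
 {code}
 import java.util.*;

 public class IndexSummarySketch
 {
     final String[] keys;     // stand-in for RowPosition keys[]
     final long[] offsets;    // parallel array of file offsets

     IndexSummarySketch(List<String> keyList, List<Long> offsetList)
     {
         keys = keyList.toArray(new String[0]);
         offsets = new long[offsetList.size()];
         for (int i = 0; i < offsets.length; i++)
             offsets[i] = offsetList.get(i);
     }

     // Offset of the greatest sampled key <= the searched key, or -1 if none.
     long floorOffset(String key)
     {
         int idx = Arrays.binarySearch(keys, key);
         if (idx < 0)
             idx = -idx - 2;                  // insertion point minus one
         return idx < 0 ? -1 : offsets[idx];
     }

     public static void main(String[] args)
     {
         IndexSummarySketch summary = new IndexSummarySketch(
                 Arrays.asList("apple", "mango", "zebra"),
                 Arrays.asList(0L, 4096L, 8192L));
         System.out.println(summary.floorOffset("pear")); // 4096
     }
 }
 {code}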

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2474) CQL support for compound columns and wide rows

2012-01-23 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2474.
---

   Resolution: Duplicate
Fix Version/s: (was: 1.1)
 Reviewer:   (was: urandom)
 Assignee: (was: Sylvain Lebresne)

Closing this as duplicate then.

 CQL support for compound columns and wide rows
 --

 Key: CASSANDRA-2474
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2474
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Eric Evans
Priority: Critical
  Labels: cql
 Attachments: 0001-Add-support-for-wide-and-composite-CFs.patch, 
 0002-thrift-generated-code.patch, 2474-transposed-1.PNG, 
 2474-transposed-raw.PNG, 2474-transposed-select-no-sparse.PNG, 
 2474-transposed-select.PNG, cql_tests.py, raw_composite.txt, 
 screenshot-1.jpg, screenshot-2.jpg


 For the most part, this boils down to supporting the specification of 
 compound column names (the CQL syntax is colon-delimited terms), and then 
 teaching the decoders (drivers) to create structures from the results.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3775) rpc_timeout error when reading from a cluster that just had a node die. Only happens if gossip hasn't noticed the dead node yet.

2012-01-23 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3775.
---

Resolution: Not A Problem

This is expected behavior. Cassandra only sends one data request for each 
query, so if that replica is dead-but-not-detected-dead then the request will 
time out.

 rpc_timeout error when reading from a cluster that just had a node die. Only 
 happens if gossip hasn't noticed the dead node yet.
 

 Key: CASSANDRA-3775
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3775
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.6
 Environment: ubuntu. Used ccm to create the cluster.
Reporter: Tyler Patterson

 Create a cluster of 3 nodes with RF=3 and CL=QUORUM. Insert some data, then 
 kill a node (not the coordinator) and immediately try to read the data. The 
 read request will fail within about 2 seconds. cassandra.yaml has 
 rpc_timeout=1. A failing test has been written in cassandra-dtest, branch 
 read_when_node_is_down. The test can be run like this: nosetests 
 --nocapture read_when_node_down_test.py. Here is the error from the test:
 {code}
 ==
 ERROR: read_when_node_down_test.TestReadWhenNodeDown.read_when_node_down_test
 --
 Traceback (most recent call last):
   File /usr/lib/pymodules/python2.7/nose/case.py, line 187, in runTest
 self.test(*self.arg)
   File /home/tahooie/cassandra-dtest/read_when_node_down_test.py, line 40, 
 in read_when_node_down_test
 query_c1c2(cursor, 100, CL)
   File /home/tahooie/cassandra-dtest/tools.py, line 28, in query_c1c2
 cursor.execute('SELECT c1, c2 FROM cf USING CONSISTENCY %s WHERE key=k%d' 
 % (consistency, key))
   File /usr/local/lib/python2.7/dist-packages/cql/cursor.py, line 96, in 
 execute
 raise cql.OperationalError(Request did not complete within rpc_timeout.)
 OperationalError: ('Request did not complete within rpc_timeout.', 'reading 
 failed in 2.0130 seconds.')
 {code}
 I did notice that if I sleep 20 seconds after killing the node and before 
 reading, the read succeeds. This is probably because gossip has had a 
 chance to notice that the node is down.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3688) [patch] avoid map lookups in loops by using entrysets

2012-01-19 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3688.
---

Resolution: Fixed

committed to trunk

 [patch] avoid map lookups in loops by using entrysets
 -

 Key: CASSANDRA-3688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3688
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1

 Attachments: use_entrysets.diff


 The code loops over the keySet and then does a get for each value; just use 
 entrySet() instead.
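 A minimal before/after sketch of the cleanup (a generic example, not the 
 patched Cassandra code):
 {code}
 import java.util.*;

 public class EntrySetDemo
 {
     public static void main(String[] args)
     {
         Map<String, Integer> counts = new HashMap<String, Integer>();
         counts.put("a", 1);
         counts.put("b", 2);

         // Before: one extra hash lookup per key inside the loop body.
         for (String key : counts.keySet())
             System.out.println(key + "=" + counts.get(key));

         // After: key and value come from the same entry, no extra lookup.
         for (Map.Entry<String, Integer> e : counts.entrySet())
             System.out.println(e.getKey() + "=" + e.getValue());
     }
 }
 {code}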

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3689) [path] minor cleanup of compiler warnings

2012-01-19 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3689.
---

Resolution: Fixed

 [path] minor cleanup of compiler warnings
 -

 Key: CASSANDRA-3689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3689
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1

 Attachments: warnings_AbstractType.diff


 A bunch of minor cleanups around generics use in AbstractType, some import 
 cleanups, unused fields, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2688) Support wide rows with Hadoop support

2012-01-17 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2688.
---

Resolution: Duplicate

addressed by CASSANDRA-2878

 Support wide rows with Hadoop support
 -

 Key: CASSANDRA-2688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2688
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeremy Hanna
  Labels: hadoop

 Currently the Hadoop support can only operate over the maximum row width that 
 thrift allows, afaik. A user must then page through the row within their 
 Hadoop interface -- Java, Pig, Hive. It would be much nicer to have the Hadoop 
 support page through the row internally, if possible. Seeing that one of 
 Cassandra's features is extremely wide rows, it would be nice feature parity 
 so that people didn't have to adjust their Cassandra plans based on Hadoop 
 support limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3715) Throw error when creating indexes with the same name as other CFs

2012-01-11 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3715.
---

Resolution: Duplicate
  Assignee: (was: Yuki Morishita)

Thanks for looking into that, Yuki.

 Throw error when creating indexes with the same name as other CFs
 -

 Key: CASSANDRA-3715
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3715
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.5
Reporter: Joaquin Casares

 0.8.8 throws: InvalidRequestException(why:Duplicate index name path)
 but 1.0.5 displays: null
 when running this:
 {noformat}
 create column family inode
   with column_type = 'Standard'
   and comparator = 
 'DynamicCompositeType(t=org.apache.cassandra.db.marshal.TimeUUIDType,s=org.apache.cassandra.db.marshal.UTF8Type,b=org.apache.cassandra.db.marshal.BytesType)'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and rows_cached = 0.0
   and row_cache_save_period = 0
   and row_cache_keys_to_save = 2147483647
   and keys_cached = 100.0
   and key_cache_save_period = 14400
   and read_repair_chance = 1.0
   and gc_grace = 60
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and row_cache_provider = 'ConcurrentLinkedHashCacheProvider'
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
   and comment = 'Stores file meta data'
   and column_metadata = [
 {column_name : 'b@70617468',
 validation_class : BytesType,
 index_name : 'path',
 index_type : 0
 },
 {column_name : 'b@73656e74696e656c',
 validation_class : BytesType,
 index_name : 'sentinel',
 index_type : 0
 },
 {column_name : 'b@706172656e745f70617468',
 validation_class : BytesType,
 index_name : 'parent_path',
 index_type : 0
 }];
 create column family inode_archive
   with column_type = 'Standard'
   and comparator = 
 'DynamicCompositeType(t=org.apache.cassandra.db.marshal.TimeUUIDType,s=org.apache.cassandra.db.marshal.UTF8Type,b=org.apache.cassandra.db.marshal.BytesType)'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and rows_cached = 0.0
   and row_cache_save_period = 0
   and row_cache_keys_to_save = 2147483647
   and keys_cached = 100.0
   and key_cache_save_period = 14400
   and read_repair_chance = 1.0
   and gc_grace = 60
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and row_cache_provider = 'ConcurrentLinkedHashCacheProvider'
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
   and comment = 'Stores file meta data'
   and column_metadata = [
 {column_name : 'b@70617468',
 validation_class : BytesType,
 index_name : 'path',
 index_type : 0
 },
 {column_name : 'b@73656e74696e656c',
 validation_class : BytesType,
 index_name : 'sentinel',
 index_type : 0
 },
 {column_name : 'b@706172656e745f70617468',
 validation_class : BytesType,
 index_name : 'parent_path',
 index_type : 0
 }];
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3579) AssertionError in hintedhandoff - 1.0.5

2012-01-11 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3579.
---

Resolution: Fixed

committed

 AssertionError in hintedhandoff - 1.0.5
 ---

 Key: CASSANDRA-3579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3579
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1 64 bit, 32 GB RAM, 8 GB allocated to JVM, 
 running XFS filesystem for commit/data directories
Reporter: Ramesh Natarajan
Assignee: Sylvain Lebresne
 Fix For: 1.0.7

 Attachments: 3579-fix-text.txt, 3579-v2.txt, 3579.patch


 We are running a 8 node cassandra cluster running cassandra 1.0.5.
 All our CF use leveled compaction.  We ran a test where we did a lot
 of inserts for 3 days. After that we started to run tests where some
 of the reads could ask for information that was inserted a while back.
 In this scenario we are seeing this assertion error in HintedHandoff.
 ERROR [HintedHandoff:3] 2011-12-05 15:42:04,324
 AbstractCassandraDaemon.java (line 133) Fatal exception in thread
 Thread[HintedHandoff:3,1,main]
 java.lang.RuntimeException: java.lang.RuntimeException:
 java.util.concurrent.ExecutionException: java.lang.AssertionError:
 originally calculated column size of 470937164 but now it is 470294247
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.RuntimeException:
 java.util.concurrent.ExecutionException: java.lang.AssertionError:
 originally calculated column size of 470937164 but now it is 470294247
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:330)
at 
 org.apache.cassandra.db.HintedHandOffManager.access$100(HintedHandOffManager.java:81)
at 
 org.apache.cassandra.db.HintedHandOffManager$2.runMayThrow(HintedHandOffManager.java:353)
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
... 3 more
 Caused by: java.util.concurrent.ExecutionException:
 java.lang.AssertionError: originally calculated column size of
 470937164 but now it is 470294247
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:326)
... 6 more
 Caused by: java.lang.AssertionError: originally calculated column size
 of 470937164 but now it is 470294247
at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:124)
at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:158)
at 
 org.apache.cassandra.db.compaction.CompactionManager$6.call(CompactionManager.java:275)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
 ERROR [HintedHandoff:3] 2011-12-05 15:42:04,333
 AbstractCassandraDaemon.java (line 133) Fatal exception in thread
 Thread[HintedHandoff:3,1,main]
 java.lang.RuntimeException: java.lang.RuntimeException:
 java.util.concurrent.ExecutionException: java.lang.AssertionError:
 originally calculated column size of 470937164 but now it is 470294247
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.RuntimeException:
 java.util.concurrent.ExecutionException: java.lang.AssertionError:
 originally calculated column size of 470937164 but now it is 470294247
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:330)
at 
 org.apache.cassandra.db.HintedHandOffManager.access$100(HintedHandOffManager.java:81)
at 
 org.apache.cassandra.db.HintedHandOffManager$2.runMayThrow(HintedHandOffManager.java:353)
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
... 3 more
 Caused by: java.util.concurrent.ExecutionException:
 java.lang.AssertionError: originally calculated column size of
 470937164 but now it is 470294247
at 

[jira] [Resolved] (CASSANDRA-3579) AssertionError in hintedhandoff - 1.0.5

2012-01-10 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3579.
---

   Resolution: Fixed
Fix Version/s: (was: 1.1)

Created CASSANDRA-3716 for the 1.1 followup

 AssertionError in hintedhandoff - 1.0.5
 ---

 Key: CASSANDRA-3579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3579
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1 64 bit, 32 GB RAM, 8 GB allocated to JVM, 
 running XFS filesystem for commit/data directories
Reporter: Ramesh Natarajan
Assignee: Sylvain Lebresne
 Fix For: 1.0.7

 Attachments: 3579-v2.txt, 3579.patch


 We are running a 8 node cassandra cluster running cassandra 1.0.5.
 All our CF use leveled compaction.  We ran a test where we did a lot
 of inserts for 3 days. After that we started to run tests where some
 of the reads could ask for information that was inserted a while back.
 In this scenario we are seeing this assertion error in HintedHandoff.
 ERROR [HintedHandoff:3] 2011-12-05 15:42:04,324
 AbstractCassandraDaemon.java (line 133) Fatal exception in thread
 Thread[HintedHandoff:3,1,main]
 java.lang.RuntimeException: java.lang.RuntimeException:
 java.util.concurrent.ExecutionException: java.lang.AssertionError:
 originally calculated column size of 470937164 but now it is 470294247
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.RuntimeException:
 java.util.concurrent.ExecutionException: java.lang.AssertionError:
 originally calculated column size of 470937164 but now it is 470294247
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:330)
at 
 org.apache.cassandra.db.HintedHandOffManager.access$100(HintedHandOffManager.java:81)
at 
 org.apache.cassandra.db.HintedHandOffManager$2.runMayThrow(HintedHandOffManager.java:353)
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
... 3 more
 Caused by: java.util.concurrent.ExecutionException:
 java.lang.AssertionError: originally calculated column size of
 470937164 but now it is 470294247
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:326)
... 6 more
 Caused by: java.lang.AssertionError: originally calculated column size
 of 470937164 but now it is 470294247
at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:124)
at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:158)
at 
 org.apache.cassandra.db.compaction.CompactionManager$6.call(CompactionManager.java:275)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
 ERROR [HintedHandoff:3] 2011-12-05 15:42:04,333
 AbstractCassandraDaemon.java (line 133) Fatal exception in thread
 Thread[HintedHandoff:3,1,main]
 java.lang.RuntimeException: java.lang.RuntimeException:
 java.util.concurrent.ExecutionException: java.lang.AssertionError:
 originally calculated column size of 470937164 but now it is 470294247
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.RuntimeException:
 java.util.concurrent.ExecutionException: java.lang.AssertionError:
 originally calculated column size of 470937164 but now it is 470294247
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:330)
at 
 org.apache.cassandra.db.HintedHandOffManager.access$100(HintedHandOffManager.java:81)
at 
 org.apache.cassandra.db.HintedHandOffManager$2.runMayThrow(HintedHandOffManager.java:353)
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
... 3 more
 Caused by: java.util.concurrent.ExecutionException:
 java.lang.AssertionError: originally calculated column size of
 

[jira] [Resolved] (CASSANDRA-3700) SelectStatement start/end key are not set correctly when a key alias is involved

2012-01-10 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3700.
---

Resolution: Fixed
  Assignee: Pavel Yaskevich  (was: Jonathan Ellis)

committed

 SelectStatement start/end key are not set correctly when a key alias is 
 involved
 

 Key: CASSANDRA-3700
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3700
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 0.8.1
Reporter: Jonathan Ellis
Assignee: Pavel Yaskevich
  Labels: cql
 Fix For: 1.0.7

 Attachments: 3700-case-insensitivity.txt, CASSANDRA-3700.patch


 start/end key are set by antlr in WhereClause, but this depends on the KEY 
 keyword.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3389) Evaluate CSLM alternatives for improved cache or GC performance

2012-01-10 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3389.
---

   Resolution: Fixed
Fix Version/s: (was: 1.1)

Too bad.  Thanks for checking it out.

 Evaluate CSLM alternatives for improved cache or GC performance
 ---

 Key: CASSANDRA-3389
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3389
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Brandon Williams
Priority: Minor
 Attachments: 0001-Replace-CSLM-with-ConcurrentSkipTreeMap.patch, 
 0001-Switch-CSLM-to-SnapTree.patch


 Ben Manes commented on 
 http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-performance that 
 it's worth evaluating https://github.com/mspiegel/lockfreeskiptree and 
 https://github.com/nbronson/snaptree as CSLM replacements.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3724) CompositeType doesn't check number of components when validating

2012-01-10 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3724.
---

Resolution: Not A Problem

This is a feature.  (See also CASSANDRA-3657.)

 CompositeType doesn't check number of components when validating
 

 Key: CASSANDRA-3724
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3724
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Priority: Minor

 In {{AbstractCompositeType.validate()}}, there isn't any kind of check to 
 verify that the data has the same number of components as the comparator (or 
 validator).  This means that if you say the comparator is 
 {{CompositeType(UTF8Type, UTF8Type)}}, you can use column names that only 
 have the first component (i.e., the last thing in the bytestring is the eof for 
 the first component).
 This behavior isn't explicitly stated anywhere.  Personally, I wouldn't 
 expect this to validate, but I could see an argument for why it should.  
 Either way, we need to either check the number of components or explicitly 
 state that this is expected behavior.
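 A hedged sketch of what such a check could look like. The encoding below 
 (2-byte length, component bytes, and one end-of-component byte per component) 
 is a simplified stand-in for the real composite format; the point is only that 
 a validator can count components and reject a name with fewer of them than the 
 comparator declares.
 {code}
 import java.nio.ByteBuffer;

 public class CompositeCountCheck
 {
     static int countComponents(ByteBuffer name)
     {
         ByteBuffer b = name.duplicate();     // leave the caller's buffer untouched
         int count = 0;
         while (b.remaining() > 0)
         {
             int len = b.getShort() & 0xffff; // component length
             b.position(b.position() + len);  // skip component bytes
             b.get();                         // skip end-of-component byte
             count++;
         }
         return count;
     }

     static void validate(ByteBuffer name, int expectedComponents)
     {
         if (countComponents(name) != expectedComponents)
             throw new IllegalArgumentException("wrong number of components");
     }

     public static void main(String[] args)
     {
         // one component: length 2, bytes "ab", end-of-component 0
         ByteBuffer oneComponent = ByteBuffer.wrap(new byte[]{0, 2, 'a', 'b', 0});
         validate(oneComponent, 1);           // passes
         try
         {
             validate(oneComponent, 2);       // rejected: only one component present
         }
         catch (IllegalArgumentException expected)
         {
             System.out.println(expected.getMessage());
         }
     }
 }
 {code}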

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3683) NullPointerException in getTempSSTablePath() with leveled compaction strategy

2012-01-08 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3683.
---

Resolution: Duplicate

Thanks, Kent.

 NullPointerException in getTempSSTablePath() with leveled compaction strategy
 -

 Key: CASSANDRA-3683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3683
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.5
 Environment: Linux
Reporter: Kent Tong

 When using the leveled compaction strategy, sometimes a NullPointerException 
 is raised during compaction. See the stacktrace below for the details.
 INFO [ScheduledTasks:1] 2011-12-29 08:43:52,117 GCInspector.java (line 123) 
 GC for ParNew: 1481 ms for 1 collections, 5577352760 used; max is 10171187200
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,117 StatusLogger.java (line 50) 
 Pool NameActive   Pending   Blocked
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,118 StatusLogger.java (line 65) 
 ReadStage 1 3 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,118 StatusLogger.java (line 65) 
 RequestResponseStage  0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,118 StatusLogger.java (line 65) 
 ReadRepairStage   0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,118 StatusLogger.java (line 65) 
 MutationStage 0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,119 StatusLogger.java (line 65) 
 ReplicateOnWriteStage 0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,119 StatusLogger.java (line 65) 
 GossipStage   0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,119 StatusLogger.java (line 65) 
 AntiEntropyStage  0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,120 StatusLogger.java (line 65) 
 MigrationStage0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,120 StatusLogger.java (line 65) 
 StreamStage   0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,120 StatusLogger.java (line 65) 
 MemtablePostFlusher   1 1 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,120 StatusLogger.java (line 65) 
 FlushWriter   1 1 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,121 StatusLogger.java (line 65) 
 MiscStage 0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,121 StatusLogger.java (line 65) 
 InternalResponseStage 0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,121 StatusLogger.java (line 65) 
 HintedHandoff 0 0 0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,122 StatusLogger.java (line 69) 
 CompactionManager   n/a-3
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,122 StatusLogger.java (line 81) 
 MessagingServicen/a   0,0
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,122 StatusLogger.java (line 85) 
 ColumnFamilyMemtable ops,data  Row cache size/cap  Key cache 
 size/cap
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,122 StatusLogger.java (line 88) 
 system.NodeIdInfo 0,0 0/0 
 0/1
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,122 StatusLogger.java (line 88) 
 system.IndexInfo  0,0 0/0 
 0/1
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,123 StatusLogger.java (line 88) 
 system.LocationInfo   0,0 0/0 
 2/3
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,123 StatusLogger.java (line 88) 
 system.Versions 3,103 0/0 
 0/1
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,123 StatusLogger.java (line 88) 
 system.Migrations 0,0 0/0 
 0/3
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,123 StatusLogger.java (line 88) 
 system.HintsColumnFamily  0,0 0/0 
 0/1
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,123 StatusLogger.java (line 88) 
 system.Schema 0,0 0/0 
 0/3
  INFO [ScheduledTasks:1] 2011-12-29 08:43:52,124 StatusLogger.java (line 88) 
 mcas.timebasedtoken  2198,1027880 0/0 
 700/700
  INFO [ScheduledTasks:1] 2011-12-29 08:43:53,565 GCInspector.java (line 123) 
 GC for ParNew: 372 ms for 1 collections, 5582229656 used; max is 10171187200
  INFO 

[jira] [Resolved] (CASSANDRA-3685) CQL support for non-utf8 column names

2012-01-08 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3685.
---

   Resolution: Not A Problem
Fix Version/s: (was: 1.2)

You're right: if we restrict non-wide-rows to utf8 columns, we don't need this 
at all (given CASSANDRA-2474).

 CQL support for non-utf8 column names
 -

 Key: CASSANDRA-3685
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3685
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
  Labels: cql

 Eric Evans' suggestions from the mailing list:
 {code}
 CREATE TABLE test (
   int(10) text,
   uuid(92d21d0a-d6cb-437c-9d3f-b67aa733a19f) bigint
 )
 {code}
 {code}
 CREATE TABLE test (
   (int)10 text,
   (uuid)92d21d0a-d6cb-437c-9d3f-b67aa733a19f bigint
 )
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3701) 1.1 cannot bootstrap into 1.0.6 cluster

2012-01-05 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3701.
---

Resolution: Won't Fix

Streaming (including bootstrap but also repair, decommission, and token 
changes) is not usually supported across multiple versions.  You'd want to 
bootstrap a 1.0.6, then upgrade it to 1.1 instead.

 1.1 cannot bootstrap into 1.0.6 cluster
 ---

 Key: CASSANDRA-3701
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3701
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.6, 1.1
Reporter: Sam Overton
  Labels: bootstrap, migration

 I appreciate that 1.1 is still unreleased, but I wanted to flag this up early 
 in case nobody had tested it.
 Tested with current 1.0.6 branch (0f2121e31032df105ca2846380b98a563d1b2c8a) 
 and current trunk (38c04fef0a431bf29010074bad1d35d87a739c02) from git.
 Steps to reproduce:
 * start a cass 1.0.6 instance with initial_token = 0 and partitioner = RP
 * insert some data (with rf=2)
 {noformat}
 create keyspace ks with 
 placement_strategy='org.apache.cassandra.locator.SimpleStrategy' and 
 strategy_options = {replication_factor:2};
 use ks;
 create column family cf with column_type='Standard';
 set cf[ascii('foo')][ascii('bar')] = ascii('baz');
 set cf[ascii('bar')][ascii('baz')] = ascii('quux');
 {noformat}
 * start a cass 1.1 instance with auto_bootstrap: true, half the token ring 
 and the first host as a seed
 * 1.1 node logs this error:
 {noformat}
 ERROR [ReadStage:1] 2012-01-05 16:12:19,855 AbstractCassandraDaemon.java 
 (line 137) Fatal exception in thread Thread[ReadStage:1,5,main]
 java.lang.NullPointerException
 at 
 org.apache.cassandra.config.CFMetaData.fromAvro(CFMetaData.java:395)
 at 
 org.apache.cassandra.db.migration.AddColumnFamily.subinflate(AddColumnFamily.java:104)
 at 
 org.apache.cassandra.db.migration.Migration.deserialize(Migration.java:292)
 at 
 org.apache.cassandra.db.DefinitionsUpdateVerbHandler.doVerb(DefinitionsUpdateVerbHandler.java:57)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 {noformat}
 * apply patches from CASSANDRA-1391 to the 1.1 node, since this is scheduled 
 for 1.1 and removes the to/fromAvro stuff
 * bootstrap again, this time 1.0.6 node logs these errors:
 {noformat}
 ERROR [GossipStage:1] 2012-01-05 16:53:59,076 AbstractCassandraDaemon.java 
 (line 138) Fatal exception in thread Thread[GossipStage:1,5,main]
 java.lang.UnsupportedOperationException: Not a time-based UUID
 at java.util.UUID.timestamp(UUID.java:331)
 at 
 org.apache.cassandra.service.MigrationManager.updateHighestKnown(MigrationManager.java:121)
 at 
 org.apache.cassandra.service.MigrationManager.rectify(MigrationManager.java:99)
 at 
 org.apache.cassandra.service.MigrationManager.onJoin(MigrationManager.java:64)
 at 
 org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:857)
 at 
 org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:908)
 at 
 org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:68)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 ERROR [ReadStage:3] 2012-01-05 16:54:00,053 AbstractCassandraDaemon.java 
 (line 138) Fatal exception in thread Thread[ReadStage:3,5,main]
 java.lang.RuntimeException: java.io.EOFException
 at 
 org.apache.cassandra.service.IndexScanVerbHandler.doVerb(IndexScanVerbHandler.java:51)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 Caused by: java.io.EOFException
 at java.io.DataInputStream.readInt(DataInputStream.java:392)
 at 
 org.apache.cassandra.utils.FBUtilities.deserialize(FBUtilities.java:404)
 at 
 org.apache.cassandra.db.IndexScanCommand$IndexScanCommandSerializer.deserialize(IndexScanCommand.java:102)
 at 
 

[jira] [Resolved] (CASSANDRA-3696) Adding another datacenter's node results in 0 rows returned on first datacenter

2012-01-05 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3696.
---

   Resolution: Fixed
Fix Version/s: 1.0.7
   0.8.10

committed

 Adding another datacenter's node results in 0 rows returned on first 
 datacenter
 ---

 Key: CASSANDRA-3696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3696
 Project: Cassandra
  Issue Type: Bug
Reporter: Joaquin Casares
Assignee: Jonathan Ellis
 Fix For: 0.8.10, 1.0.7

 Attachments: 3696.txt


 On Cassandra-1.0.5:
 1. Create a node in C* with a fresh installation and create a keyspace on 
 that node with one column family -
 CREATE KEYSPACE test 
 WITH placement_strategy = 'SimpleStrategy' 
 and strategy_options={replication_factor:1};
 use test; 
 create column family cf1;
 2. Insert values into cf1 -
 set cf1[ascii('k')][ascii('c')] = ascii('v');
 get cf1[ascii('k')]; 
 => (column=63, value=v, timestamp=1325689630397000) 
 Returned 1 results.
 3. update the strategy options from simple to networktopology with 
 {Cassandra:1, Backup:1} 
 4. read from cf1 to make sure the options change doesn't affect anything -
 consistencylevel as LOCAL_QUORUM; 
 get cf1[ascii('k')]; 
 => (column=63, value=v, timestamp=1325689630397000) 
 Returned 1 results.
 5. start a second node in the Backup datacenter 
 6. read from cf1 again (on the first node) -
 consistencylevel as LOCAL_QUORUM; 
 get cf1[ascii('k')]; 
 Returned 0 results.
 After about 60 seconds, get cf1[ascii('k')] started to return results 
 again. 
 Also, when running at a CL of ONE on 1.0's head, we were able to see issues 
 as well.
 But if more than one node is added to the second datacenter and then the 
 replication_strategy is changed, it seems okay.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3681) Multiple threads can attempt hint handoff to the same target

2012-01-05 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3681.
---

Resolution: Fixed

committed

 Multiple threads can attempt hint handoff to the same target
 

 Key: CASSANDRA-3681
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3681
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
  Labels: hintedhandoff
 Fix For: 1.0.7

 Attachments: 3681-v3.txt, 3681.txt, 3681v2.txt


 HintedHandOffManager attempts to prevent multiple threads sending hints to 
 the same target with the queuedDeliveries set, but the code is buggy.  If two 
 handoffs *do* occur concurrently, the second thread can use an arbitrarily 
 large amount of memory skipping tombstones when it starts paging from the 
 beginning of the hint row, looking for the first live hint.  (This is not a 
 problem with a single thread, since it always pages starting with the 
 last-seen hint column name, effectively skipping the tombstones.  Then it 
 compacts when it's done.)
 Technically this bug is present in all older Cassandra releases, but it only 
 causes problems in 1.0.x since the hint rows tend to be much larger (since 
 there is one hint per write containing the entire mutation, instead of just 
 one per row consisting of just the key).
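
 A minimal sketch of the single-deliverer-per-target idea, with illustrative names only (this is not the actual HintedHandOffManager code): a thread must atomically claim the endpoint before it starts paging through that endpoint's hint row, and must release the claim when it finishes.

 {code}
 import java.net.InetAddress;
 import java.util.Collections;
 import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;

 // Illustrative only: claim the target atomically before delivering, so at most
 // one thread ever pages through a given endpoint's hint row at a time.
 public class SingleDelivererPerTarget
 {
     // add() on a concurrent set is atomic, so only one thread can claim a target.
     private final Set<InetAddress> queuedDeliveries =
         Collections.newSetFromMap(new ConcurrentHashMap<InetAddress, Boolean>());

     public void deliverHintsTo(InetAddress target)
     {
         if (!queuedDeliveries.add(target))
             return; // another thread is already delivering to this endpoint

         try
         {
             // page through the hint row for `target` and send the hints here
         }
         finally
         {
             queuedDeliveries.remove(target); // always release the claim
         }
     }
 }
 {code}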

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3497) BloomFilter FP ratio should be configurable or size-restricted some other way

2012-01-05 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3497.
---

Resolution: Fixed

bq. Patch attached so that cli show schema or describe commands show 
bloom_filter_fp_chance if set.

committed

 BloomFilter FP ratio should be configurable or size-restricted some other way
 -

 Key: CASSANDRA-3497
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3497
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Brandon Williams
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.0.7

 Attachments: 0001-Add-bloom_filter_fp_chance-to-cli.patch, 
 0001-give-default-val-to-fp_chance.patch, 3497-v3.txt, 3497-v4.txt, 
 CASSANDRA-1.0-3497.txt


 When you have a live dc and purely analytical dc, in many situations you can 
 have less nodes on the analytical side, but end up getting restricted by 
 having the BloomFilters in-memory, even though you have absolutely no use for 
 them.  It would be nice if you could reduce this memory requirement by tuning 
 the desired FP ratio, or even just disabling them altogether.
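
 For context on why this saves memory, the standard Bloom filter sizing relation (generic math, not Cassandra's implementation) shows that the bits required per key shrink as the tolerated false-positive chance grows:

 {code}
 // Generic Bloom filter sizing math, not Cassandra code: an optimally sized filter
 // needs roughly -ln(p) / (ln 2)^2 bits per key, so relaxing the false-positive
 // chance p directly shrinks the in-memory filter.
 public class BloomSizingSketch
 {
     static double bitsPerKey(double falsePositiveChance)
     {
         return -Math.log(falsePositiveChance) / (Math.log(2) * Math.log(2));
     }

     public static void main(String[] args)
     {
         for (double p : new double[] { 0.01, 0.1, 0.5 })
             System.out.printf("fp chance %.2f -> %.1f bits per key%n", p, bitsPerKey(p));
     }
 }
 {code}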

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3704) CQL support for aggregate functions like group by and order by

2012-01-05 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3704.
---

Resolution: Won't Fix

Neither of these is in scope for CQL.  (If you want to do analytics with 
Cassandra you should look at running Hive on top via Hadoop.)

 CQL support for aggregate functions like group by and order by
 

 Key: CASSANDRA-3704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3704
 Project: Cassandra
  Issue Type: New Feature
Reporter: nish gowda

 Currently facing this issue:
 cqlsh:testspace> SELECT * from abc ORDER BY key ASC;
 Bad Request: line 1:31 mismatched input 'ORDER' expecting EOF

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3663) Change syntax of cqlsh for creating column families to be more descriptive for comparator and default_validation

2012-01-05 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3663.
---

Resolution: Won't Fix

comparator and default_validation are both going to be obsolete and deprecated 
with CASSANDRA-2474 done, so it's not worth bikeshedding what they are called 
until then.

 Change syntax of cqlsh for creating column families to be more descriptive 
 for comparator and default_validation
 

 Key: CASSANDRA-3663
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3663
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Donald Smith
Priority: Minor

 According to 
 http://crlog.info/2011/09/17/cassandra-query-language-cql-v2-0-reference/#Column+Family+Options+%28optional%29
 the syntax for creating column families in cqlsh uses keywords
 comparator  and default_validation.
 Better, more descriptive names for these would be
 column_name_comparator  and column_value_validation
 or perhaps better yet
 column_key_comparator  and column_value_validation.
 Two other people on the cassandra users' mailing list agreed with this 
 suggestion.
 The existing syntax is unclear and confusing to beginners.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2827) Thrift error

2012-01-04 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2827.
---

Resolution: Invalid

Either you're using a Hector connection in a thread-unsafe way, or there is a 
Hector bug.
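
As a sketch of the thread-safety point, assuming the raw generated Thrift client (the host, port, and the ThreadLocal wrapper are placeholders, not code from this ticket): each thread should use its own transport and client rather than sharing one instance, since interleaved writes on a shared connection can corrupt the protocol stream and produce bogus lengths like the one above.

{code}
import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

// Illustrative sketch only: a raw Thrift client is not thread-safe, so give each
// thread its own transport + client instead of sharing one instance across threads.
public class PerThreadClient
{
    private static final ThreadLocal<Cassandra.Client> CLIENT = new ThreadLocal<Cassandra.Client>()
    {
        @Override
        protected Cassandra.Client initialValue()
        {
            try
            {
                TTransport transport = new TFramedTransport(new TSocket("127.0.0.1", 9160));
                transport.open();
                return new Cassandra.Client(new TBinaryProtocol(transport));
            }
            catch (TTransportException e)
            {
                throw new RuntimeException(e);
            }
        }
    };

    public static Cassandra.Client get()
    {
        return CLIENT.get(); // one connection per thread
    }
}
{code}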

 Thrift error
 

 Key: CASSANDRA-2827
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2827
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.4
 Environment: 2 nodes with 0.7.4 on linux
Reporter: Olivier Smadja

 This exception occurred on a non-seed node.
 ERROR [pool-1-thread-9] 2011-06-25 17:41:37,723 CustomTThreadPoolServer.java 
 (line 218) Thrift error occurred during processing of message.
 org.apache.thrift.TException: Negative length: -2147418111
   at 
 org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:388)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)
   at 
 org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:15964)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3023)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2555)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
   at java.lang.Thread.run(Thread.java:619)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3531) Fix crack-smoking in ConsistencyLevelTest

2012-01-03 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3531.
---

Resolution: Fixed
  Reviewer: jbellis
  Assignee: Sylvain Lebresne

You're right: while it's reasonable to unit-test assureSufficientLiveNodes, the 
right place to do that is just with IWriteResponseHandler objects instead of 
mocking up a ring.  This test also pre-dated NTS and the different IWRH for 
that, so it's pretty fragile.

Went ahead and deleted it per your suggestion.

 Fix crack-smoking in ConsistencyLevelTest 
 --

 Key: CASSANDRA-3531
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3531
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.0.4
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.0.7


 First, let's note that this test fails in the current 1.0 branch. It was "broken" 
 (emphasis on the quotes) by CASSANDRA-3529. But it's not CASSANDRA-3529's 
 fault; it's only that the use of NonBlockingHashMap changed the order of the 
 tables returned by Schema.instance.getNonSystemTables(). *And*, it turns out 
 that ConsistencyLevelTest bails out as soon as it has found one keyspace with 
 rf = 2, due to a misplaced return. So it used to be that ConsistencyLevelTest 
 was only run for Keyspace5 (whose RF is 2), for which the test works. But for 
 any RF > 2, the test fails.
 The reason for this failure is that the test creates a 3-node cluster in which 
 only 1 node is alive as far as the failure detector is concerned. So for RF=3 
 and CL=QUORUM, writes are unavailable (the failure detector is queried), 
 while for reads we pretend two nodes are alive, so we end up with a case 
 where isWriteUnavailable != isReadUnavailable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3524) add support for creating and altering composite CFs to CQL + cqlsh

2011-12-30 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3524.
---

   Resolution: Duplicate
Fix Version/s: (was: 1.1)
 Assignee: (was: paul cannon)

Let's leave this to CASSANDRA-2474

 add support for creating and altering composite CFs to CQL + cqlsh
 --

 Key: CASSANDRA-3524
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3524
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Priority: Minor
  Labels: cql
   Original Estimate: 72h
  Remaining Estimate: 72h

 I believe we want this in CREATE, ASSUME, and ALTER.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3686) Streaming retry is no longer performed

2011-12-30 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3686.
---

   Resolution: Fixed
Fix Version/s: 1.0.7
 Reviewer: jbellis

committed

 Streaming retry is no longer performed
 --

 Key: CASSANDRA-3686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.6
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
  Labels: stream
 Fix For: 1.0.7

 Attachments: 0001-Fix-error-handling-for-streaming-retry.patch


 CASSANDRA-3532 changed exception handling when processing incoming streams, 
 but since it wraps all exceptions into RuntimeException, the streaming retry that 
 used to occur when an IOException was thrown no longer works.
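
 A small sketch of that failure mode, with invented names rather than the actual streaming code: once the IOException is wrapped, a handler that catches IOException never fires, so no retry is scheduled.

 {code}
 import java.io.IOException;

 // Illustrative only: wrapping a checked IOException in a RuntimeException means a
 // retry handler that catches IOException never runs.
 public class RetrySketch
 {
     static void receiveFile(boolean wrapExceptions) throws IOException
     {
         IOException failure = new IOException("connection reset");
         if (wrapExceptions)
             throw new RuntimeException(failure); // wrapping that defeats the retry path
         throw failure;                           // original behaviour: checked IOException propagates
     }

     static void receiveWithRetry(boolean wrapExceptions)
     {
         try
         {
             receiveFile(wrapExceptions);
         }
         catch (IOException e)
         {
             System.out.println("IOException seen, scheduling streaming retry");
         }
         catch (RuntimeException e)
         {
             System.out.println("RuntimeException seen, no retry: " + e.getCause());
         }
     }

     public static void main(String[] args)
     {
         receiveWithRetry(false); // retries
         receiveWithRetry(true);  // does not retry
     }
 }
 {code}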

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3497) BloomFilter FP ratio should be configurable or size-restricted some other way

2011-12-27 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3497.
---

Resolution: Fixed

committed

 BloomFilter FP ratio should be configurable or size-restricted some other way
 -

 Key: CASSANDRA-3497
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3497
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Brandon Williams
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.0.7

 Attachments: 0001-give-default-val-to-fp_chance.patch, 3497-v3.txt, 
 3497-v4.txt, CASSANDRA-1.0-3497.txt


 When you have a live dc and purely analytical dc, in many situations you can 
 have less nodes on the analytical side, but end up getting restricted by 
 having the BloomFilters in-memory, even though you have absolutely no use for 
 them.  It would be nice if you could reduce this memory requirement by tuning 
 the desired FP ratio, or even just disabling them altogether.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3664) [patch] fix some obvious javadoc issues generated via ant javadoc

2011-12-25 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3664.
---

   Resolution: Fixed
Fix Version/s: 1.1
 Reviewer: jbellis
 Assignee: Dave Brosius

committed, thanks!

 [patch] fix some obvious javadoc issues generated via ant javadoc
 -

 Key: CASSANDRA-3664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3664
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.6
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1

 Attachments: jd.diff, jd2.diff




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3660) Change syntax of cli for creating column families to be more intuitive

2011-12-22 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3660.
---

Resolution: Won't Fix

The cli is kept around for backwards compatibility at this point; cqlsh is the 
future.

 Change syntax of cli for creating column families to be more intuitive
 --

 Key: CASSANDRA-3660
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3660
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Donald Smith
Priority: Minor

 Currently, the syntax for creating column families is like this:
 create column family Users
 with comparator=UTF8Type
 and default_validation_class=UTF8Type
 and key_validation_class=UTF8Type;
 It's not clear what comparator and default_validation_class refer to. 
 Much clearer would be:
 create column family Users
 with column_name_comparator=UTF8Type
 and column_value_validation_class=UTF8Type
 and key_validation_class=UTF8Type;
 BTW, instead of column_name_comparator, I'd actually prefer 
 column_key_comparator since it seems more accurate to call column names 
 column keys.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3653) cqlsh can't select from super CF

2011-12-20 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3653.
---

Resolution: Not A Problem

CQL does not support supercolumns, and won't until at least CASSANDRA-2474.  
(Which is really targeted at composites, so I say "at least".)

 cqlsh can't select from super CF
 ---

 Key: CASSANDRA-3653
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3653
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.0.6
 Environment: Cassandra 1.0.6
Reporter: Radim Kolar
Priority: Minor

 Selecting from a super CF returns an error:
 cqlsh:Keyspace1> select * from Keyspace1.Super1 limit 10;
 Internal application error
 cqlsh:Keyspace1> describe columnfamily Super1;
 CREATE COLUMNFAMILY Super1 (
   KEY text PRIMARY KEY,
   crc32 bigint,
   id bigint,
   name ascii,
   size bigint
 ) WITH
   comment='' AND
   comparator=ascii AND
   row_cache_provider='ConcurrentLinkedHashCacheProvider' AND
   key_cache_size=20.00 AND
   row_cache_size=600.00 AND
   read_repair_chance=1.00 AND
   gc_grace_seconds=864000 AND
   default_validation=blob AND
   min_compaction_threshold=5 AND
   max_compaction_threshold=10 AND
   row_cache_save_period_in_seconds=0 AND
   key_cache_save_period_in_seconds=14400 AND
   replicate_on_write=False;

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3645) Can't delete row with cqlsh via row key

2011-12-19 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3645.
---

Resolution: Duplicate

You're right, this is normal tombstone behavior.  (See CASSANDRA-2569 for an 
earlier example.)

 Can't delete row with cqlsh via row key
 ---

 Key: CASSANDRA-3645
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3645
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.6
Reporter: Oleksandr Shyshko
  Labels: cql, cqlsh, delete

 This is probably not a bug, but standard tombstone/deletion behavior.
 Maybe it would be nice to have a built-in filter for tombstones, so they 
 won't appear in queries.
 Reproduce by:
 ==
 cqlsh> CREATE KEYSPACE ss WITH strategy_class = 'SimpleStrategy' AND 
 strategy_options:replication_factor = 1;
 cqlsh> use ss;
 cqlsh:ss> create columnfamily users (name text primary key, pass text);
 cqlsh:ss> select * from users;
 cqlsh:ss> insert into users (name, pass) values ('john', 'secret');
 cqlsh:ss> select * from users;
  name |   pass |
  john | secret |
 cqlsh:ss> delete from users where name = 'john';
 cqlsh:ss> select * from users;
  name |
  john |
 cqlsh:ss>
 ==
 Desired behavior:
 ==
 cqlsh:ss> delete from users where name = 'john';
 cqlsh:ss> select * from users;
 cqlsh:ss>
 ==

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3637) data file size limit

2011-12-15 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3637.
---

Resolution: Not A Problem

LeveledCompactionStrategy addresses this. 
http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra

 data file size limit
 

 Key: CASSANDRA-3637
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3637
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Zenek Kraweznik

 For a 100GB Cassandra database (on a 500GB disk) I need another 100GB of space for 
 compacting (caused by large files; one of the data files is 80GB).
 Limiting the file size, for example to 5GB (the limit should be configurable), would need 
 significantly less space for that operation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-1537) Add option (on CF) to remove expired column on minor compactions

2011-12-15 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1537.
---

   Resolution: Won't Fix
Fix Version/s: (was: 1.1)
 Assignee: (was: Sylvain Lebresne)

This doesn't seem urgent or useful enough to justify adding more options and 
complexity to the TTL code.

 Add option (on CF) to remove expired column on minor compactions
 

 Key: CASSANDRA-1537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1537
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.1
Reporter: Sylvain Lebresne
Priority: Minor
   Original Estimate: 8h
  Remaining Estimate: 8h

 In some use cases, you can safely remove the tombstones of an expired column.
 In theory, this is true in every case where you know that you will never update 
 a column using a TTL strictly less than that of the old column.
 This will be the case, for instance, if you always use the same TTL on all the 
 columns of a CF (say you use the CF as a long-term persistent cache).
 I propose adding an option (per CF) that says 'always remove tombstones of 
 expired columns for that CF'.
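
 A minimal sketch of the check the proposed option would add, with invented names (this is not Cassandra's compaction code):

 {code}
 // Invented names, illustrative only: when could an expired column be dropped
 // outright during a minor compaction?
 public class ExpiredColumnPurgeSketch
 {
     static boolean canDrop(long localExpirationTime, long gcBefore, boolean cfAlwaysPurgesExpired)
     {
         // Default rule: keep the tombstone until gc_grace_seconds has elapsed.
         if (localExpirationTime < gcBefore)
             return true;

         // Proposed per-CF option: drop as soon as the TTL has passed. This is safe
         // when no overwrite ever uses a smaller TTL than the column it replaces.
         long nowInSeconds = System.currentTimeMillis() / 1000;
         return cfAlwaysPurgesExpired && localExpirationTime <= nowInSeconds;
     }
 }
 {code}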

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2056) Need a way of flattening schemas.

2011-12-15 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2056.
---

   Resolution: Invalid
Fix Version/s: (was: 1.1)
 Assignee: (was: Gary Dusbabek)

This is obsolete post-CASSANDRA-1391

 Need a way of flattening schemas.
 -

 Key: CASSANDRA-2056
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2056
 Project: Cassandra
  Issue Type: Improvement
Reporter: Gary Dusbabek
Priority: Minor
 Attachments: v2-0001-convert-MigrationManager-into-a-singleton.txt, 
 v2-0002-bail-on-migrations-originating-from-newer-protocol-ver.txt, 
 v2-0003-a-way-to-upgrade-schema-when-protocol-version-changes.txt


 For all of our trying not to, we still managed to screw this up.  Schema 
 updates currently contain a serialized RowMutation stored as a column value.  
 When a node needs updated schema, it requests these values, deserializes them 
 and applies them.  As the serialization scheme for RowMutation changes over 
 time (this is inevitable), those old migrations will become incompatible with 
 newer implementations of the RowMutation deserializer.  This means that when 
 new nodes come online, they'll get migration messages that they have trouble 
 deserializing.  (Remember, we've only made the promise that we'll be 
 backwards compatible for one version--see CASSANDRA-1015--even though we'd 
 eventually have this problem without that guarantee.)
 What I propose is a cluster command to flatten the schema prior to upgrading. 
  This would basically purge the old schema updates and replace them with a 
 single serialized migration (serialized in the current protocol version).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-2876) JDBC 1.1 Roadmap of Enhancements

2011-12-15 Thread Jonathan Ellis (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2876.
---

Resolution: Fixed
  Assignee: Rick Shaw

Resolving as fixed since the subtasks are, but really the JDBC driver moved 
out-of-tree anyway.

 JDBC 1.1 Roadmap of Enhancements
 

 Key: CASSANDRA-2876
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2876
 Project: Cassandra
  Issue Type: Improvement
  Components: Drivers
Affects Versions: 0.8.1
Reporter: Rick Shaw
Assignee: Rick Shaw
Priority: Minor
  Labels: cql, jdbc
 Fix For: 1.1


 Organizational ticket to tie together the proposed improvements to 
 Cassandra's JDBC driver  in order to coincide with the 1.0 release of the 
 server-side product in the fall of 2011.
 The target list of improvements (in no particular order for the moment) are 
 as follows:
 # Complete the {{PreparedStatement}} functionality by implementing true 
 server side variable binding against pre-compiled CQL references.
 # Provide simple {{DataSource}} Support.
 # Provide a full {{PooledDataSource}} implementation that integrates the C* 
 JDBC driver with App Servers, JPA implementations and POJO Frameworks (like 
 Spring).
 # Add the {{BigDecimal}} datatype to the list of {{AbstractType}} classes to 
 complete the planned datatype support for {{PreparedStatement}} and 
 {{ResultSet}}.
 # Enhance the {{Driver}} features to support automatic error recovery and 
 reconnection.
 # Support {{RowId}} in {{ResultSet}}
 # Allow bi-directional row access scrolling  to complete functionality in the 
 {{ResultSet}}.
 # Deliver unit tests for each of the major components of the suite.
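
 A hedged sketch of how items 1 and 2 above might look from an application, using the standard java.sql API; the driver class name, JDBC URL format, keyspace, and query are assumptions based on the out-of-tree cassandra-jdbc project rather than anything specified in this ticket:

 {code}
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;

 // Illustrative sketch only; driver class and URL format are assumptions.
 public class CassandraJdbcSketch
 {
     public static void main(String[] args) throws Exception
     {
         Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");          // assumed driver class
         Connection conn =
             DriverManager.getConnection("jdbc:cassandra://localhost:9160/Keyspace1"); // assumed URL format

         // Server-side variable binding against a pre-compiled CQL statement (item 1).
         PreparedStatement ps = conn.prepareStatement("SELECT name, pass FROM users WHERE KEY = ?");
         ps.setString(1, "john");
         ResultSet rs = ps.executeQuery();
         while (rs.next())
             System.out.println(rs.getString("name"));

         rs.close();
         ps.close();
         conn.close();
     }
 }
 {code}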

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



