[jira] Commented: (CASSANDRA-2006) Serverwide caps on memtable thresholds

2011-01-25 Thread David Boxenhorn (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986286#action_12986286
 ] 

David Boxenhorn commented on CASSANDRA-2006:


I guess that what my suggestion means, in practice, is that the rule "the 
memtable in the system that was using the largest fraction of its local 
threshold would be flushed" would be applied when a keyspace threshold is 
exceeded, rather than when a system threshold is exceeded.

When a server threshold is exceeded, you would first look for the keyspace that 
is using the largest fraction of its threshold, then flush the memtable in that 
keyspace that is using the largest fraction of its local threshold.
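
To make the two-level selection concrete, here is a minimal Java sketch 
(hypothetical classes and names, not Cassandra's actual code) of picking a 
flush victim once the server-wide threshold is exceeded: choose the keyspace 
using the largest fraction of its threshold, then the memtable within it using 
the largest fraction of its local threshold.

{code}
import java.util.*;

// Hypothetical model classes for illustration only; not Cassandra's API.
class MemtableStats
{
    final String cf;
    final long bytesUsed;
    final long localThresholdBytes;

    MemtableStats(String cf, long bytesUsed, long localThresholdBytes)
    {
        this.cf = cf;
        this.bytesUsed = bytesUsed;
        this.localThresholdBytes = localThresholdBytes;
    }

    double fractionUsed() { return (double) bytesUsed / localThresholdBytes; }
}

class KeyspaceStats
{
    final String name;
    final long keyspaceThresholdBytes;
    final List<MemtableStats> memtables;

    KeyspaceStats(String name, long keyspaceThresholdBytes, List<MemtableStats> memtables)
    {
        this.name = name;
        this.keyspaceThresholdBytes = keyspaceThresholdBytes;
        this.memtables = memtables;
    }

    double fractionUsed()
    {
        long total = 0;
        for (MemtableStats m : memtables)
            total += m.bytesUsed;
        return (double) total / keyspaceThresholdBytes;
    }
}

public class FlushVictimSelector
{
    // Server threshold exceeded: worst keyspace first, then its worst memtable.
    // Assumes non-empty collections.
    static MemtableStats pickVictim(Collection<KeyspaceStats> keyspaces)
    {
        KeyspaceStats worstKeyspace = null;
        for (KeyspaceStats ks : keyspaces)
            if (worstKeyspace == null || ks.fractionUsed() > worstKeyspace.fractionUsed())
                worstKeyspace = ks;

        MemtableStats worstMemtable = null;
        for (MemtableStats mt : worstKeyspace.memtables)
            if (worstMemtable == null || mt.fractionUsed() > worstMemtable.fractionUsed())
                worstMemtable = mt;
        return worstMemtable;
    }
}
{code}

The keyspace-threshold case is just the inner loop run over that keyspace's own 
memtables.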

 Serverwide caps on memtable thresholds
 --

 Key: CASSANDRA-2006
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2006
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Stu Hood
 Fix For: 0.8


 By storing global operation and throughput thresholds, we could eliminate the 
 many small memtables problem caused by having many CFs. The global 
 threshold would be set in the config file, to allow different classes of 
 servers to have different values configured.
 Operations occurring in the memtable would add to the global counters, in 
 addition to the memtable-local counters. When a global threshold was 
 violated, the memtable in the system that was using the largest fraction of 
 its local threshold would be flushed. Local thresholds would continue to act 
 as they always have.
 The result would be larger sstables, safer operation with multiple CFs, and 
 per-node tuning.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2047) Stress --keep-going should become --keep-trying

2011-01-25 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986310#action_12986310
 ] 

Pavel Yaskevich commented on CASSANDRA-2047:


Do you mean that stress should retry each failed read/write request till it 
succeeds (possibly infinitely)?

 Stress --keep-going should become --keep-trying
 ---

 Key: CASSANDRA-2047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2047
 Project: Cassandra
  Issue Type: Improvement
  Components: Contrib
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 0.7.1


 The --keep-going flag makes the stress tool drop messages that time out on 
 the floor.
 I think it's more realistic (esp for a stress tool) to keep trying till this 
 read/write succeeds.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (CASSANDRA-2047) Stress --keep-going should become --keep-trying

2011-01-25 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986312#action_12986312
 ] 

Pavel Yaskevich edited comment on CASSANDRA-2047 at 1/25/11 5:12 AM:
-

The idea behind keep-going is that errors are skipped so the tool can keep 
performing operations and collecting data. I think that keep-trying should be 
made a separate option, not a replacement for keep-going...

  was (Author: xedin):
The idea behind --keep-going is that  errors will be skipped to continue 
to do operations to collect data. I think that --keep-trying should be made 
as a separate option, not the replacement for keep-going...
  
 Stress --keep-going should become --keep-trying
 ---

 Key: CASSANDRA-2047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2047
 Project: Cassandra
  Issue Type: Improvement
  Components: Contrib
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 0.7.1


 The --keep-going flag makes the stress tool drop messages that time out on 
 the floor.
 I think it's more realistic (esp for a stress tool) to keep trying till this 
 read/write succeeds.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (CASSANDRA-1951) offline local nodes

2011-01-25 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-1951:
---

Assignee: Sylvain Lebresne

 offline local nodes
 ---

 Key: CASSANDRA-1951
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1951
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Gary Dusbabek
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 0.8


 We'd like the ability to take a node offline (gossip, thrift, etc), but 
 without bringing down cassandra.  The main reason is so that compactions can 
 be performed completely off-line.
 CASSANDRA-1108 gets us most of the way there, but not all the way.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1951) offline local nodes

2011-01-25 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-1951:


Attachment: 0001-Allow-to-start-and-stop-the-thrift-server-through-JM.patch

The attached patch does the missing part, that is, it allows stopping and 
restarting the thrift server (or avro, though I've tested the latter less 
extensively) from JMX. In addition, it allows not starting the thrift server at 
boot time through -Dcassandra.start_rpc=false (in which case it can be started 
through JMX).

Implementation note: the thrift server (the connection-accepting code) was 
running as the main thread (serverEngine.server() was blocking), which didn't 
make this easily doable, so the patch changes this and spawns a thread for said 
connection-accepting code (Avro was already doing this in a separate thread).
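
As a rough illustration of the threading change in the implementation note (a 
hedged sketch with made-up names, not the attached patch), the blocking serve 
loop moves onto its own thread so it can be started and stopped independently, 
e.g. from a JMX operation:

{code}
// Illustrative pattern only, not the attached patch: wrap the blocking serve()
// loop in its own thread so an MBean (or any other caller) can start and stop
// the RPC server without touching the rest of the daemon.
public class RpcServerController
{
    private final Runnable blockingServeLoop; // e.g. a call that blocks until stopped
    private final Runnable stopServeLoop;     // e.g. a call that makes serve() return
    private Thread serverThread;

    public RpcServerController(Runnable blockingServeLoop, Runnable stopServeLoop)
    {
        this.blockingServeLoop = blockingServeLoop;
        this.stopServeLoop = stopServeLoop;
    }

    public synchronized void startRpcServer()
    {
        if (serverThread != null)
            return; // already running
        serverThread = new Thread(blockingServeLoop, "RPC-SERVER");
        serverThread.setDaemon(true);
        serverThread.start();
    }

    public synchronized void stopRpcServer()
    {
        if (serverThread == null)
            return; // not running
        stopServeLoop.run(); // causes the serve loop to exit
        serverThread = null;
    }
}
{code}

With that in place, honouring a flag such as -Dcassandra.start_rpc=false at 
boot just means skipping the initial startRpcServer() call and leaving it to 
JMX.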

 offline local nodes
 ---

 Key: CASSANDRA-1951
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1951
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Gary Dusbabek
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-Allow-to-start-and-stop-the-thrift-server-through-JM.patch


 We'd like the ability to take a node offline (gossip, thrift, etc), but 
 without bringing down cassandra.  The main reason is so that compactions can 
 be performed completely off-line.
 CASSANDRA-1108 gets us most of the way there, but not all the way.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2047) Stress --keep-going should become --keep-trying

2011-01-25 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986386#action_12986386
 ] 

T Jake Luciani commented on CASSANDRA-2047:
---

yeah retry, that's fine if you make it a separate option.

 Stress --keep-going should become --keep-trying
 ---

 Key: CASSANDRA-2047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2047
 Project: Cassandra
  Issue Type: Improvement
  Components: Contrib
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 0.7.1


 The --keep-going flag makes the stress tool drop messages that time out on 
 the floor.
 I think it's more realistic (esp for a stress tool) to keep trying till this 
 read/write succeeds.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1919) Add shutdownhook to flush commitlog

2011-01-25 Thread Gary Dusbabek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986390#action_12986390
 ] 

Gary Dusbabek commented on CASSANDRA-1919:
--

+1. 

I'm beginning to think something was introduced in the SSL patch that altered 
the behavior of the sockets.  I've seen odd socket errors twice in the last few 
days while running the unit tests, I think in RemoveTest.  FWIW, I didn't see 
any errors while running the tests with this patch.

 Add shutdownhook to flush commitlog
 ---

 Key: CASSANDRA-1919
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1919
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 1919-v2.txt, 1919.txt

   Original Estimate: 4h
  Time Spent: 6h
  Remaining Estimate: 4h

 this replaces the periodic_with_flush approach from CASSANDRA-1780 / 
 CASSANDRA-1917

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1954) Double-check or replace RRW memtable lock

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986392#action_12986392
 ] 

Jonathan Ellis commented on CASSANDRA-1954:
---

The benefit is that we can have multiple writers acquire the readlock (yes, 
that's confusing :), but they will all be blocked on flush while the writelock 
is acquired.
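
To make the locking concrete, here is a minimal sketch (illustrative only, not 
Cassandra's code) of many writers sharing the read lock while a single thread 
freezes the memtable under the write lock. It expresses the double check with a 
compare-and-set guard, a simplification of the read-lock re-check proposed in 
the description below.

{code}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only, not Cassandra's code. Many writers share the read lock;
// the guard ensures that when the threshold is crossed, only one of them goes
// on to take the write lock and freeze the memtable, instead of all N racing.
public class DoubleCheckedFlushTrigger
{
    private final ReentrantReadWriteLock switchLock = new ReentrantReadWriteLock();
    private final AtomicLong liveBytes = new AtomicLong();
    private final AtomicBoolean freezeInProgress = new AtomicBoolean(false);
    private final long thresholdBytes;

    public DoubleCheckedFlushTrigger(long thresholdBytes)
    {
        this.thresholdBytes = thresholdBytes;
    }

    public void apply(int mutationSize)
    {
        switchLock.readLock().lock();
        try
        {
            liveBytes.addAndGet(mutationSize);
            // ... apply the mutation to the current memtable here ...
        }
        finally
        {
            switchLock.readLock().unlock();
        }

        // Double check: only the first thread to notice the violated threshold
        // attempts the freeze; everyone else skips the write-lock acquisition.
        if (liveBytes.get() >= thresholdBytes && freezeInProgress.compareAndSet(false, true))
            freezeAndScheduleFlush();
    }

    private void freezeAndScheduleFlush()
    {
        switchLock.writeLock().lock();
        try
        {
            // ... swap in a fresh memtable and hand the old one to a flush executor ...
            liveBytes.set(0);
        }
        finally
        {
            switchLock.writeLock().unlock();
            freezeInProgress.set(false);
        }
    }
}
{code}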

 Double-check or replace RRW memtable lock
 -

 Key: CASSANDRA-1954
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1954
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Stu Hood
Priority: Minor

 {quote}...when a Memtable reaches its threshold, up to (all) N write threads 
 will often notice, and race to acquire the writeLock in order to freeze the 
 memtable. This means that we do way more writeLock acquisitions than we need 
 to...{quote}
 See CASSANDRA-1930 for backstory, but adding double checking inside a read 
 lock before trying to re-entrantly acquire the writelock would eliminate most 
 of these excess writelock acquisitions.
 Alternatively, we should explore removing locking from these structures 
 entirely, and replacing the writeLock acquisition with a per-memtable counter 
 of active threads.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2014) Can't delete whole row from Hadoop MapReduce

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2014:
--

 Reviewer: stuhood
Fix Version/s: 0.7.1

 Can't delete whole row from Hadoop MapReduce
 

 Key: CASSANDRA-2014
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2014
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 0.7.0
 Environment: Debian Linux 2.6.32 amd64
Reporter: Patrik Modesto
 Fix For: 0.7.1

 Attachments: 2014-mr-delete-whole-row.patch


 ColumnFamilyRecordWriter.java doesn't support a Mutation with a Deletion that 
 has no slice_predicate and no super_column, which would delete a whole row. The 
 other way I tried was to specify a SlicePredicate with empty start and finish, 
 and I got:
 {code}
 java.io.IOException: InvalidRequestException(why:Deletion does not yet 
 support SliceRange predicates.)
 at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordWriter$RangeClient.run(ColumnFamilyRecordWriter.java:355)
 {code}
 I tried to patch ColumnFamilyRecordWriter.java like this:
 {code}
 --- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
 +++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordWriter.java
 @@ -166,10 +166,17 @@ implements 
 org.apache.hadoop.mapred.RecordWriter<ByteBuffer,List<org.apache.cass
  // deletion
  Deletion deletion = new Deletion(amut.deletion.timestamp);
  mutation.setDeletion(deletion);
 +
  org.apache.cassandra.avro.SlicePredicate apred = 
 amut.deletion.predicate;
 -if (amut.deletion.super_column != null)
 +if (apred == null && amut.deletion.super_column == null)
 +{
 +// empty; delete whole row
 +}
 +else if (amut.deletion.super_column != null)
 +{
  // super column
  deletion.setSuper_column(copy(amut.deletion.super_column));
 +}
  else if (apred.column_names != null)
  {
  // column names
 {code}
 but that didn't work either.
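 For reference, the input being attempted looks roughly like the following 
 (field names are taken from the diff above; treat this as an assumed sketch of 
 the 0.7-era avro objects, not verified API): a Deletion carrying neither a 
 super_column nor a predicate, meaning "delete the whole row".
 {code}
 // Assumed sketch, mirroring the fields referenced in the diff above
 // (amut.deletion.timestamp / .super_column / .predicate); not verified 0.7 API.
 org.apache.cassandra.avro.Deletion deletion = new org.apache.cassandra.avro.Deletion();
 deletion.timestamp = System.currentTimeMillis();
 deletion.super_column = null; // no super column
 deletion.predicate = null;    // no slice predicate, i.e. the whole row

 org.apache.cassandra.avro.Mutation mutation = new org.apache.cassandra.avro.Mutation();
 mutation.deletion = deletion;
 // the mutation is then emitted from the reducer via ColumnFamilyRecordWriter
 {code}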

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (CASSANDRA-2017) Replace ivy with maven-ant-tasks

2011-01-25 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-2017:
-

Assignee: Stephen Connolly

 Replace ivy with maven-ant-tasks
 ---

 Key: CASSANDRA-2017
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2017
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.1
Reporter: Stephen Connolly
Assignee: Stephen Connolly
 Fix For: 0.7.1

 Attachments: CASSANDRA-2017-initial-patch.patch, CASSANDRA-2017.patch


 Replace ivy with maven-ant-tasks.
 Three main reasons:
 1. In order to deploy cassandra to maven central, we will need to use 
 maven-ant-tasks anyway (as ivy does not generate correct poms)
 2. In order to generate gpg signatures using ivy, we need to bootstrap a 
 second ivy taskdef or use multiple get tasks to download bouncycastle. 
 Maven-ant-tasks does not require this.
 3. Allows consolidating the dependency information in one place.  Rather than 
 having duplication with the maven-ant-tasks for deploy to central

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2047) Stress --keep-going should become --keep-trying

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986394#action_12986394
 ] 

Jonathan Ellis commented on CASSANDRA-2047:
---

wait, why would we want two options?  for the purpose of just hammering w/ 
inserts it shouldn't matter if we retry the same one or skip to the next, but 
if we want to do reads later it makes a lot more sense to retry.  In other 
words, I can't think of a reason we'd want keep-going instead of keep-trying.

 Stress --keep-going should become --keep-trying
 ---

 Key: CASSANDRA-2047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2047
 Project: Cassandra
  Issue Type: Improvement
  Components: Contrib
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 0.7.1


 The --keep-going flag makes the stress tool drop messages that time out on 
 the floor.
 I think it's more realistic (esp for a stress tool) to keep trying till this 
 read/write succeeds.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1951) offline local nodes

2011-01-25 Thread Gary Dusbabek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Dusbabek updated CASSANDRA-1951:
-

Reviewer: gdusbabek

 offline local nodes
 ---

 Key: CASSANDRA-1951
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1951
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Gary Dusbabek
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-Allow-to-start-and-stop-the-thrift-server-through-JM.patch

  Time Spent: 2h
  Remaining Estimate: 0h

 We'd like the ability to take a node offline (gossip, thrift, etc), but 
 without bringing down cassandra.  The main reason is so that compactions can 
 be performed completely off-line.
 CASSANDRA-1108 gets us most of the way there, but not all the way.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-982) read repair on quorum consistency level

2011-01-25 Thread ivan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986407#action_12986407
 ] 

ivan commented on CASSANDRA-982:


rep_fix_02.patch (for dae2b22bbd0937d8cd361ce1eed9e633a55ce979)

Changes:

- Message::removeHeader
message.setHeader(RowMutation.FORWARD_HEADER, null) throws NullPointerException

- db/RowMutationVerbHandler::forwardToLocalNodes
set the correct destination address for sendOneWay

- response(ReadResponse result) added to DatacenterReadCallback
otherwise ReadCallback will process local results and the condition will never 
be signaled in DatacenterReadCallback

- FORWARD header removed in StorageProxy::sendMessages if dataCenter equals 
localDataCenter
  (if a non-local DC is processed before the local DC, the FORWARD header will 
be set when unhintedMessage is used in sendToHintedEndpoints; one instance of 
Message is used for unhintedMessage)


Read/write endpoint list separation is not included in this patch.

bq. On the read side we always do reads from the closest/fastest replicas as 
determined by the snitch, and we don't want to change that.

If I'm right, the snitch determines the order of addresses, not the list of 
endpoint addresses.
I think the result of get(Live)NaturalEndpoints and calculateNaturalEndpoints 
needs to depend on the type of command. If a read command is being processed, 
these methods should return a list of addresses just from the local DC; then 
the snitch will sort those addresses.
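
As a hedged illustration of the first change above (hypothetical names, not 
the actual Message class): setHeader(key, null) blows up because the backing 
map rejects null values, so an explicit removeHeader drops the key instead.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not the real Message class: shows why
// setHeader(FORWARD_HEADER, null) can throw NullPointerException and why an
// explicit removeHeader is the safe way to clear the FORWARD header.
public class MessageHeaders
{
    private final Map<String, byte[]> headers = new ConcurrentHashMap<String, byte[]>();

    public void setHeader(String key, byte[] value)
    {
        headers.put(key, value); // NPE when value is null: ConcurrentHashMap forbids null values
    }

    public void removeHeader(String key)
    {
        headers.remove(key); // simply drops the entry, no NPE
    }
}
{code}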


 read repair on quorum consistency level
 --

 Key: CASSANDRA-982
 URL: https://issues.apache.org/jira/browse/CASSANDRA-982
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-better-digest-checking-for-ReadResponseResolver.patch, 
 0001-r-m-SP.weakRead-rename-strongRead-to-fetchRows.-read-r.txt, 
 0002-implement-read-repair-as-a-second-resolve-after-the-in.txt, 
 0002-quorum-only-read.txt, 
 0003-rename-QuorumResponseHandler-ReadCallback.txt, 
 982-resolve-digests-v2.txt, rep_fix_01.patch

   Original Estimate: 6h
  Remaining Estimate: 6h

 CASSANDRA-930 made read repair fuzzy optional, but this only helps with 
 ConsistencyLevel.ONE:
 - Quorum reads always send requests to all nodes
 - only the first Quorum's worth of responses get compared
 So we'd like to make two changes:
 - only send read requests to the closest R live nodes
 - if read repair is enabled, also compare results from the other nodes in the 
 background

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-982) read repair on quorum consistency level

2011-01-25 Thread ivan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ivan updated CASSANDRA-982:
---

Attachment: rep_fix_02.patch

for dae2b22bbd0937d8cd361ce1eed9e633a55ce979

 read repair on quorum consistency level
 --

 Key: CASSANDRA-982
 URL: https://issues.apache.org/jira/browse/CASSANDRA-982
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-better-digest-checking-for-ReadResponseResolver.patch, 
 0001-r-m-SP.weakRead-rename-strongRead-to-fetchRows.-read-r.txt, 
 0002-implement-read-repair-as-a-second-resolve-after-the-in.txt, 
 0002-quorum-only-read.txt, 
 0003-rename-QuorumResponseHandler-ReadCallback.txt, 
 982-resolve-digests-v2.txt, rep_fix_01.patch, rep_fix_02.patch

   Original Estimate: 6h
  Remaining Estimate: 6h

 CASSANDRA-930 made read repair fuzzy optional, but this only helps with 
 ConsistencyLevel.ONE:
 - Quorum reads always send requests to all nodes
 - only the first Quorum's worth of responses get compared
 So we'd like to make two changes:
 - only send read requests to the closest R live nodes
 - if read repair is enabled, also compare results from the other nodes in the 
 background

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1379) Uncached row reads may block cached reads

2011-01-25 Thread Chris Burroughs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986411#action_12986411
 ] 

Chris Burroughs commented on CASSANDRA-1379:


Is there any existing mechanism to detect and measure if this is occurring?

 Uncached row reads may block cached reads
 -

 Key: CASSANDRA-1379
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1379
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: David King
Assignee: Javier Canillas
Priority: Minor
 Fix For: 0.7.2

 Attachments: CASSANDRA-1379.patch


 The cap on the number of concurrent reads appears to cap the *total* number 
 of concurrent reads instead of just capping the reads that are bound for 
 disk. That is, given N concurrent readers if all of them are busy waiting on 
 disk, even reads that can be served from the row cache will block waiting for 
 them.
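
 A toy model of the problem (plain Java, not Cassandra code): one bounded 
 executor plays the role of the read stage and serves both row-cache hits and 
 disk-bound reads, so once all N workers are stuck on disk, even a cache hit 
 has to wait in the queue.
 {code}
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;

 // Toy model of the issue, not Cassandra code: a single bounded "read stage"
 // shared by cache hits and disk reads.
 public class SharedReadStageDemo
 {
     public static void main(String[] args)
     {
         int concurrentReads = 2; // the cap under discussion
         ExecutorService readStage = Executors.newFixedThreadPool(concurrentReads);

         // Slow, disk-bound reads occupy every worker...
         for (int i = 0; i < concurrentReads; i++)
         {
             readStage.submit(new Runnable()
             {
                 public void run() { slowDiskRead(); }
             });
         }

         // ...so this cache hit, which needs no I/O at all, still queues behind them.
         readStage.submit(new Runnable()
         {
             public void run() { System.out.println("row cache hit finally served"); }
         });

         readStage.shutdown();
     }

     private static void slowDiskRead()
     {
         try
         {
             Thread.sleep(2000); // pretend to wait on disk
         }
         catch (InterruptedException e)
         {
             Thread.currentThread().interrupt();
         }
     }
 }
 {code}
 As for measuring it, the closest existing signal is probably the read stage's 
 pending/active counts (e.g. via nodetool tpstats), though that shows queueing 
 in general rather than cache hits specifically.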

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2017) Replace ivy with maven-ant-tasks

2011-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986412#action_12986412
 ] 

Hudson commented on CASSANDRA-2017:
---

Integrated in Cassandra-0.7 #204 (See 
[https://hudson.apache.org/hudson/job/Cassandra-0.7/204/])
Switch from ivy to maven-ant-tasks to ease maven central builds.

Patch by Stephen Connolly reviewed by eevans and tjake for CASSANDRA-2017


 Replace ivy with maven-ant-tasks
 ---

 Key: CASSANDRA-2017
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2017
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.1
Reporter: Stephen Connolly
Assignee: Stephen Connolly
 Fix For: 0.7.1

 Attachments: CASSANDRA-2017-initial-patch.patch, CASSANDRA-2017.patch


 Replace ivy with maven-ant-tasks.
 Three main reasons:
 1. In order to deploy cassandra to maven central, we will need to use 
 maven-ant-tasks anyway (as ivy does not generate correct poms)
 2. In order to generate gpg signatures using ivy, we need to bootstrap a 
 second ivy taskdef or use multiple get tasks to download bouncycastle. 
 Maven-ant-tasks does not require this.
 3. Allows consolidating the dependency information in one place.  Rather than 
 having duplication with the maven-ant-tasks for deploy to central

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-2048) cli options should match yaml directives

2011-01-25 Thread Brandon Williams (JIRA)
cli options should match yaml directives


 Key: CASSANDRA-2048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2048
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Brandon Williams
Assignee: Pavel Yaskevich
 Fix For: 0.8


Many options in the cli don't match their yaml counterparts (for example, 
placement_strategy vs replica_placement_strategy.)  This confuses a lot of 
people.  Though I hate to break the cli between releases, I think it's worth it 
in this case as I've seen (and felt) much pain due to these differences.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1971) Let Ivy manage all dependencies and create POM file

2011-01-25 Thread Folke Behrens (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Folke Behrens resolved CASSANDRA-1971.
--

   Resolution: Won't Fix
Fix Version/s: (was: 0.8)
 Reviewer:   (was: urandom)

 Let Ivy manage all dependencies and create POM file
 ---

 Key: CASSANDRA-1971
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1971
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Affects Versions: 0.8
Reporter: Folke Behrens
 Attachments: v0.7-ivy-all-dep-v2.patch.txt


 Attached patch changes the ivy configuration to manage all dependencies. The 
 patch is not complete and still very experimental.
 * ivy.xml
 *# Different configurations defined.
 *# All JARs from /lib/ as dependencies.
 *# libthrift gets fake org/module for special handling.
 * ivysettings.xml
 *# New resolver for dependencies inside the project.
 *# Module filter for libthrift to use this resolver.
 * build.xml
 *# New target: ivy-makepom creates a POM file in /build/ next to the .jar and 
 -sources.jar files.
 *# New target: ivy-retrieve-libs copies dependencies back into /lib/. (For 
 IDE users without an Ivy plugin.)
 Now all JARs except libthrift should be removed from /lib/.
 Thoughts?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-982) read repair on quorum consistency level

2011-01-25 Thread ivan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ivan updated CASSANDRA-982:
---

Comment: was deleted

(was: for dae2b22bbd0937d8cd361ce1eed9e633a55ce979)

 read repair on quorum consistency level
 --

 Key: CASSANDRA-982
 URL: https://issues.apache.org/jira/browse/CASSANDRA-982
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-better-digest-checking-for-ReadResponseResolver.patch, 
 0001-r-m-SP.weakRead-rename-strongRead-to-fetchRows.-read-r.txt, 
 0002-implement-read-repair-as-a-second-resolve-after-the-in.txt, 
 0002-quorum-only-read.txt, 
 0003-rename-QuorumResponseHandler-ReadCallback.txt, 
 982-resolve-digests-v2.txt, rep_fix_01.patch, rep_fix_02.patch

   Original Estimate: 6h
  Remaining Estimate: 6h

 CASSANDRA-930 made read repair fuzzy optional, but this only helps with 
 ConsistencyLevel.ONE:
 - Quorum reads always send requests to all nodes
 - only the first Quorum's worth of responses get compared
 So we'd like to make two changes:
 - only send read requests to the closest R live nodes
 - if read repair is enabled, also compare results from the other nodes in the 
 background

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1919) Add shutdownhook to flush commitlog

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986423#action_12986423
 ] 

Jonathan Ellis commented on CASSANDRA-1919:
---

bq. I'm beginning to think something was introduced in the SSL patch that 
altered the behavior of the sockets

Now I'm getting CliTest failures w/o this patch, too.  I think you might be on 
to something.

 Add shutdownhook to flush commitlog
 ---

 Key: CASSANDRA-1919
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1919
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 1919-v2.txt, 1919.txt

   Original Estimate: 4h
  Time Spent: 6h
  Remaining Estimate: 4h

 this replaces the periodic_with_flush approach from CASSANDRA-1780 / 
 CASSANDRA-1917

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2048) cli options should match yaml directives

2011-01-25 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986427#action_12986427
 ] 

T Jake Luciani commented on CASSANDRA-2048:
---

wouldn't it be less pain to update the yaml to reflect the cli?  People should 
be using this now anyway.

 cli options should match yaml directives
 

 Key: CASSANDRA-2048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2048
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Brandon Williams
Assignee: Pavel Yaskevich
 Fix For: 0.8


 Many options in the cli don't match their yaml counterparts (for example, 
 placement_strategy vs replica_placement_strategy.)  This confuses a lot of 
 people.  Though I hate to break the cli between releases, I think it's worth 
 it in this case as I've seen (and felt) much pain due to these differences.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1919) Add shutdownhook to flush commitlog

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986428#action_12986428
 ] 

Jonathan Ellis commented on CASSANDRA-1919:
---

hmm, this isn't quite right, drain is similar to what we want but not the same 
(flushing every CF could take a while).

 Add shutdownhook to flush commitlog
 ---

 Key: CASSANDRA-1919
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1919
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 1919-v2.txt, 1919.txt

   Original Estimate: 4h
  Time Spent: 6h
  Remaining Estimate: 4h

 this replaces the periodic_with_flush approach from CASSANDRA-1780 / 
 CASSANDRA-1917

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-2049) On the CLI, creating or updating a keyspace to use the NetworkTopologyStrategy breaks show keyspaces;

2011-01-25 Thread Jeremy Hanna (JIRA)
On the CLI, creating or updating a keyspace to use the NetworkTopologyStrategy 
breaks show keyspaces;
---

 Key: CASSANDRA-2049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2049
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Jeremy Hanna


To reproduce:
- Start fresh.
- Run show keyspaces;
- Run create keyspace Keyspace1 with 
placement_strategy='org.apache.cassandra.locator.NetworkTopologyStrategy';
- Run show keyspaces;

Note how before it showed the system keyspace.  After it shows just:
Keyspace: Keyspace1:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
null

If you have multiple keyspaces, it will hide those as well.  Also, if you 
create the keyspace and then update it with NetworkTopologyStrategy, the same 
thing will happen.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2048) cli options should match yaml directives

2011-01-25 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986430#action_12986430
 ] 

Brandon Williams commented on CASSANDRA-2048:
-

Now that I think about it, you won't be able to define keyspaces in the yaml in 
0.8, and we don't want to break existing configs in 0.7, so maybe we should 
just wait until the cli becomes the One True Way.

 cli options should match yaml directives
 

 Key: CASSANDRA-2048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2048
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Brandon Williams
Assignee: Pavel Yaskevich
 Fix For: 0.8


 Many options in the cli don't match their yaml counterparts (for example, 
 placement_strategy vs replica_placement_strategy.)  This confuses a lot of 
 people.  Though I hate to break the cli between releases, I think it's worth 
 it in this case as I've seen (and felt) much pain due to these differences.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1951) offline local nodes

2011-01-25 Thread Gary Dusbabek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986434#action_12986434
 ] 

Gary Dusbabek commented on CASSANDRA-1951:
--

As far as starting, stopping and restarting the thrift server, the code looks 
good.  However, during testing I noticed that stray RowMutations make their way 
from other nodes (about one per minute in my case) and get applied on the node 
after I have already stopped thrift and gossip.  This makes me think something 
is incorrect in our is-this-node-online code.

I'm fine with pushing this off to 0.7.2 if there are other priorities that need 
to be focused on.

 offline local nodes
 ---

 Key: CASSANDRA-1951
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1951
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Gary Dusbabek
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-Allow-to-start-and-stop-the-thrift-server-through-JM.patch

  Time Spent: 2h
  Remaining Estimate: 0h

 We'd like the ability to take a node offline (gossip, thrift, etc), but 
 without bringing down cassandra.  The main reason is so that compactions can 
 be performed completely off-line.
 CASSANDRA-1108 gets us most of the way there, but not all the way.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2025) generalized way of expressing hierarchical values

2011-01-25 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986448#action_12986448
 ] 

Eric Evans commented on CASSANDRA-2025:
---

bq. I think float literals in property names is just bad form.

Float literals in property names would be, yes. :) But a float 
comparator/validator, and hence column names/values specified in CQL 
statements using float literals, seems entirely possible.

Assuming the same syntax used to describe structures here (dot-delimited) were 
used where the members were column names and/or values (compound column names 
was the example that came to my mind), then it would need to be something that 
could be unambiguously parsed.



 generalized way of expressing hierarchical values
 -

 Key: CASSANDRA-2025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2025
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor
 Fix For: 0.8

   Original Estimate: 0h
  Remaining Estimate: 0h

 While hashing out {{CREATE KEYSPACE}}, it became obvious that we needed a 
 syntax for expressing hierarchical values.  Properties like 
 {{replication_factor}} can be expressed simply using keyword arguments like 
 ({{replication_factor = 3}}), but {{strategy_options}} is a map of strings.
 The solution I took in CASSANDRA-1709 was to dot-delimit map name and 
 option key, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options.DC1 = 1 ...
 {code}
 This led me to wonder if this was a general enough approach for any future 
 cases that might come up.  One example might be compound/composite column 
 names.  Dot-delimiting is a bad choice here since it rules out ever 
 introducing a float literal.
 One suggestion would be to colon-delimit, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options:DC1 = 1 ...
 {code}
 Or in the case of composite column names:
 {code:style=SQL}
 SELECT columnA:columnB,column1:column2 FROM Standard2 USING 
 CONSISTENCY.QUORUM WHERE KEY = key;
 UPDATE Standard2 SET columnA:columnB = valueC, column1:column2 = value3 WHERE 
 KEY = key;
 {code}
 As an aside, this also led me to the conclusion that {{CONSISTENCY.LEVEL}} 
 is probably a bad choice for consistency level specification.  It mirrors the 
 underlying enum for no good reason and should probably be changed to 
 {{CONSISTENCY LEVEL}} (i.e. omitting the separator).  For example:
 {code:style=SQL}
 SELECT column FROM Standard2 USING CONSISTENCY QUORUM WHERE KEY = key;
 {code}
 Thoughts?
 *Edit: improved final example*
 *Edit: restore final example, create new one (gah).*

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1919) Add shutdownhook to flush commitlog

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1919:
--

Attachment: 1919-v3.txt

v3 just shuts down mutation stage + commitlog in the hook
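
For readers unfamiliar with the mechanism, the hook itself is the standard JVM 
facility; a minimal sketch with placeholder methods (illustrative, not the 
attached patch):

{code}
// Standard JVM shutdown-hook shape (illustrative, not the attached patch):
// on normal exit or SIGTERM, stop taking mutations, then close the commit log.
public class ShutdownHookSketch
{
    public static void main(String[] args)
    {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable()
        {
            public void run()
            {
                stopMutationStage(); // placeholder: stop/drain in-flight writes
                shutdownCommitLog(); // placeholder: sync and close the log
            }
        }, "Cassandra-shutdown"));

        System.out.println("running; the hook fires on normal JVM exit or SIGTERM");
    }

    private static void stopMutationStage() { /* illustration only */ }
    private static void shutdownCommitLog() { /* illustration only */ }
}
{code}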

 Add shutdownhook to flush commitlog
 ---

 Key: CASSANDRA-1919
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1919
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 1919-v2.txt, 1919-v3.txt, 1919.txt

   Original Estimate: 4h
  Time Spent: 6h
  Remaining Estimate: 4h

 this replaces the periodic_with_flush approach from CASSANDRA-1780 / 
 CASSANDRA-1917

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1900) Make removetoken force always work

2011-01-25 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-1900:


Attachment: 1900-v4.txt

v4 initializes replicatingNodes to avoid NPEs.

 Make removetoken force always work
 --

 Key: CASSANDRA-1900
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1900
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 0.7.0 rc 2, 0.7.0 rc 3
Reporter: Nick Bailey
Assignee: Brandon Williams
 Fix For: 0.7.1

 Attachments: 1900-v2.txt, 1900-v3.txt, 1900-v4.txt, 1900.txt


 The 'removetoken force' command was intended for forcing a removal to 
 complete when the removal stalls for some reason. The most likely reason 
 being a streaming failure causing the node to wait forever for streams to 
 complete.
 The command should be updated so that it can force a removal in any case. For 
 example a node that was decommissioned but killed before a LEFT status was 
 broadcasted. This leaves the node in a permanent 'leaving' state.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2025) generalized way of expressing hierarchical values

2011-01-25 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986467#action_12986467
 ] 

Eric Evans commented on CASSANDRA-2025:
---

Examples that have come up in other discussions:

{{/}} as delimiter (2 different people, independently fwiw):

{code:style=SQL}
-- keyword arguments
CREATE KEYSPACE keyspace WITH ... AND strategy_options/DC1 = 1 ...

-- compound columns
SELECT columnA/columnB/column1/column2 FROM Standard2 USING CONSISTENCY.QUORUM 
WHERE KEY = key;
UPDATE Standard2 SET columnA/columnB = valueC, column1/column2 = value3 WHERE 
KEY = key;
{code}

Sticking with dots and using single quotes to escape problematic literals:

{code:style=SQL}
-- keyword arguments
CREATE KEYSPACE keyspace WITH ... AND strategy_options.DC1 = 1 ...

-- compound columns
SELECT '10.5'.string.9.5 FROM Standard2 USING CONSISTENCY.QUORUM WHERE KEY 
= key;
UPDATE Standard2 SET '1.2'.1 = valueC, 14.9 = value3 WHERE KEY = key;
{code}

I'm least fond of this last one, though; it strikes me as something that you'd 
need to do after painting yourself into a corner (which is what I'm trying to 
avoid here :)).

It also seems the least readable, and the most prone to confusion. For example, 
the 14.9 above: is that a float with the value fourteen point nine, or two 
integer columns, 14 and 9? Do you have to escape every column name that appears 
on the left of an assignment?

 generalized way of expressing hierarchical values
 -

 Key: CASSANDRA-2025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2025
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor
 Fix For: 0.8

   Original Estimate: 0h
  Remaining Estimate: 0h

 While hashing out {{CREATE KEYSPACE}}, it became obvious that we needed a 
 syntax for expressing hierarchical values.  Properties like 
 {{replication_factor}} can be expressed simply using keyword arguments like 
 ({{replication_factor = 3}}), but {{strategy_options}} is a map of strings.
 The solution I took in CASSANDRA-1709 was to dot-delimit map name and 
 option key, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options.DC1 = 1 ...
 {code}
 This led me to wonder if this was a general enough approach for any future 
 cases that might come up.  One example might be compound/composite column 
 names.  Dot-delimiting is a bad choice here since it rules out ever 
 introducing a float literal.
 One suggestion would be to colon-delimit, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options:DC1 = 1 ...
 {code}
 Or in the case of composite column names:
 {code:style=SQL}
 SELECT columnA:columnB,column1:column2 FROM Standard2 USING 
 CONSISTENCY.QUORUM WHERE KEY = key;
 UPDATE Standard2 SET columnA:columnB = valueC, column1:column2 = value3 WHERE 
 KEY = key;
 {code}
 As an aside, this also led me to the conclusion that {{CONSISTENCY.LEVEL}} 
 is probably a bad choice for consistency level specification.  It mirrors the 
 underlying enum for no good reason and should probably be changed to 
 {{CONSISTENCY LEVEL}} (i.e. omitting the separator).  For example:
 {code:style=SQL}
 SELECT column FROM Standard2 USING CONSISTENCY QUORUM WHERE KEY = key;
 {code}
 Thoughts?
 *Edit: improved final example*
 *Edit: restore final example, create new one (gah).*

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2048) cli options should match yaml directives

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986468#action_12986468
 ] 

Jonathan Ellis commented on CASSANDRA-2048:
---

Agreed that moving this stuff out of the yaml is the right way to go.

 cli options should match yaml directives
 

 Key: CASSANDRA-2048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2048
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Brandon Williams
Assignee: Pavel Yaskevich
 Fix For: 0.8


 Many options in the cli don't match their yaml counterparts (for example, 
 placement_strategy vs replica_placement_strategy.)  This confuses a lot of 
 people.  Though I hate to break the cli between releases, I think it's worth 
 it in this case as I've seen (and felt) much pain due to these differences.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-2042) Attribute Names Differ Between cassandra.yaml And cassandra-cli

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2042.
---

Resolution: Duplicate

more discussion on CASSANDRA-2048

 Attribute Names Differ Between cassandra.yaml And cassandra-cli
 ---

 Key: CASSANDRA-2042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2042
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 0.7.0
 Environment: Unix
Reporter: Jake Eakle
Priority: Minor

 One annoying example is that the cli recognizes gc_grace whereas the yaml 
 expects gc_grace_seconds. Other users in irc complained that other 
 discrepancies exist as well, and agreed that they can be headache-causing.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2025) generalized way of expressing hierarchical values

2011-01-25 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986474#action_12986474
 ] 

Eric Evans commented on CASSANDRA-2025:
---

FTR, I think I'm good with either colon {{:}} or slash {{/}}; both seem 
future-proof, and I'm not sure one is more or less readable to my eyes.

 generalized way of expressing hierarchical values
 -

 Key: CASSANDRA-2025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2025
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor
 Fix For: 0.8

   Original Estimate: 0h
  Remaining Estimate: 0h

 While hashing out {{CREATE KEYSPACE}}, it became obvious that we needed a 
 syntax for expressing hierarchical values.  Properties like 
 {{replication_factor}} can be expressed simply using keyword arguments like 
 ({{replication_factor = 3}}), but {{strategy_options}} is a map of strings.
 The solution I took in CASSANDRA-1709 was to dot-delimit map name and 
 option key, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options.DC1 = 1 ...
 {code}
 This led me to wonder if this was a general enough approach for any future 
 cases that might come up.  One example might be compound/composite column 
 names.  Dot-delimiting is a bad choice here since it rules out ever 
 introducing a float literal.
 One suggestion would be to colon-delimit, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options:DC1 = 1 ...
 {code}
 Or in the case of composite column names:
 {code:style=SQL}
 SELECT columnA:columnB,column1:column2 FROM Standard2 USING 
 CONSISTENCY.QUORUM WHERE KEY = key;
 UPDATE Standard2 SET columnA:columnB = valueC, column1:column2 = value3 WHERE 
 KEY = key;
 {code}
 As an aside, this also led me to the conclusion that {{CONSISTENCY.LEVEL}} 
 is probably a bad choice for consistency level specification.  It mirrors the 
 underlying enum for no good reason and should probably be changed to 
 {{CONSISTENCY LEVEL}} (i.e. omitting the separator).  For example:
 {code:style=SQL}
 SELECT column FROM Standard2 USING CONSISTENCY QUORUM WHERE KEY = key;
 {code}
 Thoughts?
 *Edit: improved final example*
 *Edit: restore final example, create new one (gah).*

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2048) cli options should match yaml directives

2011-01-25 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986476#action_12986476
 ] 

Pavel Yaskevich commented on CASSANDRA-2048:


So what am I supposed to do - wait?

 cli options should match yaml directives
 

 Key: CASSANDRA-2048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2048
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Brandon Williams
Assignee: Pavel Yaskevich
 Fix For: 0.8


 Many options in the cli don't match their yaml counterparts (for example, 
 placement_strategy vs replica_placement_strategy.)  This confuses a lot of 
 people.  Though I hate to break the cli between releases, I think it's worth 
 it in this case as I've seen (and felt) much pain due to these differences.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1900) Make removetoken force always work

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986477#action_12986477
 ] 

Jonathan Ellis commented on CASSANDRA-1900:
---

+1

 Make removetoken force always work
 --

 Key: CASSANDRA-1900
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1900
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 0.7.0 rc 2, 0.7.0 rc 3
Reporter: Nick Bailey
Assignee: Brandon Williams
 Fix For: 0.7.1

 Attachments: 1900-v2.txt, 1900-v3.txt, 1900-v4.txt, 1900.txt


 The 'removetoken force' command was intended for forcing a removal to 
 complete when the removal stalls for some reason. The most likely reason 
 being a streaming failure causing the node to wait forever for streams to 
 complete.
 The command should be updated so that it can force a removal in any case. For 
 example a node that was decommissioned but killed before a LEFT status was 
 broadcasted. This leaves the node in a permanent 'leaving' state.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1900) Make removetoken force always work

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986479#action_12986479
 ] 

Jonathan Ellis commented on CASSANDRA-1900:
---

(it would still be good to comment handleStateRemoving + forceRemoveCompletion)

 Make removetoken force always work
 --

 Key: CASSANDRA-1900
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1900
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 0.7.0 rc 2, 0.7.0 rc 3
Reporter: Nick Bailey
Assignee: Brandon Williams
 Fix For: 0.7.1

 Attachments: 1900-v2.txt, 1900-v3.txt, 1900-v4.txt, 1900.txt


 The 'removetoken force' command was intended for forcing a removal to 
 complete when the removal stalls for some reason. The most likely reason 
 being a streaming failure causing the node to wait forever for streams to 
 complete.
 The command should be updated so that it can force a removal in any case. For 
 example a node that was decommissioned but killed before a LEFT status was 
 broadcasted. This leaves the node in a permanent 'leaving' state.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (CASSANDRA-1108) ability to forcibly mark machines failed

2011-01-25 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reopened CASSANDRA-1108:
-


Turns out this isn't good enough.  We've shut down the Gossiper's timer on node 
A, but node B will call gossipToUnreachableEndpoints, choose A, and A will 
still reply.

 ability to forcibly mark machines failed
 

 Key: CASSANDRA-1108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1108
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.7.1

 Attachments: 1108.txt

   Original Estimate: 8h
  Remaining Estimate: 8h

 For when a node is failing but not yet so badly that it can't participate in 
 gossip (e.g. hard disk failing but not dead yet) we should give operators the 
 power to forcibly mark a node as dead.
 I think we'd need to add an extra flag in gossip to say this deadness is 
 operator-imposed or the next heartbeat will flip it back to live.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1951) offline local nodes

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1951:
--

Fix Version/s: (was: 0.7.1)
   0.7.2

 offline local nodes
 ---

 Key: CASSANDRA-1951
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1951
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Gary Dusbabek
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 0.7.2

 Attachments: 
 0001-Allow-to-start-and-stop-the-thrift-server-through-JM.patch

  Time Spent: 2h
  Remaining Estimate: 0h

 We'd like the ability to take a node offline (gossip, thrift, etc), but 
 without bringing down cassandra.  The main reason is so that compactions can 
 be performed completely off-line.
 CASSANDRA-1108 gets us most of the way there, but not all the way.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2025) generalized way of expressing hierarchical values

2011-01-25 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986493#action_12986493
 ] 

T Jake Luciani commented on CASSANDRA-2025:
---

{{.}} feels the best.

One option would be to escape them in the case of a float:

{{10\.5.string.foo}}

 generalized way of expressing hierarchical values
 -

 Key: CASSANDRA-2025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2025
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor
 Fix For: 0.8

   Original Estimate: 0h
  Remaining Estimate: 0h

 While hashing out {{CREATE KEYSPACE}}, it became obvious that we needed a 
 syntax for expressing hierarchical values.  Properties like 
 {{replication_factor}} can be expressed simply using keyword arguments like 
 ({{replication_factor = 3}}), but {{strategy_options}} is a map of strings.
 The solution I took in CASSANDRA-1709 was to dot-delimit map name and 
 option key, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options.DC1 = 1 ...
 {code}
 This led me to wonder if this was a general enough approach for any future 
 cases that might come up.  One example might be compound/composite column 
 names.  Dot-delimiting is a bad choice here since it rules out ever 
 introducing a float literal.
 One suggestion would be to colon-delimit, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options:DC1 = 1 ...
 {code}
 Or in the case of composite column names:
 {code:style=SQL}
 SELECT columnA:columnB,column1:column2 FROM Standard2 USING 
 CONSISTENCY.QUORUM WHERE KEY = key;
 UPDATE Standard2 SET columnA:columnB = valueC, column1:column2 = value3 WHERE 
 KEY = key;
 {code}
 As an aside, this also led me to the conclusion that {{CONSISTENCY.LEVEL}} 
 is probably a bad choice for consistency level specification.  It mirrors the 
 underlying enum for no good reason and should probably be changed to 
 {{CONSISTENCY LEVEL}} (i.e. omitting the separator).  For example:
 {code:style=SQL}
 SELECT column FROM Standard2 USING CONSISTENCY QUORUM WHERE KEY = key;
 {code}
 Thoughts?
 *Edit: improved final example*
 *Edit: restore final example, create new one (gah).*

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-2050) AbstractDaemon unnecessarily uses jetty inteface

2011-01-25 Thread Nate McCall (JIRA)
AbstractDaemon unnecessarily uses jetty inteface


 Key: CASSANDRA-2050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2050
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.1, 0.7.2, 0.8
Reporter: Nate McCall
Assignee: Nate McCall


AbstractDaemon's CleaningThreadPool need not implement this jetty interface. 
Removing this would allow us to remove jetty dependency altogether. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2050) AbstractDaemon unnecessarily uses jetty interface

2011-01-25 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-2050:
---

Summary: AbstractDaemon unnecessarily uses jetty interface  (was: 
AbstractDaemon unnecessarily uses jetty inteface)

 AbstractDaemon unnecessarily uses jetty interface
 -

 Key: CASSANDRA-2050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2050
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.1, 0.7.2, 0.8
Reporter: Nate McCall
Assignee: Nate McCall

 AbstractDaemon's CleaningThreadPool need not implement this jetty interface. 
 Removing this would allow us to remove jetty dependency altogether. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2047) Stress --keep-going should become --keep-trying

2011-01-25 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986499#action_12986499
 ] 

Pavel Yaskevich commented on CASSANDRA-2047:


I think the number of retries should be configurable, with a default of 
something like 10; what do you think? On an insert/read failure, should we 
retry the current operation X times and then continue with the next one, or 
should we exit and report to the user after those X retries have failed?

 Stress --keep-going should become --keep-trying
 ---

 Key: CASSANDRA-2047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2047
 Project: Cassandra
  Issue Type: Improvement
  Components: Contrib
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 0.7.1


 The --keep-going flag makes the stress tool drop messages that time out on 
 the floor.
 I think it's more realistic (esp for a stress tool) to keep trying till this 
 read/write succeeds.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2050) AbstractDaemon unnecessarily uses jetty interface

2011-01-25 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-2050:
---

Attachment: 2050.txt

Remove jetty import and interface impl on AbstractDaemon.

I think this was only needed for avro plumbing. 

 AbstractDaemon unnecessarily uses jetty interface
 -

 Key: CASSANDRA-2050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2050
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.1, 0.7.2, 0.8
Reporter: Nate McCall
Assignee: Nate McCall
 Attachments: 2050.txt


 AbstractDaemon's CleaningThreadPool need not implement this jetty interface. 
 Removing this would allow us to remove jetty dependency altogether. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[Cassandra Wiki] Update of ScribeToCassandra by mck

2011-01-25 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The ScribeToCassandra page has been changed by mck.
http://wiki.apache.org/cassandra/ScribeToCassandra?action=diffrev1=3rev2=4

--

  }
  }}}
  
+ If you are running an embedded cassandra you can directly use the 
StorageProxy for faster performance, 
+ {{{
+ #!java
+ //replace lines11-13 with
+ List<RowMutation> mutations = new ArrayList<RowMutation>();
+ 
+ // replace lines 20-27 with
+ RowMutation change = new RowMutation(keyspaceName, 
UUIDSerializer.get().toByteBuffer(uuid));
+ ColumnPath cp = new ColumnPath(cfName).setColumn(C_NAME.getBytes());
+ change.add(new QueryPath(cp), ByteBuffer.wrap(payloadBytes), 
HFactory.createClock(), THRIFT_TTL);
+ mutations.add(change);
+ 
+ // replace line 37 with
+ StorageProxy.mutate(mutations, ConsistencyLevel.ONE);
+ }}}
+ 


[jira] Updated: (CASSANDRA-2050) AbstractDaemon unnecessarily uses jetty interface

2011-01-25 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-2050:
---

Priority: Minor  (was: Major)

 AbstractDaemon unnecessarily uses jetty interface
 -

 Key: CASSANDRA-2050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2050
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.1, 0.7.2, 0.8
Reporter: Nate McCall
Assignee: Nate McCall
Priority: Minor
 Attachments: 2050.txt


 AbstractDaemon's CleaningThreadPool need not implement this jetty interface. 
 Removing this would allow us to remove jetty dependency altogether. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1063345 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java

2011-01-25 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Jan 25 16:49:31 2011
New Revision: 1063345

URL: http://svn.apache.org/viewvc?rev=1063345view=rev
Log:
Allow removetoken to be called on nodes already leaving the ring.
Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-1900

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java?rev=1063345r1=1063344r2=1063345view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
 Tue Jan 25 16:49:31 2011
@@ -169,7 +169,7 @@ public class StorageService implements I
 /* This abstraction maintains the token/endpoint metadata information */
 private TokenMetadata tokenMetadata_ = new TokenMetadata();
 
-private Set<InetAddress> replicatingNodes;
+private Set<InetAddress> replicatingNodes = new 
Collections.synchronizedSet(new HashSet<InetAddress>());
 private InetAddress removingNode;
 
 /* Are we starting this node in bootstrap mode? */
@@ -734,9 +734,10 @@ public class StorageService implements I
 }
 
 /**
- * Handle node being actively removed from the ring.
+ * Handle notification that a node being actively removed from the ring 
via 'removetoken'
  *
  * @param endpoint node
+ * @param state either REMOVED_TOKEN (node is gone) or REMOVING_TOKEN 
(replicas need to be restored)
  */
 private void handleStateRemoving(InetAddress endpoint, Token removeToken, 
String state)
 {
@@ -1676,17 +1677,28 @@ public class StorageService implements I
 
 /**
  * Force a remove operation to complete. This may be necessary if a remove 
operation
- * blocks forever due to node/stream failure.
+ * blocks forever due to node/stream failure. removeToken() must be called
+ * first, this is a last resort measure.  No further attempt will be made 
to restore replicas.
  */
 public void forceRemoveCompletion()
 {
 if (!replicatingNodes.isEmpty())
+{
logger_.warn("Removal not confirmed for for " + 
StringUtils.join(this.replicatingNodes, ","));
-replicatingNodes.clear();
+replicatingNodes.clear();
+}
+else
+{
+throw new UnsupportedOperationException("No tokens to force 
removal on, call 'removetoken' first");
+}
 }
 
 /**
- * Remove a node that has died.
+ * Remove a node that has died, attempting to restore the replica count.
+ * If the node is alive, decommission should be attempted.  If decommission
+ * fails, then removeToken should be called.  If we fail while trying to
+ * restore the replica count, finally forceRemoveCompletion should be 
+ * called to forcibly remove the node without regard to replica count.
  *
  * @param tokenString token for the node
  */
@@ -1707,14 +1719,13 @@ public class StorageService implements I
throw new UnsupportedOperationException("Node " + endpoint + " is 
alive and owns this token. Use decommission command to remove it from the 
ring");
 
 // A leaving endpoint that is dead is already being removed.
-if (tokenMetadata_.isLeaving(endpoint)) 
-throw new UnsupportedOperationException("Node " + endpoint + " is 
already being removed.");
+if (tokenMetadata_.isLeaving(endpoint))
+logger_.warn("Node " + endpoint + " is already being removed, 
continuing removal anyway");
 
 if (replicatingNodes != null)
-throw new UnsupportedOperationException("This node is already 
processing a removal. Wait for it to complete.");
+throw new UnsupportedOperationException("This node is already 
processing a removal. Wait for it to complete, or use 'removetoken force' if 
this has failed.");
 
 // Find the endpoints that are going to become responsible for data
-replicatingNodes = Collections.synchronizedSet(new 
HashSet<InetAddress>());
 for (String table : DatabaseDescriptor.getNonSystemTables())
 {
 // if the replication factor is 1 the data is lost so we shouldn't 
wait for confirmation




[Cassandra Wiki] Update of ScribeToCassandra by mck

2011-01-25 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The ScribeToCassandra page has been changed by mck.
http://wiki.apache.org/cassandra/ScribeToCassandra?action=diffrev1=4rev2=5

--

  If you are running an embedded cassandra you can directly use the 
StorageProxy for faster performance, 
  {{{
  #!java
- //replace lines11-13 with
+ // replace above lines 11-13 with
  List<RowMutation> mutations = new ArrayList<RowMutation>();
  
- // replace lines 20-27 with
+ // replace above lines 20-27 with
+ UUID uuid = 
UUID.fromString(UUIDGenerator.getInstance().generateTimeBasedUUID().toString());
  RowMutation change = new RowMutation(keyspaceName, 
UUIDSerializer.get().toByteBuffer(uuid));
- ColumnPath cp = new ColumnPath(cfName).setColumn(C_NAME.getBytes());
+ ColumnPath cp = new ColumnPath(category).setColumn(COLUMN.getBytes());
- change.add(new QueryPath(cp), ByteBuffer.wrap(payloadBytes), 
HFactory.createClock(), THRIFT_TTL);
+ change.add(new QueryPath(cp), ByteBuffer.wrap(payloadBytes), 
HFactory.createClock());
  mutations.add(change);
  
- // replace line 37 with
+ // replace above line 37 with
  StorageProxy.mutate(mutations, ConsistencyLevel.ONE);
  }}}
  


[jira] Commented: (CASSANDRA-2047) Stress --keep-going should become --keep-trying

2011-01-25 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986511#action_12986511
 ] 

T Jake Luciani commented on CASSANDRA-2047:
---

Seems like if an insert keeps timing out we have a bigger problem; the only 
trick might be to try another node, assuming -d has many hosts.

This only applies to TimeoutException or UnavailableException.
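
For illustration, a sketch of the "try another node" idea (not the stress 
tool's code; Operation is a hypothetical stand-in for a thrift read/write, and 
the generic catch stands in for the timeout/unavailable cases):

{code}
import java.util.List;

public class HostRotation
{
    interface Operation { void run(String host) throws Exception; }

    // Rotate through the hosts given with -d, moving on when an operation fails.
    public static void runWithFailover(List<String> hosts, Operation op) throws Exception
    {
        Exception last = null;
        int current = 0;
        for (int attempt = 0; attempt < hosts.size(); attempt++)
        {
            try
            {
                op.run(hosts.get(current));
                return;                                  // success
            }
            catch (Exception timeoutOrUnavailable)       // stand-in for the thrift exceptions
            {
                last = timeoutOrUnavailable;
                current = (current + 1) % hosts.size();  // try the next host
            }
        }
        if (last != null)
            throw last;                                  // every host failed
    }
}
{code}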

 Stress --keep-going should become --keep-trying
 ---

 Key: CASSANDRA-2047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2047
 Project: Cassandra
  Issue Type: Improvement
  Components: Contrib
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 0.7.1


 The --keep-going flag makes the stress tool drop messages that time out on 
 the floor.
 I think it's more realistic (esp for a stress tool) to keep trying till this 
 read/write succeeds.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2047) Stress --keep-going should become --keep-trying

2011-01-25 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986513#action_12986513
 ] 

Pavel Yaskevich commented on CASSANDRA-2047:


+1

 Stress --keep-going should become --keep-trying
 ---

 Key: CASSANDRA-2047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2047
 Project: Cassandra
  Issue Type: Improvement
  Components: Contrib
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 0.7.1


 The --keep-going flag makes the stress tool drop messages that time out on 
 the floor.
 I think it's more realistic (esp for a stress tool) to keep trying till this 
 read/write succeeds.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2051:
--

Attachment: rep_fix_02.patch

Ivan's patch

 Fixes for multi-datacenter writes
 -

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
 Fix For: 0.7.1

 Attachments: rep_fix_02.patch


 Copied from CASSANDRA-982:
 * Message::removeHeader
   message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
 NullPointerException
 * db/RowMutationVerbHandler::forwardToLocalNodes
   set correct destination address for sendOneWay
 * response(ReadResponse result) added to DatacenterReadCallback
   otherwise ReadCallback will process local results and condition will be 
 never signaled in DatacenterReadCallback
 * FORWARD header removed in StorageProxy::sendMessages if dataCenter 
 equals to localDataCenter
   (if a non local DC processed before local DC FORWARD header will be set 
 when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
 used for unhintedMessage)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread Jonathan Ellis (JIRA)
Fixes for multi-datacenter writes
-

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
 Fix For: 0.7.1
 Attachments: rep_fix_02.patch

Copied from CASSANDRA-982:

* Message::removeHeader
  message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
NullPointerException

* db/RowMutationVerbHandler::forwardToLocalNodes
  set correct destination address for sendOneWay

* response(ReadResponse result) added to DatacenterReadCallback
  otherwise ReadCallback will process local results and condition will be 
never signaled in DatacenterReadCallback

* FORWARD header removed in StorageProxy::sendMessages if dataCenter equals 
to localDataCenter
  (if a non local DC processed before local DC FORWARD header will be set 
when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
used for unhintedMessage)


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-982) read repair on quorum consistencylevel

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986516#action_12986516
 ] 

Jonathan Ellis commented on CASSANDRA-982:
--

Moved to CASSANDRA-2051 since this is not related to -982.

 read repair on quorum consistencylevel
 --

 Key: CASSANDRA-982
 URL: https://issues.apache.org/jira/browse/CASSANDRA-982
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-better-digest-checking-for-ReadResponseResolver.patch, 
 0001-r-m-SP.weakRead-rename-strongRead-to-fetchRows.-read-r.txt, 
 0002-implement-read-repair-as-a-second-resolve-after-the-in.txt, 
 0002-quorum-only-read.txt, 
 0003-rename-QuorumResponseHandler-ReadCallback.txt, 
 982-resolve-digests-v2.txt, rep_fix_01.patch, rep_fix_02.patch

   Original Estimate: 6h
  Remaining Estimate: 6h

 CASSANDRA-930 made read repair fuzzy optional, but this only helps with 
 ConsistencyLevel.ONE:
 - Quorum reads always send requests to all nodes
 - only the first Quorum's worth of responses get compared
 So what we'd like to do two changes:
 - only send read requests to the closest R live nodes
 - if read repair is enabled, also compare results from the other nodes in the 
 background

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1063355 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java

2011-01-25 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Jan 25 17:03:33 2011
New Revision: 1063355

URL: http://svn.apache.org/viewvc?rev=1063355view=rev
Log:
Fix broken build from 1900

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java?rev=1063355r1=1063354r2=1063355view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
 Tue Jan 25 17:03:33 2011
@@ -169,7 +169,7 @@ public class StorageService implements I
 /* This abstraction maintains the token/endpoint metadata information */
 private TokenMetadata tokenMetadata_ = new TokenMetadata();
 
-private Set<InetAddress> replicatingNodes = new 
Collections.synchronizedSet(new HashSet<InetAddress>());
+private Set<InetAddress> replicatingNodes = 
Collections.synchronizedSet(new HashSet<InetAddress>());
 private InetAddress removingNode;
 
 /* Are we starting this node in bootstrap mode? */




svn commit: r1063361 - in /cassandra/branches/cassandra-0.7: ./ src/java/org/apache/cassandra/db/ src/java/org/apache/cassandra/net/ src/java/org/apache/cassandra/service/

2011-01-25 Thread jbellis
Author: jbellis
Date: Tue Jan 25 17:09:43 2011
New Revision: 1063361

URL: http://svn.apache.org/viewvc?rev=1063361view=rev
Log:
fix bugs in multi-DC replication
patch by ivancso; reviewed by jbellis for CASSANDRA-2051

Modified:
cassandra/branches/cassandra-0.7/CHANGES.txt

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/RowMutationVerbHandler.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Header.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Message.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/DatacenterReadCallback.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageProxy.java

Modified: cassandra/branches/cassandra-0.7/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/CHANGES.txt?rev=1063361r1=1063360r2=1063361view=diff
==
--- cassandra/branches/cassandra-0.7/CHANGES.txt (original)
+++ cassandra/branches/cassandra-0.7/CHANGES.txt Tue Jan 25 17:09:43 2011
@@ -2,7 +2,7 @@
  * buffer network stack to avoid inefficient small TCP messages while avoiding
the nagle/delayed ack problem (CASSANDRA-1896)
  * check log4j configuration for changes every 10s (CASSANDRA-1525, 1907)
- * more-efficient cross-DC replication (CASSANDRA-1530)
+ * more-efficient cross-DC replication (CASSANDRA-1530, -2051)
  * upgrade to TFastFramedTransport (CASSANDRA-1743)
  * avoid polluting page cache with commitlog or sstable writes
and seq scan operations (CASSANDRA-1470)

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/RowMutationVerbHandler.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/RowMutationVerbHandler.java?rev=1063361r1=1063360r2=1063361view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/RowMutationVerbHandler.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/RowMutationVerbHandler.java
 Tue Jan 25 17:09:43 2011
@@ -90,7 +90,7 @@ public class RowMutationVerbHandler impl
 private void forwardToLocalNodes(Message message, byte[] forwardBytes) 
throws UnknownHostException
 {
 // remove fwds from message to avoid infinite loop
-message.setHeader(RowMutation.FORWARD_HEADER, null);
+message.removeHeader(RowMutation.FORWARD_HEADER);
 
 int bytesPerInetAddress = 
FBUtilities.getLocalAddress().getAddress().length;
assert forwardBytes.length >= bytesPerInetAddress;
@@ -110,7 +110,7 @@ public class RowMutationVerbHandler impl
 
 // Send the original message to the address specified by the 
FORWARD_HINT
 // Let the response go back to the coordinator
-MessagingService.instance().sendOneWay(message, message.getFrom());
+MessagingService.instance().sendOneWay(message, address);
 
 offset += bytesPerInetAddress;
 }

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Header.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Header.java?rev=1063361r1=1063360r2=1063361view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Header.java 
(original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Header.java 
Tue Jan 25 17:09:43 2011
@@ -97,6 +97,11 @@ public class Header
 {
 details_.put(key, value);
 }
+
+void removeDetail(String key)
+{
+details_.remove(key);
+}
 }
 
class HeaderSerializer implements ICompactSerializer<Header>

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Message.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Message.java?rev=1063361r1=1063360r2=1063361view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Message.java 
(original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/net/Message.java 
Tue Jan 25 17:09:43 2011
@@ -68,6 +68,11 @@ public class Message
 {
 header_.setDetail(key, value);
 }
+
+public void removeHeader(String key)
+{
+header_.removeDetail(key);
+}
 
 public byte[] getMessageBody()
 {

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/DatacenterReadCallback.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/DatacenterReadCallback.java?rev=1063361r1=1063360r2=1063361view=diff

[jira] Updated: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2051:
--

Attachment: 2051-2.txt

Patch 2 reorganizes the loops in SP::sendMessages to make Ivan's fix a little 
clearer (with the side benefit that we now call String.equals once per DC 
instead of once per message).
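
Roughly the loop shape being described (a sketch, not the actual patch; the 
types are simplified stand-ins for the real Message/endpoint handling):

{code}
import java.util.List;
import java.util.Map;

public class SendByDatacenterSketch
{
    public static void sendAll(Map<String, List<String>> endpointsByDc, String localDataCenter)
    {
        for (Map.Entry<String, List<String>> dc : endpointsByDc.entrySet())
        {
            // the String comparison now happens once per datacenter ...
            boolean isLocalDc = dc.getKey().equals(localDataCenter);
            for (String endpoint : dc.getValue())
                send(endpoint, isLocalDc);               // ... not once per message
        }
    }

    private static void send(String endpoint, boolean isLocalDc)
    {
        // placeholder for building and dispatching the actual message
    }
}
{code}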

 Fixes for multi-datacenter writes
 -

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
 Fix For: 0.7.1

 Attachments: 2051-2.txt, rep_fix_02.patch


 Copied from CASSANDRA-982:
 * Message::removeHeader
   message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
 NullPointerException
 * db/RowMutationVerbHandler::forwardToLocalNodes
   set correct destination address for sendOneWay
 * response(ReadResponse result) added to DatacenterReadCallback
   otherwise ReadCallback will process local results and condition will be 
 never signaled in DatacenterReadCallback
 * FORWARD header removed in StorageProxy::sendMessages if dataCenter 
 equals to localDataCenter
   (if a non local DC processed before local DC FORWARD header will be set 
 when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
 used for unhintedMessage)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986526#action_12986526
 ] 

Jonathan Ellis commented on CASSANDRA-2051:
---

Committed everything else from Ivan's patch in r1063361.

 Fixes for multi-datacenter writes
 -

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
Assignee: ivan
 Fix For: 0.7.1

 Attachments: 2051-2.txt, rep_fix_02.patch


 Copied from CASSANDRA-982:
 * Message::removeHeader
   message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
 NullPointerException
 * db/RowMutationVerbHandler::forwardToLocalNodes
   set correct destination address for sendOneWay
 * response(ReadResponse result) added to DatacenterReadCallback
   otherwise ReadCallback will process local results and condition will be 
 never signaled in DatacenterReadCallback
 * FORWARD header removed in StorageProxy::sendMessages if dataCenter 
 equals to localDataCenter
   (if a non local DC processed before local DC FORWARD header will be set 
 when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
 used for unhintedMessage)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-2051:
-

Assignee: ivan

 Fixes for multi-datacenter writes
 -

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
Assignee: ivan
 Fix For: 0.7.1

 Attachments: 2051-2.txt, rep_fix_02.patch


 Copied from CASSANDRA-982:
 * Message::removeHeader
   message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
 NullPointerException
 * db/RowMutationVerbHandler::forwardToLocalNodes
   set correct destination address for sendOneWay
 * response(ReadResponse result) added to DatacenterReadCallback
   otherwise ReadCallback will process local results and condition will be 
 never signaled in DatacenterReadCallback
 * FORWARD header removed in StorageProxy::sendMessages if dataCenter 
 equals to localDataCenter
   (if a non local DC processed before local DC FORWARD header will be set 
 when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
 used for unhintedMessage)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2025) generalized way of expressing hierarchical values

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986528#action_12986528
 ] 

Jonathan Ellis commented on CASSANDRA-2025:
---

I would vastly prefer an option that doesn't require escaping.

Maybe a slight preference for : over / in case we implement SQL arithmetic in 
the distant future :)

 generalized way of expressing hierarchical values
 -

 Key: CASSANDRA-2025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2025
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor
 Fix For: 0.8

   Original Estimate: 0h
  Remaining Estimate: 0h

 While hashing out {{CREATE KEYSPACE}}, it became obvious that we needed a 
 syntax for expressing hierarchical values.  Properties like 
 {{replication_factor}} can be expressed simply using keyword arguments like 
 ({{replication_factor = 3}}), but {{strategy_options}} is a map of strings.
 The solution I took in CASSANDRA-1709 was to dot-delimit map name and 
 option key, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options.DC1 = 1 ...
 {code}
 This led me to wonder if this was a general enough approach for any future 
 cases that might come up.  One example might be compound/composite column 
 names.  Dot-delimiting is a bad choice here since it rules out ever 
 introducing a float literal.
 One suggestion would be to colon-delimit, so for example:
 {code:style=SQL}
 CREATE KEYSPACE keyspace WITH ... AND strategy_options:DC1 = 1 ...
 {code}
 Or in the case of composite column names:
 {code:style=SQL}
 SELECT columnA:columnB,column1:column2 FROM Standard2 USING 
 CONSISTENCY.QUORUM WHERE KEY = key;
 UPDATE Standard2 SET columnA:columnB = valueC, column1:column2 = value3 WHERE 
 KEY = key;
 {code}
 As an aside, this also led me to the conclusion that {{CONSISTENCY.LEVEL}} 
 is probably a bad choice for consistency level specification.  It mirrors the 
 underlying enum for no good reason and should probably be changed to 
 {{CONSISTENCY LEVEL}} (i.e. omitting the separator).  For example:
 {code:style=SQL}
 SELECT column FROM Standard2 USING CONSISTENCY QUORUM WHERE KEY = key;
 {code}
 Thoughts?
 *Edit: improved final example*
 *Edit: restore final example, create new one (gah).*

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2049) On the CLI, creating or updating a keyspace to use the NetworkTopologyStrategy breaks show keyspaces;

2011-01-25 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-2049:


Fix Version/s: 0.7.1

 On the CLI, creating or updating a keyspace to use the 
 NetworkTopologyStrategy breaks show keyspaces;
 ---

 Key: CASSANDRA-2049
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2049
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Jeremy Hanna
 Fix For: 0.7.1


 To reproduce:
 - Start fresh.
 - Run show keyspaces;
 - Run create keyspace Keyspace1 with 
 placement_strategy='org.apache.cassandra.locator.NetworkTopologyStrategy';
 - Run show keyspaces;
 Note how before it showed the system keyspace.  After it shows just:
 Keyspace: Keyspace1:
   Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
 null
 If you have multiple keyspaces, it will hide those as well.  Also, if you 
 create the keyspace and then update it with NetworkTopologyStrategy, the same 
 thing will happen.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2047) Stress --keep-going should become --keep-trying

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986532#action_12986532
 ] 

Jonathan Ellis commented on CASSANDRA-2047:
---

bq. if an insert keeps timing out we have a bigger problem

Right.

No need to get fancy; I vote for retrying a reasonable number of times and then 
throwing an exception ("I did my best to keep-trying, but you need to fix your 
server first.")
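
Something like the following sketch (illustrative only, not the stress tool's 
implementation; maxTries and the exception handling are placeholders):

{code}
public final class KeepTrying
{
    interface Operation { void run() throws Exception; }

    // Retry a bounded number of times, then give up and tell the user.
    public static void runWithRetries(Operation op, int maxTries) throws Exception
    {
        Exception last = null;
        for (int attempt = 1; attempt <= maxTries; attempt++)
        {
            try
            {
                op.run();
                return;                 // succeeded within the retry budget
            }
            catch (Exception e)         // stand-in for timeout/unavailable errors
            {
                last = e;
            }
        }
        throw new RuntimeException("Gave up after " + maxTries
                + " attempts; fix the server and re-run", last);
    }
}
{code}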

 Stress --keep-going should become --keep-trying
 ---

 Key: CASSANDRA-2047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2047
 Project: Cassandra
  Issue Type: Improvement
  Components: Contrib
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
Priority: Trivial
 Fix For: 0.7.1


 The --keep-going flag makes the stress tool drop messages that time out on 
 the floor.
 I think it's more realistic (esp for a stress tool) to keep trying till this 
 read/write succeeds.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-2052) Add configurable retry count to Hadoop code

2011-01-25 Thread Jeremy Hanna (JIRA)
Add configurable retry count to Hadoop code
---

 Key: CASSANDRA-2052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2052
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 0.7.0, 0.6.9
Reporter: Jeremy Hanna


The Hadoop integration code doesn't do a retry if it times out.  Often people 
have to tune the batch size and the rpc timeout in order to get it to work.  We 
should probably have a configurable timeout in there (credit to Jairam - 
http://www.mail-archive.com/user@cassandra.apache.org/msg08938.html) so that it 
doesn't just fail the job with a timeout exception.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2037) Unsafe Multimap Access in MessagingService

2011-01-25 Thread Thibaut (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibaut updated CASSANDRA-2037:
---

Attachment: jstackerror.txt

Jstack shortly after node returned to normal state

 Unsafe Multimap Access in MessagingService
 --

 Key: CASSANDRA-2037
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2037
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Erik Onnen
Priority: Critical
 Attachments: jstackerror.txt


 MessagingService is a system singleton with a static Multimap field, targets. 
 Multimaps are not thread safe, but no attempt is made to synchronize access to 
 that field. Multimap ultimately uses the standard Java HashMap, which is 
 susceptible to a race condition where threads can get stuck during a get 
 operation, yielding multiple threads with stacks similar to the following:
 pool-1-thread-6451 prio=10 tid=0x7fa5242c9000 nid=0x10f4 runnable 
 [0x7fa52fde4000]
java.lang.Thread.State: RUNNABLE
   at java.util.HashMap.get(HashMap.java:303)
   at 
 com.google.common.collect.AbstractMultimap.getOrCreateCollection(AbstractMultimap.java:205)
   at 
 com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:194)
   at 
 com.google.common.collect.AbstractListMultimap.put(AbstractListMultimap.java:72)
   at 
 com.google.common.collect.ArrayListMultimap.put(ArrayListMultimap.java:60)
   at 
 org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:303)
   at 
 org.apache.cassandra.service.StorageProxy.strongRead(StorageProxy.java:353)
   at 
 org.apache.cassandra.service.StorageProxy.readProtocol(StorageProxy.java:229)
   at 
 org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:98)
   at 
 org.apache.cassandra.thrift.CassandraServer.get(CassandraServer.java:289)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$get.process(Cassandra.java:2655)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2555)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2050) AbstractDaemon unnecessarily uses jetty interface

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2050:
--

 Reviewer: urandom
  Component/s: API
Affects Version/s: (was: 0.7.2)
   (was: 0.7.1)
   (was: 0.8)
Fix Version/s: 0.7.1
   Issue Type: Task  (was: Bug)

 AbstractDaemon unnecessarily uses jetty interface
 -

 Key: CASSANDRA-2050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2050
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Nate McCall
Assignee: Nate McCall
Priority: Minor
 Fix For: 0.7.1

 Attachments: 2050.txt


 AbstractDaemon's CleaningThreadPool need not implement this jetty interface. 
 Removing this would allow us to remove jetty dependency altogether. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2052) Add configurable retry count to Hadoop code

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986540#action_12986540
 ] 

Jonathan Ellis commented on CASSANDRA-2052:
---

As discussed in CASSANDRA-919 and CASSANDRA-959, I think this is the Wrong Fix. 
 Timeouts indicate an overload scenario, so retrying might get you your results 
eventually, but it's more likely to perpetuate the over-capacity problem, 
causing more timeouts, causing more retries, ...

 Add configurable retry count to Hadoop code
 ---

 Key: CASSANDRA-2052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2052
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 0.6.9, 0.7.0
Reporter: Jeremy Hanna

 The Hadoop integration code doesn't do a retry if it times out.  Often people 
 have to tune the batch size and the rpc timeout in order to get it to work.  
 We should probably have a configurable timeout in there (credit to Jairam - 
 http://www.mail-archive.com/user@cassandra.apache.org/msg08938.html) so that 
 it doesn't just fail the job with a timeout exception.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2037) Unsafe Multimap Access in MessagingService

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986543#action_12986543
 ] 

Jonathan Ellis commented on CASSANDRA-2037:
---

Thibaut, can you create a new ticket for this?  I don't think it's related to 
the original multimap problem here.

(Next thing to check: is the CPU maxing out related to JVM GC?  Uncomment the 
verbose GC logging in cassandra-env.sh.)

 Unsafe Multimap Access in MessagingService
 --

 Key: CASSANDRA-2037
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2037
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Erik Onnen
Priority: Critical
 Attachments: jstackerror.txt


 MessagingService is a system singleton with a static Multimap field, targets. 
 Multimaps are not thread safe, but no attempt is made to synchronize access to 
 that field. Multimap ultimately uses the standard Java HashMap, which is 
 susceptible to a race condition where threads can get stuck during a get 
 operation, yielding multiple threads with stacks similar to the following:
 pool-1-thread-6451 prio=10 tid=0x7fa5242c9000 nid=0x10f4 runnable 
 [0x7fa52fde4000]
java.lang.Thread.State: RUNNABLE
   at java.util.HashMap.get(HashMap.java:303)
   at 
 com.google.common.collect.AbstractMultimap.getOrCreateCollection(AbstractMultimap.java:205)
   at 
 com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:194)
   at 
 com.google.common.collect.AbstractListMultimap.put(AbstractListMultimap.java:72)
   at 
 com.google.common.collect.ArrayListMultimap.put(ArrayListMultimap.java:60)
   at 
 org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:303)
   at 
 org.apache.cassandra.service.StorageProxy.strongRead(StorageProxy.java:353)
   at 
 org.apache.cassandra.service.StorageProxy.readProtocol(StorageProxy.java:229)
   at 
 org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:98)
   at 
 org.apache.cassandra.thrift.CassandraServer.get(CassandraServer.java:289)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$get.process(Cassandra.java:2655)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2555)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1108) ability to forcibly mark machines failed

2011-01-25 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-1108:


Attachment: 0002_do_not_respond_to_gossip_when_disabled.txt

Patch to ignore gossip messages when the gossiper is disabled.

 ability to forcibly mark machines failed
 

 Key: CASSANDRA-1108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1108
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.7.1

 Attachments: 0002_do_not_respond_to_gossip_when_disabled.txt, 1108.txt

   Original Estimate: 8h
  Remaining Estimate: 8h

 For when a node is failing but not yet so badly that it can't participate in 
 gossip (e.g. hard disk failing but not dead yet) we should give operators the 
 power to forcibly mark a node as dead.
 I think we'd need to add an extra flag in gossip to say this deadness is 
 operator-imposed or the next heartbeat will flip it back to live.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1063386 - in /cassandra/branches/cassandra-0.7: ivy.xml ivysettings.xml

2011-01-25 Thread jbellis
Author: jbellis
Date: Tue Jan 25 18:14:30 2011
New Revision: 1063386

URL: http://svn.apache.org/viewvc?rev=1063386view=rev
Log:
r/m empty files

Removed:
cassandra/branches/cassandra-0.7/ivy.xml
cassandra/branches/cassandra-0.7/ivysettings.xml



svn commit: r1063388 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java

2011-01-25 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Jan 25 18:20:14 2011
New Revision: 1063388

URL: http://svn.apache.org/viewvc?rev=1063388view=rev
Log:
Remove null checks missed from 1900

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java?rev=1063388r1=1063387r2=1063388view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageService.java
 Tue Jan 25 18:20:14 2011
@@ -1722,7 +1722,7 @@ public class StorageService implements I
 if (tokenMetadata_.isLeaving(endpoint))
logger_.warn("Node " + endpoint + " is already being removed, 
continuing removal anyway");
 
-if (replicatingNodes != null)
+if (!replicatingNodes.isEmpty())
throw new UnsupportedOperationException("This node is already 
processing a removal. Wait for it to complete, or use 'removetoken force' if 
this has failed.");
 
 // Find the endpoints that are going to become responsible for data
@@ -1773,13 +1773,13 @@ public class StorageService implements I
 // indicate the token has left
 Gossiper.instance.addLocalApplicationState(ApplicationState.STATUS, 
valueFactory.removedNonlocal(localToken, token));
 
-replicatingNodes = null;
+replicatingNodes.clear();
 removingNode = null;
 }
 
 public void confirmReplication(InetAddress node)
 {
-assert replicatingNodes != null;
+assert !replicatingNodes.isEmpty();
 replicatingNodes.remove(node);
 }
 




[jira] Commented: (CASSANDRA-2052) Add configurable retry count to Hadoop code

2011-01-25 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986574#action_12986574
 ] 

Jeremy Hanna commented on CASSANDRA-2052:
-

Based on discussion in IRC, it sounds like we just need to communicate good 
settings to users when they're configuring and trying out their cluster: a 
combination of batch size and RPC timeout tuning, along with potentially 
separating the nodes into a virtual datacenter with the network topology 
strategy, so that batch processing jobs don't hit the nodes that need to stay 
responsive for OLTP.

I can update the docs on the wiki and we should probably update the readme in 
the various hadoop contrib modules to point to the hadoop support wiki page as 
well.
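
For reference, a minimal sketch of the job-side batch size tuning (assuming 
the 0.7 Hadoop integration's ConfigHelper exposes setRangeBatchSize; treat the 
helper name as an assumption, and note that rpc_timeout_in_ms is raised 
separately in cassandra.yaml):

{code}
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;

public class JobTuningSketch
{
    public static Configuration configure()
    {
        Configuration conf = new Configuration();
        // Smaller batches mean each range slice does less work per call,
        // so RPC timeouts are less likely under load. The value is illustrative.
        ConfigHelper.setRangeBatchSize(conf, 1024);
        return conf;
    }
}
{code}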

 Add configurable retry count to Hadoop code
 ---

 Key: CASSANDRA-2052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2052
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 0.6.9, 0.7.0
Reporter: Jeremy Hanna

 The Hadoop integration code doesn't do a retry if it times out.  Often people 
 have to tune the batch size and the rpc timeout in order to get it to work.  
 We should probably have a configurable timeout in there (credit to Jairam - 
 http://www.mail-archive.com/user@cassandra.apache.org/msg08938.html) so that 
 it doesn't just fail the job with a timeout exception.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-2052) Add configurable retry count to Hadoop code

2011-01-25 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna resolved CASSANDRA-2052.
-

Resolution: Won't Fix

As Jonathan mentioned, the problems with timing out should be handled with the 
existing configuration options.

 Add configurable retry count to Hadoop code
 ---

 Key: CASSANDRA-2052
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2052
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 0.6.9, 0.7.0
Reporter: Jeremy Hanna

 The Hadoop integration code doesn't do a retry if it times out.  Often people 
 have to tune the batch size and the rpc timeout in order to get it to work.  
 We should probably have a configurable timeout in there (credit to Jairam - 
 http://www.mail-archive.com/user@cassandra.apache.org/msg08938.html) so that 
 it doesn't just fail the job with a timeout exception.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1108) ability to forcibly mark machines failed

2011-01-25 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-1108:


Attachment: 0002_do_not_respond_to_gossip_when_disabled-v2.txt

There isn't an isAlive, but there is an isCancelled. v2 uses it, but it is untested.

 ability to forcibly mark machines failed
 

 Key: CASSANDRA-1108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1108
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.7.1

 Attachments: 0002_do_not_respond_to_gossip_when_disabled-v2.txt, 
 0002_do_not_respond_to_gossip_when_disabled.txt, 1108.txt

   Original Estimate: 8h
  Remaining Estimate: 8h

 For when a node is failing but not yet so badly that it can't participate in 
 gossip (e.g. hard disk failing but not dead yet) we should give operators the 
 power to forcibly mark a node as dead.
 I think we'd need to add an extra flag in gossip to say this deadness is 
 operator-imposed or the next heartbeat will flip it back to live.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2007) Move demo Keyspace1 definition from casandra.yaml to an input file for cassandra-cli

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2007:
--

Reviewer: brandon.williams

 Move demo Keyspace1 definition from casandra.yaml to an input file for 
 cassandra-cli
 

 Key: CASSANDRA-2007
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2007
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Trivial
 Fix For: 0.7.2

 Attachments: 2007-1.patch


 The suggested way to make schema changes is through cassandra-cli, but we do 
 not have an example of how to do it. Additionally, to get the demo keyspace 
 created, users have to use a different process. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1063397 - in /cassandra/trunk: ./ conf/ interface/thrift/gen-java/org/apache/cassandra/thrift/ src/java/org/apache/cassandra/db/ src/java/org/apache/cassandra/net/ src/java/org/apache/cas

2011-01-25 Thread jbellis
Author: jbellis
Date: Tue Jan 25 18:46:57 2011
New Revision: 1063397

URL: http://svn.apache.org/viewvc?rev=1063397view=rev
Log:
merge from 0.7

Removed:
cassandra/trunk/ivy.xml
cassandra/trunk/ivysettings.xml
Modified:
cassandra/trunk/   (props changed)
cassandra/trunk/CHANGES.txt
cassandra/trunk/build.xml
cassandra/trunk/conf/cassandra-env.sh

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/SuperColumn.java
   (props changed)
cassandra/trunk/src/java/org/apache/cassandra/db/RowMutationVerbHandler.java
cassandra/trunk/src/java/org/apache/cassandra/net/Header.java
cassandra/trunk/src/java/org/apache/cassandra/net/Message.java

cassandra/trunk/src/java/org/apache/cassandra/service/DatacenterReadCallback.java
cassandra/trunk/src/java/org/apache/cassandra/service/StorageProxy.java
cassandra/trunk/src/java/org/apache/cassandra/service/StorageService.java
cassandra/trunk/test/distributed/ivy.xml

Propchange: cassandra/trunk/
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Jan 25 18:46:57 2011
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6:922689-1052356,1052358-1053452,1053454,1053456-1055311,1056121,1057932
-/cassandra/branches/cassandra-0.7:1026516-1062958
+/cassandra/branches/cassandra-0.7:1026516-1063389
 /cassandra/branches/cassandra-0.7.0:1053690-1055654
 /cassandra/tags/cassandra-0.7.0-rc3:1051699-1053689
 /incubator/cassandra/branches/cassandra-0.3:774578-796573

Modified: cassandra/trunk/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/cassandra/trunk/CHANGES.txt?rev=1063397r1=1063396r2=1063397view=diff
==
--- cassandra/trunk/CHANGES.txt (original)
+++ cassandra/trunk/CHANGES.txt Tue Jan 25 18:46:57 2011
@@ -11,7 +11,7 @@
  * buffer network stack to avoid inefficient small TCP messages while avoiding
the nagle/delayed ack problem (CASSANDRA-1896)
  * check log4j configuration for changes every 10s (CASSANDRA-1525, 1907)
- * more-efficient cross-DC replication (CASSANDRA-1530)
+ * more-efficient cross-DC replication (CASSANDRA-1530, -2051)
  * upgrade to TFastFramedTransport (CASSANDRA-1743)
  * avoid polluting page cache with commitlog or sstable writes
and seq scan operations (CASSANDRA-1470)
@@ -46,7 +46,7 @@
  * add CLI verbose option in file mode (CASSANDRA-2030)
  * add single-line -- comments to CLI (CASSANDRA-2032)
  * message serialization tests (CASSANDRA-1923)
-
+ * switch from ivy to maven-ant-tasks (CASSANDRA-2017)
 
 0.7.0-final
  * fix offsets to ByteBuffer.get (CASSANDRA-1939)

Modified: cassandra/trunk/build.xml
URL: 
http://svn.apache.org/viewvc/cassandra/trunk/build.xml?rev=1063397r1=1063396r2=1063397view=diff
==
--- cassandra/trunk/build.xml (original)
+++ cassandra/trunk/build.xml Tue Jan 25 18:46:57 2011
@@ -18,7 +18,7 @@
  ~ under the License.
  -->
 <project basedir="." default="build" name="apache-cassandra"
-         xmlns:ivy="antlib:org.apache.ivy.ant">
+         xmlns:artifact="antlib:org.apache.maven.artifact.ant">
     <property environment="env"/>
     <property file="build.properties" />
     <property name="debuglevel" value="source,lines,vars"/>
@@ -56,9 +56,9 @@
     <property name="version" value="${base.version}-SNAPSHOT"/>
     <property name="version.properties.dir" 
value="${build.classes}/org/apache/cassandra/config/"/>
     <property name="final.name" value="${ant.project.name}-${version}"/>
-    <property name="ivy.version" value="2.1.0" />
-    <property name="ivy.url"
-              value="http://repo2.maven.org/maven2/org/apache/ivy/ivy" />
+    <property name="maven-ant-tasks.version" value="2.1.1" />
+    <property name="maven-ant-tasks.url"
+              
value="http://repo2.maven.org/maven2/org/apache/maven/maven-ant-tasks" />
     <property name="test.timeout" value="6" />
     <property name="test.long.timeout" value="30" />

@@ -70,8 +70,8 @@
     <property name="cobertura.classes.dir" 
value="${cobertura.build.dir}/classes"/>
     <property name="cobertura.datafile" 
value="${cobertura.build.dir}/cobertura.ser"/>

-    <condition property="ivy.jar.exists">
-      <available file="${build.dir}/ivy-${ivy.version}.jar" />
+    <condition property="maven-ant-tasks.jar.exists">
+      <available 
file="${build.dir}/maven-ant-tasks-${maven-ant-tasks.version}.jar" />
     </condition>

     <condition property="is.source.artifact">
@@ -81,25 +81,16 @@
     <!--
      Add all the 

svn commit: r1063398 - in /cassandra/trunk: ./ interface/thrift/gen-java/org/apache/cassandra/thrift/

2011-01-25 Thread eevans
Author: eevans
Date: Tue Jan 25 18:49:56 2011
New Revision: 1063398

URL: http://svn.apache.org/viewvc?rev=1063398view=rev
Log:
merge w/ 0.7 (?)

Modified:
cassandra/trunk/   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/SuperColumn.java
   (props changed)

Propchange: cassandra/trunk/
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Jan 25 18:49:56 2011
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6:922689-1052356,1052358-1053452,1053454,1053456-1055311,1056121,1057932
-/cassandra/branches/cassandra-0.7:1026516-1063389
+/cassandra/branches/cassandra-0.7:1026516-1063394
 /cassandra/branches/cassandra-0.7.0:1053690-1055654
 /cassandra/tags/cassandra-0.7.0-rc3:1051699-1053689
 /incubator/cassandra/branches/cassandra-0.3:774578-796573

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Jan 25 18:49:56 2011
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:922689-1052356,1052358-1053452,1053454,1053456-1055311,1056121,1057932
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1026516-1063389
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1026516-1063394
 
/cassandra/branches/cassandra-0.7.0/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1053690-1055654
 
/cassandra/tags/cassandra-0.7.0-rc3/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1051699-1053689
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/Cassandra.java:774578-796573

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Jan 25 18:49:56 2011
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:922689-1052356,1052358-1053452,1053454,1053456-1055311,1056121,1057932
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1026516-1063389
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1026516-1063394
 
/cassandra/branches/cassandra-0.7.0/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1053690-1055654
 
/cassandra/tags/cassandra-0.7.0-rc3/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1051699-1053689
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/column_t.java:774578-792198

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Jan 25 18:49:56 2011
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:922689-1052356,1052358-1053452,1053454,1053456-1055311,1056121,1057932
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1026516-1063389
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1026516-1063394
 
/cassandra/branches/cassandra-0.7.0/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1053690-1055654
 
/cassandra/tags/cassandra-0.7.0-rc3/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1051699-1053689
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/InvalidRequestException.java:774578-796573

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Jan 25 18:49:56 2011
@@ -1,5 +1,5 @@
 

[jira] Commented: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986598#action_12986598
 ] 

Hudson commented on CASSANDRA-2051:
---

Integrated in Cassandra-0.7 #207 (See 
[https://hudson.apache.org/hudson/job/Cassandra-0.7/207/])


 Fixes for multi-datacenter writes
 -

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
Assignee: ivan
 Fix For: 0.7.1

 Attachments: 2051-2.txt, rep_fix_02.patch


 Copied from CASSANDRA-982:
 * Message::removeHeader
   message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
 NullPointerException
 * db/RowMutationVerbHandler::forwardToLocalNodes
   set correct destination address for sendOneWay
 * response(ReadResponse result) added to DatacenterReadCallback
   otherwise ReadCallback will process local results and condition will be 
 never signaled in DatacenterReadCallback
 * FORWARD header removed in StorageProxy::sendMessages if dataCenter 
 equals to localDataCenter
   (if a non local DC processed before local DC FORWARD header will be set 
 when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
 used for unhintedMessage)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1848) Separate thrift and avro classes from cassandra's jar

2011-01-25 Thread Eric Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Evans updated CASSANDRA-1848:
--

Attachment: 
v1-0003-adapt-scripts-for-build-classes-build-main-classes-mov.txt
v1-0002-adapt-build-for-src-java-src-main-java-move.txt
v1-0001-CASSANDRA-1848-mv-src-java-src-main-java.txt

 Separate thrift and avro classes from cassandra's jar
 -

 Key: CASSANDRA-1848
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1848
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Affects Versions: 0.7.0 rc 2
Reporter: Tristan Tarrant
Assignee: Eric Evans
Priority: Trivial
 Fix For: 0.8

 Attachments: CASSANDRA-1848.patch, CASSANDRA-1848_with_hadoop.patch, 
 v1-0001-CASSANDRA-1848-mv-src-java-src-main-java.txt, 
 v1-0002-adapt-build-for-src-java-src-main-java-move.txt, 
 v1-0003-adapt-scripts-for-build-classes-build-main-classes-mov.txt

   Original Estimate: 0h
  Remaining Estimate: 0h

 Most client applications written in Java include the full 
 apache-cassandra-x.y.z.jar in their classpath. I propose to separate the avro 
 and thrift classes into separate jars.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1848) Separate thrift and avro classes from cassandra's jar

2011-01-25 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986605#action_12986605
 ] 

Eric Evans commented on CASSANDRA-1848:
---

* 0001 moves src/java to src/main/java (it's enormous)
* 0002 updates the build for the new location, including compiling bytecode to 
build/classes/main
* 0003 updates the classpaths in the scripts

If someone could give this the once over soon, it would be appreciated.  I 
don't want to think about the possible rebase scenarios.

 Separate thrift and avro classes from cassandra's jar
 -

 Key: CASSANDRA-1848
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1848
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Affects Versions: 0.7.0 rc 2
Reporter: Tristan Tarrant
Assignee: Eric Evans
Priority: Trivial
 Fix For: 0.8

 Attachments: CASSANDRA-1848.patch, CASSANDRA-1848_with_hadoop.patch, 
 v1-0001-CASSANDRA-1848-mv-src-java-src-main-java.txt, 
 v1-0002-adapt-build-for-src-java-src-main-java-move.txt, 
 v1-0003-adapt-scripts-for-build-classes-build-main-classes-mov.txt

   Original Estimate: 0h
  Remaining Estimate: 0h

 Most client applications written in Java include the full 
 apache-cassandra-x.y.z.jar in their classpath. I propose to separate the avro 
 and thrift classes into separate jars.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-2053) Make cache saving less contentious

2011-01-25 Thread Nick Bailey (JIRA)
Make cache saving less contentious
--

 Key: CASSANDRA-2053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2053
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Nick Bailey
 Fix For: 0.7.2


The current default for saving key caches is every hour.  Additionally the 
default timeout for flushing memtables is every hour.  I've seen situations 
where both of these occuring at the same time every hour causes enough pressure 
on the node to have it drop messages and other nodes mark it dead.  This 
happens across the cluster and results in flapping.

We should do something to spread this out. Perhaps staggering cache 
saves/flushes that occur due to timeouts.
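
A minimal sketch of the staggering idea, assuming a shared ScheduledExecutorService and treating the period and jitter bound as hypothetical parameters rather than anything Cassandra exposes today:

    import java.util.Random;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class StaggeredScheduler
    {
        private static final ScheduledExecutorService timer = Executors.newScheduledThreadPool(1);
        private static final Random random = new Random();

        /**
         * Schedule a periodic task with a randomized initial delay, so that tasks
         * configured with the same period (e.g. key cache saves and memtable
         * flush-after timeouts) do not all fire at the same instant on every node.
         */
        public static void scheduleWithJitter(Runnable task, long periodSeconds, long maxJitterSeconds)
        {
            long initialDelay = periodSeconds + (long) (random.nextDouble() * maxJitterSeconds);
            timer.scheduleAtFixedRate(task, initialDelay, periodSeconds, TimeUnit.SECONDS);
        }
    }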

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1108) ability to forcibly mark machines failed

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986632#action_12986632
 ] 

Jonathan Ellis commented on CASSANDRA-1108:
---

+1

 ability to forcibly mark machines failed
 

 Key: CASSANDRA-1108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1108
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.7.1

 Attachments: 0002_do_not_respond_to_gossip_when_disabled-v2.txt, 
 0002_do_not_respond_to_gossip_when_disabled.txt, 1108.txt

   Original Estimate: 8h
  Remaining Estimate: 8h

 For when a node is failing but not yet so badly that it can't participate in 
 gossip (e.g. hard disk failing but not dead yet) we should give operators the 
 power to forcibly mark a node as dead.
 I think we'd need to add an extra flag in gossip to say this deadness is 
 operator-imposed or the next heartbeat will flip it back to live.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1063418 - in /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms: GossipDigestAckVerbHandler.java GossipDigestSynVerbHandler.java Gossiper.java

2011-01-25 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Jan 25 19:37:22 2011
New Revision: 1063418

URL: http://svn.apache.org/viewvc?rev=1063418view=rev
Log:
Do not respond to gossip when gossip is disabled.
Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-1108

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestSynVerbHandler.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/Gossiper.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java?rev=1063418r1=1063417r2=1063418view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestAckVerbHandler.java
 Tue Jan 25 19:37:22 2011
@@ -45,6 +45,12 @@ public class GossipDigestAckVerbHandler 
         InetAddress from = message.getFrom();
         if (logger_.isTraceEnabled())
             logger_.trace("Received a GossipDigestAckMessage from {}", from);
+        if (!Gossiper.instance.isEnabled())
+        {
+            if (logger_.isTraceEnabled())
+                logger_.trace("Ignoring GossipDigestAckMessage because gossip is disabled");
+            return;
+        }
 
         byte[] bytes = message.getMessageBody();
         DataInputStream dis = new DataInputStream( new ByteArrayInputStream(bytes) );

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestSynVerbHandler.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestSynVerbHandler.java?rev=1063418r1=1063417r2=1063418view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestSynVerbHandler.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/GossipDigestSynVerbHandler.java
 Tue Jan 25 19:37:22 2011
@@ -44,6 +44,12 @@ public class GossipDigestSynVerbHandler 
         InetAddress from = message.getFrom();
         if (logger_.isTraceEnabled())
             logger_.trace("Received a GossipDigestSynMessage from {}", from);
+        if (!Gossiper.instance.isEnabled())
+        {
+            if (logger_.isTraceEnabled())
+                logger_.trace("Ignoring GossipDigestSynMessage because gossip is disabled");
+            return;
+        }
 
         byte[] bytes = message.getMessageBody();
         DataInputStream dis = new DataInputStream( new ByteArrayInputStream(bytes) );

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/Gossiper.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/Gossiper.java?rev=1063418r1=1063417r2=1063418view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/Gossiper.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/gms/Gossiper.java
 Tue Jan 25 19:37:22 2011
@@ -897,6 +897,11 @@ public class Gossiper implements IFailur
 scheduledGossipTask.cancel(false);
 }
 
+public boolean isEnabled()
+{
+return !scheduledGossipTask.isCancelled();
+}
+
 /**
  * This should *only* be used for testing purposes.
  */




[jira] Resolved: (CASSANDRA-2048) cli options should match yaml directives

2011-01-25 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-2048.
-

Resolution: Not A Problem

CASSANDRA-2007 solves this afaict.

 cli options should match yaml directives
 

 Key: CASSANDRA-2048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2048
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Brandon Williams
Assignee: Pavel Yaskevich
 Fix For: 0.8


 Many options in the cli don't match their yaml counterparts (for example, 
 placement_strategy vs replica_placement_strategy.)  This confuses a lot of 
 people.  Though I hate to break the cli between releases, I think it's worth 
 it in this case as I've seen (and felt) much pain due to these differences.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1063431 - in /cassandra/branches/cassandra-0.7: CHANGES.txt src/java/org/apache/cassandra/cli/CliClient.java src/java/org/apache/cassandra/cli/CliOptions.java src/java/org/apache/cassandr

2011-01-25 Thread jbellis
Author: jbellis
Date: Tue Jan 25 19:57:03 2011
New Revision: 1063431

URL: http://svn.apache.org/viewvc?rev=1063431view=rev
Log:
CLI attempts to block for new schema to propagate
patch by Pavel Yaskevich; reviewed by jbellis for CASSANDRA-2044

Modified:
cassandra/branches/cassandra-0.7/CHANGES.txt

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliOptions.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliSessionState.java

Modified: cassandra/branches/cassandra-0.7/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/CHANGES.txt?rev=1063431r1=1063430r2=1063431view=diff
==
--- cassandra/branches/cassandra-0.7/CHANGES.txt (original)
+++ cassandra/branches/cassandra-0.7/CHANGES.txt Tue Jan 25 19:57:03 2011
@@ -38,6 +38,8 @@
  * add single-line -- comments to CLI (CASSANDRA-2032)
  * message serialization tests (CASSANDRA-1923)
  * switch from ivy to maven-ant-tasks (CASSANDRA-2017)
+ * CLI attempts to block for new schema to propagate (CASSANDRA-2044)
+
 
 0.7.0-final
  * fix offsets to ByteBuffer.get (CASSANDRA-1939)

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java?rev=1063431r1=1063430r2=1063431view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java
 Tue Jan 25 19:57:03 2011
@@ -669,7 +669,10 @@ public class CliClient extends CliUserHe
 
         try
         {
-            sessionState.out.println(thriftClient.system_add_keyspace(updateKsDefAttributes(statement, ksDef)));
+            String mySchemaVersion = thriftClient.system_add_keyspace(updateKsDefAttributes(statement, ksDef));
+            sessionState.out.println(mySchemaVersion);
+            validateSchemaIsSettled(mySchemaVersion);
+
             keyspacesMap.put(keyspaceName, thriftClient.describe_keyspace(keyspaceName));
         }
         catch (InvalidRequestException e)
@@ -697,7 +700,9 @@ public class CliClient extends CliUserHe
 
         try
         {
-            sessionState.out.println(thriftClient.system_add_column_family(updateCfDefAttributes(statement, cfDef)));
+            String mySchemaVersion = thriftClient.system_add_column_family(updateCfDefAttributes(statement, cfDef));
+            sessionState.out.println(mySchemaVersion);
+            validateSchemaIsSettled(mySchemaVersion);
             keyspacesMap.put(keySpace, thriftClient.describe_keyspace(keySpace));
         }
         catch (InvalidRequestException e)
@@ -726,7 +731,9 @@ public class CliClient extends CliUserHe
             KsDef currentKsDef = getKSMetaData(keyspaceName);
             KsDef updatedKsDef = updateKsDefAttributes(statement, currentKsDef);
 
-            sessionState.out.println(thriftClient.system_update_keyspace(updatedKsDef));
+            String mySchemaVersion = thriftClient.system_update_keyspace(updatedKsDef);
+            validateSchemaIsSettled(mySchemaVersion);
+            sessionState.out.println(mySchemaVersion);
             keyspacesMap.put(keyspaceName, thriftClient.describe_keyspace(keyspaceName));
         }
         catch (InvalidRequestException e)
@@ -754,7 +761,9 @@ public class CliClient extends CliUserHe
 
         try
         {
-            sessionState.out.println(thriftClient.system_update_column_family(updateCfDefAttributes(statement, cfDef)));
+            String mySchemaVersion = thriftClient.system_update_column_family(updateCfDefAttributes(statement, cfDef));
+            sessionState.out.println(mySchemaVersion);
+            validateSchemaIsSettled(mySchemaVersion);
             keyspacesMap.put(keySpace, thriftClient.describe_keyspace(keySpace));
         }
         catch (InvalidRequestException e)
@@ -902,7 +911,9 @@ public class CliClient extends CliUserHe
             return;
 
         String keyspaceName = CliCompiler.getKeySpace(statement, thriftClient.describe_keyspaces());
-        sessionState.out.println(thriftClient.system_drop_keyspace(keyspaceName));
+        String version = thriftClient.system_drop_keyspace(keyspaceName);
+        sessionState.out.println(version);
+        validateSchemaIsSettled(version);
     }
 
     /**
@@ -919,7 +930,9 @@ public class CliClient extends CliUserHe
             return;
 
         String cfName = CliCompiler.getColumnFamily(statement, keyspacesMap.get(keySpace).cf_defs);
-        sessionState.out.println(thriftClient.system_drop_column_family(cfName));
+        String mySchemaVersion = 
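
The commit excerpt above is truncated and does not include validateSchemaIsSettled itself. As a rough, hypothetical sketch of the idea (poll until every reachable node reports the new schema version or a timeout expires), reusing the thriftClient and sessionState fields visible in the surrounding CliClient code and assuming the Thrift describe_schema_versions() call plus a made-up SCHEMA_SETTLE_TIMEOUT_MS constant:

    // Sketch only; not the committed implementation.
    private void validateSchemaIsSettled(String expectedVersion) throws InvalidRequestException, TException
    {
        long deadline = System.currentTimeMillis() + SCHEMA_SETTLE_TIMEOUT_MS;
        while (System.currentTimeMillis() < deadline)
        {
            // schema version -> endpoints currently reporting that version
            Map<String, List<String>> versions = thriftClient.describe_schema_versions();
            if (versions.size() == 1 && versions.containsKey(expectedVersion))
                return; // all reachable nodes agree on the new schema
            try
            {
                Thread.sleep(1000);
            }
            catch (InterruptedException e)
            {
                throw new RuntimeException(e);
            }
        }
        sessionState.out.println("Schema did not settle on version " + expectedVersion
                                 + "; further migrations are ill-advised until it does.");
    }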

[jira] Commented: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread ivan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986648#action_12986648
 ] 

ivan commented on CASSANDRA-2051:
-

patch 2 much better. thanks Jonathan. ;)

i pulled latest trunk and applied patch in 2051-2.txt.

it seems that communication between nodes and DCs works as expected.

 Fixes for multi-datacenter writes
 -

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
Assignee: ivan
 Fix For: 0.7.1

 Attachments: 2051-2.txt, rep_fix_02.patch


 Copied from CASSANDRA-982:
 * Message::removeHeader
   message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
 NullPointerException
 * db/RowMutationVerbHandler::forwardToLocalNodes
   set correct destination address for sendOneWay
 * response(ReadResponse result) added to DatacenterReadCallback
   otherwise ReadCallback will process local results and condition will be 
 never signaled in DatacenterReadCallback
 * FORWARD header removed in StorageProxy::sendMessages if dataCenter 
 equals to localDataCenter
   (if a non local DC processed before local DC FORWARD header will be set 
 when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
 used for unhintedMessage)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1108) ability to forcibly mark machines failed

2011-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986649#action_12986649
 ] 

Hudson commented on CASSANDRA-1108:
---

Integrated in Cassandra-0.7 #208 (See 
[https://hudson.apache.org/hudson/job/Cassandra-0.7/208/])
Do not respond to gossip when gossip is disabled.
Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-1108


 ability to forcibly mark machines failed
 

 Key: CASSANDRA-1108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1108
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.7.1

 Attachments: 0002_do_not_respond_to_gossip_when_disabled-v2.txt, 
 0002_do_not_respond_to_gossip_when_disabled.txt, 1108.txt

   Original Estimate: 8h
  Remaining Estimate: 8h

 For when a node is failing but not yet so badly that it can't participate in 
 gossip (e.g. hard disk failing but not dead yet) we should give operators the 
 power to forcibly mark a node as dead.
 I think we'd need to add an extra flag in gossip to say this deadness is 
 operator-imposed or the next heartbeat will flip it back to live.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2050) AbstractDaemon unnecessarily uses jetty interface

2011-01-25 Thread Stu Hood (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986652#action_12986652
 ] 

Stu Hood commented on CASSANDRA-2050:
-

 I think this was only needed for avro plumbing.
Yea: this can be blamed back to when we wanted to better support auth in Avro. 
Should be safe to remove.

The patch should probably also remove the dependency, right?

 AbstractDaemon unnecessarily uses jetty interface
 -

 Key: CASSANDRA-2050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2050
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Nate McCall
Assignee: Nate McCall
Priority: Minor
 Fix For: 0.7.1

 Attachments: 2050.txt


 AbstractDaemon's CleaningThreadPool need not implement this jetty interface. 
 Removing this would allow us to remove jetty dependency altogether. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986654#action_12986654
 ] 

T Jake Luciani commented on CASSANDRA-2051:
---

Looks good +1,  thanks for testing this ivan

 Fixes for multi-datacenter writes
 -

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
Assignee: ivan
 Fix For: 0.7.1

 Attachments: 2051-2.txt, rep_fix_02.patch


 Copied from CASSANDRA-982:
 * Message::removeHeader
   message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
 NullPointerException
 * db/RowMutationVerbHandler::forwardToLocalNodes
   set correct destination address for sendOneWay
 * response(ReadResponse result) added to DatacenterReadCallback
   otherwise ReadCallback will process local results and condition will be 
 never signaled in DatacenterReadCallback
 * FORWARD header removed in StorageProxy::sendMessages if dataCenter 
 equals to localDataCenter
   (if a non local DC processed before local DC FORWARD header will be set 
 when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
 used for unhintedMessage)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2039) LazilyCompactedRow doesn't add CFInfo to digest

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2039:
--

 Reviewer: jbellis
Fix Version/s: 0.7.2

this looks okay eyeballing it, but can you add a check to 
LazilyCompactedRowTest similar to assertBytes to make sure this stays fixed?

 LazilyCompactedRow doesn't add CFInfo to digest
 ---

 Key: CASSANDRA-2039
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2039
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Richard Low
Priority: Minor
 Fix For: 0.7.2

 Attachments: trunk-2038.txt


 LazilyCompactedRow.update doesn't add the CFInfo or columnCount to the 
 digest, so the hash value in the Merkle tree does not include this data.  
 However, PrecompactedRow does include this.  Two consequences of this are:
 * Row-level tombstones are not compared when using LazilyCompactedRow so 
 could remain inconsistent
 * LazilyCompactedRow and PrecompactedRow produce different hashes of the same 
 row, so if two nodes have differing in_memory_compaction_limit_in_mb values, 
 rows of size in between the two limits will have different hashes so will 
 always be repaired even when they are the same.
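
A hypothetical sketch of the kind of check being requested, assuming both row implementations share the update(MessageDigest) method mentioned in the description (the helper name and digest algorithm here are made up):

    import java.security.MessageDigest;
    import java.util.Arrays;

    // Compute a digest with both implementations over the same row and require them to match.
    private static void assertDigest(AbstractCompactedRow lazy, AbstractCompactedRow precompacted) throws Exception
    {
        MessageDigest lazyDigest = MessageDigest.getInstance("SHA-256");
        MessageDigest preDigest = MessageDigest.getInstance("SHA-256");
        lazy.update(lazyDigest);
        precompacted.update(preDigest);
        assert Arrays.equals(lazyDigest.digest(), preDigest.digest())
            : "LazilyCompactedRow and PrecompactedRow produced different digests";
    }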

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1063434 - in /cassandra/branches/cassandra-0.7/conf: Keyspace1.txt cassandra.yaml

2011-01-25 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Jan 25 20:06:37 2011
New Revision: 1063434

URL: http://svn.apache.org/viewvc?rev=1063434view=rev
Log:
Move demo Keyspace1 definition from casandra.yaml to an input file for
cassandra-cli.
Patch by Aaron Morton, reviewed by brandonwilliams for CASSANDRA-2007

Added:
cassandra/branches/cassandra-0.7/conf/Keyspace1.txt
Modified:
cassandra/branches/cassandra-0.7/conf/cassandra.yaml

Added: cassandra/branches/cassandra-0.7/conf/Keyspace1.txt
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/conf/Keyspace1.txt?rev=1063434view=auto
==
--- cassandra/branches/cassandra-0.7/conf/Keyspace1.txt (added)
+++ cassandra/branches/cassandra-0.7/conf/Keyspace1.txt Tue Jan 25 20:06:37 2011
@@ -0,0 +1,184 @@
+/*This file contains an example Keyspace that can be created using the
+cassandra-cli command line interface as follows.
+
+bin/cassandra-cli -host localhost --file conf/Keyspace1.txt
+
+The cassandra-cli includes online help which you can access without needing
+to connect to a running cassandra instance by starting the client and typing 
help;
+
+Keyspaces have ColumnFamilies.(Usually 1 KS per application.)
+ColumnFamilies have Rows. (Dozens of CFs per KS.)
+Rows contain Columns. (Many per CF.)
+Columns contain name:value:timestamp. (Many per Row.)
+
+A KS is most similar to a schema, and a CF is most similar to a relational 
table.
+
+Keyspaces, ColumnFamilies, and Columns may carry additional
+metadata that change their behavior. These are as follows:
+
+Keyspace required parameters:
+- name: name of the keyspace; system is
+  reserved for Cassandra Internals.
+- placement_strategy: the class that determines how replicas
+  are distributed among nodes. Contains both the class as well as
+  configuration information.  Must extend AbstractReplicationStrategy.
+  Out of the box, Cassandra provides
+- org.apache.cassandra.locator.SimpleStrategy
+- org.apache.cassandra.locator.NetworkTopologyStrategy
+- org.apache.cassandra.locator.OldNetworkTopologyStrategy
+
+  SimpleStrategy merely places the first
+  replica at the node whose token is closest to the key (as determined
+  by the Partitioner), and additional replicas on subsequent nodes
+  along the ring in increasing Token order.
+
+  With NetworkTopologyStrategy,
+  for each datacenter, you can specify how many replicas you want
+  on a per-keyspace basis.  Replicas are placed on different racks
+  within each DC, if possible. This strategy also requires rack aware
+  snitch, such as RackInferringSnitch or PropertyFileSnitch.
+  An example:
+  create keyspace Keyspace1
+ with replication_factor = 3
+ and placement_strategy = 
'org.apache.cassandra.locator.NetworkTopologyStrategy'
+ strategy_options:
+   DC1 : 3
+   DC2 : 2
+   DC3 : 1
+
+  OldNetworkTopologyStrategy [formerly RackAwareStrategy]
+  places one replica in each of two datacenters, and the third on a
+  different rack in the first.  Additional datacenters are not
+  guaranteed to get a replica.  Additional replicas after three are placed
+  in ring order after the third without regard to rack or datacenter.
+- replication_factor: Number of replicas of each row
+
+Keyspace optional parameters:
+- strategy_options: Additional information for the placement strategy.
+
+ColumnFamily required parameters:
+- name: name of the ColumnFamily.  Must not contain the character -.
+- comparator: tells Cassandra how to sort the columns for slicing
+  operations. The default is BytesType, which is a straightforward
+  lexical comparison of the bytes in each column.  Other options are
+  AsciiType, UTF8Type, LexicalUUIDType, TimeUUIDType, LongType,
+  and IntegerType (a generic variable-length integer type).
+  You can also specify the fully-qualified class name to a class of
+  your choice extending org.apache.cassandra.db.marshal.AbstractType.
+
+ColumnFamily optional parameters:
+- column_type: Super or Standard, defaults to Standard.
+- subcomparator: Comparator for sorting subcolumn names, for Super Columns 
only.
+- keys_cached: specifies the number of keys per sstable whose
+   locations we keep in memory in mostly LRU order.  (JUST the key
+   locations, NOT any column values.) Specify a fraction (value less
+   than 1) or an absolute number of keys to cache.  Defaults to 20
+   keys.
+- rows_cached: specifies the number of rows whose entire contents we
+   cache in memory. Do not use this on ColumnFamilies with large rows,
+   or ColumnFamilies with high write:read ratios. Specify a fraction
+   (value less than 1) or an absolute number of rows to cache.
+   Defaults to 0. (i.e. row caching is off by default)
+- comment: used to attach additional human-readable information about
+   the column family to its definition.
+- read_repair_chance: specifies the probability with which read
+   repairs 

svn commit: r1063435 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageProxy.java

2011-01-25 Thread jbellis
Author: jbellis
Date: Tue Jan 25 20:12:15 2011
New Revision: 1063435

URL: http://svn.apache.org/viewvc?rev=1063435view=rev
Log:
clean out forward headers from message between loops
patch by ivancso and jbellis; reviewed by tjake for CASSANDRA-2051

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageProxy.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageProxy.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageProxy.java?rev=1063435r1=1063434r2=1063435view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageProxy.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/StorageProxy.java
 Tue Jan 25 20:12:15 2011
@@ -226,31 +226,29 @@ public class StorageProxy implements Sto
         {
             String dataCenter = entry.getKey();
 
-            // Grab a set of all the messages bound for this dataCenter and create an iterator over this set.
-            Map<Message, Collection<InetAddress>> messagesForDataCenter = entry.getValue().asMap();
-
-            for (Map.Entry<Message, Collection<InetAddress>> messages: messagesForDataCenter.entrySet())
+            // send the messages corresponding to this datacenter
+            for (Map.Entry<Message, Collection<InetAddress>> messages: entry.getValue().asMap().entrySet())
             {
                 Message message = messages.getKey();
-                Iterator<InetAddress> iter = messages.getValue().iterator();
-                assert iter.hasNext();
-
-                // First endpoint in list is the destination for this group
-                InetAddress target = iter.next();
+                // a single message object is used for unhinted writes, so clean out any forwards
+                // from previous loop iterations
+                message.removeHeader(RowMutation.FORWARD_HEADER);
 
-                // Add all the other destinations that are bound for the same dataCenter as a header in the primary message.
-                while (iter.hasNext())
+                if (dataCenter.equals(localDataCenter))
                 {
-                    InetAddress destination = iter.next();
-
-                    if (dataCenter.equals(localDataCenter))
-                    {
-                        // direct write to local DC
-                        assert message.getHeader(RowMutation.FORWARD_HEADER) == null;
+                    // direct writes to local DC
+                    for (InetAddress destination : messages.getValue())
                         MessagingService.instance().sendOneWay(message, destination);
-                    }
-                    else
+                }
+                else
+                {
+                    // Non-local DC. First endpoint in list is the destination for this group
+                    Iterator<InetAddress> iter = messages.getValue().iterator();
+                    InetAddress target = iter.next();
+                    // Add all the other destinations of the same message as a header in the primary message.
+                    while (iter.hasNext())
                     {
+                        InetAddress destination = iter.next();
                         // group all nodes in this DC as forward headers on the primary message
                         ByteArrayOutputStream bos = new ByteArrayOutputStream();
                         DataOutputStream dos = new DataOutputStream(bos);
@@ -263,9 +261,9 @@ public class StorageProxy implements Sto
                         dos.write(destination.getAddress());
                         message.setHeader(RowMutation.FORWARD_HEADER, bos.toByteArray());
                     }
+                    // send the combined message + forward headers
+                    MessagingService.instance().sendOneWay(message, target);
                 }
-
-                MessagingService.instance().sendOneWay(message, target);
             }
         }
     }




[jira] Resolved: (CASSANDRA-2051) Fixes for multi-datacenter writes

2011-01-25 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2051.
---

Resolution: Fixed
  Reviewer: jbellis

committed.  thanks Ivan and Jake!

 Fixes for multi-datacenter writes
 -

 Key: CASSANDRA-2051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2051
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.1
Reporter: Jonathan Ellis
Assignee: ivan
 Fix For: 0.7.1

 Attachments: 2051-2.txt, rep_fix_02.patch


 Copied from CASSANDRA-982:
 * Message::removeHeader
   message.setHeader(RowMutation.FORWARD_HEADER, null) throws 
 NullPointerException
 * db/RowMutationVerbHandler::forwardToLocalNodes
   set correct destination address for sendOneWay
 * response(ReadResponse result) added to DatacenterReadCallback
   otherwise ReadCallback will process local results and condition will be 
 never signaled in DatacenterReadCallback
 * FORWARD header removed in StorageProxy::sendMessages if dataCenter 
 equals to localDataCenter
   (if a non local DC processed before local DC FORWARD header will be set 
 when unhintedMessage used in sendToHintedEndpoints. one instance of Message 
 used for unhintedMessage)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2044) CLI should loop on describe_schema until agreement or fatal exit with stacktrace/message if no agreement after X seconds

2011-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986658#action_12986658
 ] 

Hudson commented on CASSANDRA-2044:
---

Integrated in Cassandra-0.7 #209 (See 
[https://hudson.apache.org/hudson/job/Cassandra-0.7/209/])
CLI attempts to block for new schema to propagate
patch by Pavel Yaskevich; reviewed by jbellis for CASSANDRA-2044


 CLI should loop on describe_schema until agreement or fatal exit with 
 stacktrace/message if no agreement after X seconds
 

 Key: CASSANDRA-2044
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2044
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.0
Reporter: Matthew F. Dennis
Assignee: Pavel Yaskevich
 Fix For: 0.7.1

 Attachments: CASSANDRA-2044.patch

   Original Estimate: 4h
  Remaining Estimate: 4h

 see CASSANDRA-2026 for brief background.
 It's easy to enter statements into the CLI before the schema has settled, 
 often causing problems where it is no longer possible to get the nodes in 
 agreement about the schema without removing the system directory.
 To alleviate the most common problems with this, the CLI should issue the 
 modification statement and loop on describe_schema until all nodes agree or 
 until X seconds has passed.  If the timeout has been exceeded, the CLI should 
 exit with an error and inform the user that the schema has not settled and 
 further migrations are ill-advised until it does.
 number_of_nodes/2+1 seconds seems like a decent wait time for schema 
 migrations to start with.
 Bonus points for making the value configurable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-982) read repair on quorum consistencylevel

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986661#action_12986661
 ] 

Jonathan Ellis commented on CASSANDRA-982:
--

Ivan, if you want to create a ticket for the read side of your patches, that 
would be great.

 read repair on quorum consistencylevel
 --

 Key: CASSANDRA-982
 URL: https://issues.apache.org/jira/browse/CASSANDRA-982
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-better-digest-checking-for-ReadResponseResolver.patch, 
 0001-r-m-SP.weakRead-rename-strongRead-to-fetchRows.-read-r.txt, 
 0002-implement-read-repair-as-a-second-resolve-after-the-in.txt, 
 0002-quorum-only-read.txt, 
 0003-rename-QuorumResponseHandler-ReadCallback.txt, 
 982-resolve-digests-v2.txt, rep_fix_01.patch, rep_fix_02.patch

   Original Estimate: 6h
  Remaining Estimate: 6h

 CASSANDRA-930 made read repair fuzzy optional, but this only helps with 
 ConsistencyLevel.ONE:
 - Quorum reads always send requests to all nodes
 - only the first Quorum's worth of responses get compared
 So what we'd like to do two changes:
 - only send read requests to the closest R live nodes
 - if read repair is enabled, also compare results from the other nodes in the 
 background

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (CASSANDRA-2007) Move demo Keyspace1 definition from casandra.yaml to an input file for cassandra-cli

2011-01-25 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reopened CASSANDRA-2007:
-


Reverted due to the impending 0.7.1 release; we need to put more help text from 
cassandra.yaml into the cli.

 Move demo Keyspace1 definition from casandra.yaml to an input file for 
 cassandra-cli
 

 Key: CASSANDRA-2007
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2007
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Trivial
 Fix For: 0.7.1

 Attachments: 2007-1.patch


 The suggested way to make schema changes is through cassandra-cli but we do 
 not have an example of how to do it. Additionally, to get the demo keyspace 
 created users have to use a different process. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2007) Move demo Keyspace1 definition from casandra.yaml to an input file for cassandra-cli

2011-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986670#action_12986670
 ] 

Hudson commented on CASSANDRA-2007:
---

Integrated in Cassandra-0.7 #210 (See 
[https://hudson.apache.org/hudson/job/Cassandra-0.7/210/])
Move demo Keyspace1 definition from casandra.yaml to an input file for
cassandra-cli.
Patch by Aaron Morton, reviewed by brandonwilliams for CASSANDRA-2007


 Move demo Keyspace1 definition from casandra.yaml to an input file for 
 cassandra-cli
 

 Key: CASSANDRA-2007
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2007
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Trivial
 Fix For: 0.7.2

 Attachments: 2007-1.patch


 The suggested way to make schema changes is through cassandra-cli but we do 
 not have an example of how to do it. Additionally, to get the demo keyspace 
 created users have to use a different process. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2007) Move demo Keyspace1 definition from casandra.yaml to an input file for cassandra-cli

2011-01-25 Thread Aaron Morton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986672#action_12986672
 ] 

Aaron Morton commented on CASSANDRA-2007:
-

will wait for 0.7.1 and try to do this and CASSANDRA-2008 together  

 Move demo Keyspace1 definition from casandra.yaml to an input file for 
 cassandra-cli
 

 Key: CASSANDRA-2007
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2007
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Trivial
 Fix For: 0.7.2

 Attachments: 2007-1.patch


 The suggested way to make schema changes is through cassandra-cli but we do 
 not have an example of how to do it. Additionally, to get the demo keyspace 
 created users have to use a different process. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2013) Add CL.TWO, CL.THREE; tweak CL documentation

2011-01-25 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986675#action_12986675
 ] 

T Jake Luciani commented on CASSANDRA-2013:
---

I can see how someone would want this with RF > 6, but who does that?

 Add CL.TWO, CL.THREE; tweak CL documentation
 

 Key: CASSANDRA-2013
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2013
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
Priority: Minor
 Fix For: 0.8

 Attachments: 2013.txt


 Attaching draft patch to add CL.TWO and CL.THREE.
 Motivation for adding is that having to select between either ONE or QUORUM 
 is too narrow a choice for clusters with RF > 3. In such a case, it makes 
 particular sense to want to do writes at e.g. CL.TWO for durability purposes 
 even though you are not looking to get strong consistency with QUORUM. 
 CL.THREE is the same argument. TWO and THREE felt reasonable; there is no 
 objective reason why stopping at THREE is the obvious choice.
 Technically one would want to specify an arbitrary number, but that is a much 
 more significant change. 
 Two open questions:
 (1) I adjusted the documentation of ConsistencyLevel to be more consistent 
 and also to reflect what I believe to be reality (for example, as far as I 
 can tell QUORUM doesn't send requests to all nodes as claimed in the .thrift 
 file). I'm not terribly confident that I have not missed something though.
 (2) There is at least one unresolved issue, which is this assertion check in 
 WriteResponseHandler:
 assert 1 <= blockFor && blockFor <= 2 * Table.open(table).getReplicationStrategy().getReplicationFactor()
 : String.format("invalid response count %d for replication factor %d",
 blockFor, Table.open(table).getReplicationStrategy().getReplicationFactor());
 At THREE, this causes an assertion failure on keyspace with RF=1. I would, as 
 a user, expect UnavailableException. However I am uncertain as to what to do 
 about this assertion. I think this highlights how TWO/THREE are different 
 from previously existing CL:s, in that they essentially hard-code replicate 
 counts rather than expressing them in terms that can by definition be served 
 by the cluster at any RF.
 Given that THREE (and not TWO, but that is only due to the 
 implementation detail that bootstrapping is involved) implies a replicate 
 count that is independent of the replication factor, there is essentially a 
 new failure mode. It is suddenly possible for a consistency level to be 
 fundamentally incompatible with the RF. My gut reaction is to want 
 UnavailableException still, and that the assertion check can essentially be 
 removed (other than the >= 1 part).
 If a different failure mode is desired, presumably it would not be an 
 assertion failure (which should indicate a Cassandra bug).  Maybe 
 UnsatisfiableConsistencyLevel? I propose just adjusting the assertion (which 
 has no equivalent in ReadCallback btw); giving a friendlier error message in 
 case of a CL/RF mismatch would be good, but doesn't feel worth introducing 
 extra complexity to deal with it.
 'ant test' passes. I have tested w/ py_stress with a three-node cluster and 
 an RF=3 keyspace and with 1 and 2 nodes down, and get expected behavior 
 (available or unavailable as a function of nodes that are up).
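
For what it is worth, a minimal sketch of the "UnavailableException instead of an assertion failure" behaviour described above, using only the calls quoted in the assertion; this illustrates the proposal, not the attached patch, and the helper name is made up:

    // Keep the >= 1 sanity check, but turn an RF/CL mismatch into an
    // UnavailableException rather than an assertion failure.
    private static void validateBlockFor(int blockFor, String table) throws UnavailableException
    {
        int rf = Table.open(table).getReplicationStrategy().getReplicationFactor();
        assert blockFor >= 1 : "invalid response count " + blockFor;
        if (blockFor > rf)
            throw new UnavailableException();
    }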

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1379) Uncached row reads may block cached reads

2011-01-25 Thread Javier Canillas (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986686#action_12986686
 ] 

Javier Canillas commented on CASSANDRA-1379:


Actually, with the patch both read times will go into the single read counter. 
Maybe creating a separate counter for cache reads and physical reads would be 
good. On the other hand, I don't remember if there is a miss counter on the 
cache; if there is, it would surely be affected as well.
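
A minimal sketch of the separate-counter idea (class and method names here are hypothetical, not existing Cassandra metrics):

    import java.util.concurrent.atomic.AtomicLong;

    // Track cache hits, cache misses and disk reads separately so they can be
    // exposed and reasoned about independently.
    public class ReadCounters
    {
        private final AtomicLong cacheReads = new AtomicLong();
        private final AtomicLong cacheMisses = new AtomicLong();
        private final AtomicLong diskReads = new AtomicLong();

        public void recordCacheHit()  { cacheReads.incrementAndGet(); }
        public void recordCacheMiss() { cacheMisses.incrementAndGet(); }
        public void recordDiskRead()  { diskReads.incrementAndGet(); }

        public long cacheReads()  { return cacheReads.get(); }
        public long cacheMisses() { return cacheMisses.get(); }
        public long diskReads()   { return diskReads.get(); }
    }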

 Uncached row reads may block cached reads
 -

 Key: CASSANDRA-1379
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1379
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: David King
Assignee: Javier Canillas
Priority: Minor
 Fix For: 0.7.2

 Attachments: CASSANDRA-1379.patch


 The cap on the number of concurrent reads appears to cap the *total* number 
 of concurrent reads instead of just capping the reads that are bound for 
 disk. That is, given N concurrent readers if all of them are busy waiting on 
 disk, even reads that can be served from the row cache will block waiting for 
 them.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-2054) Cpu Spike to 100%.

2011-01-25 Thread Thibaut (JIRA)
Cpu Spike to  100%. 
-

 Key: CASSANDRA-2054
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2054
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Thibaut


I see sudden spikes of cpu usage where cassandra will take up an enormous 
amount of cpu (uptime load > 1000). 

My application executes both reads and writes.

I tested this with 
https://hudson.apache.org/hudson/job/Cassandra-0.7/193/artifact/cassandra/build/apache-cassandra-2011-01-24_06-01-26-bin.tar.gz.

I disabled JNA, but this didn't help.

Jstack won't work anymore when this happens:

-bash-4.1# jstack 27699 > /tmp/jstackerror
27699: Unable to open socket file: target process not responding or HotSpot VM 
not loaded
The -F option can be used when the target process is not responding

Also, my entire application comes to a halt as long as the node is in this 
state, as the node is still marked as up, but won't respond (cassandra is 
taking up all the cpu on the first node) to any requests.

/software/cassandra/bin/nodetool -h localhost ring
Address Status State Load Owns Token

192.168.0.1 Up Normal 3.48 GB 5.00% 0cc
192.168.0.2 Up Normal 3.48 GB 5.00% 199
192.168.0.3 Up Normal 3.67 GB 5.00% 266
192.168.0.4 Up Normal 2.55 GB 5.00% 333
192.168.0.5 Up Normal 2.58 GB 5.00% 400
192.168.0.6 Up Normal 2.54 GB 5.00% 4cc
192.168.0.7 Up Normal 2.59 GB 5.00% 599
192.168.0.8 Up Normal 2.58 GB 5.00% 666
192.168.0.9 Up Normal 2.33 GB 5.00% 733
192.168.0.10 Down Normal 2.39 GB 5.00% 7ff
192.168.0.11 Up Normal 2.4 GB 5.00% 8cc
192.168.0.12 Up Normal 2.74 GB 5.00% 999
192.168.0.13 Up Normal 3.17 GB 5.00% a66
192.168.0.14 Up Normal 3.25 GB 5.00% b33
192.168.0.15 Up Normal 3.01 GB 5.00% c00
192.168.0.16 Up Normal 2.48 GB 5.00% ccc
192.168.0.17 Up Normal 2.41 GB 5.00% d99
192.168.0.18 Up Normal 2.3 GB 5.00% e66
192.168.0.19 Up Normal 2.27 GB 5.00% f33
192.168.0.20 Up Normal 2.32 GB 5.00% 

The interesting part is that after a while (seconds or minutes), I have seen 
cassandra nodes return to a normal state again (without restart). I have also 
never seen this happen at 2 nodes at the same time in the cluster (the node 
where it happens differs, but there seems to be a pattern of it happening on the 
first node most of the time).

In the above case, I restarted node 192.168.0.10 and the first node returned to 
normal state. (I don't know if there is a correlation)

I attached the jstack of the node in trouble (as soon as I could access it with 
jstack, but I suspect this is the jstack when the node was running normal 
again).

The heap usage is still moderate:

/software/cassandra/bin/nodetool -h localhost info
0cc
Gossip active: true
Load : 3.49 GB
Generation No: 1295949691
Uptime (seconds) : 42843
Heap Memory (MB) : 1570.58 / 3005.38


I will enable the GC logging tomorrow.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-2054) Cpu Spike to 100%.

2011-01-25 Thread Thibaut (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibaut updated CASSANDRA-2054:
---

Attachment: jstackerror.txt

 Cpu Spike to  100%. 
 -

 Key: CASSANDRA-2054
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2054
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Thibaut
 Attachments: jstackerror.txt


 I see sudden spikes of cpu usage where cassandra will take up an enormous 
 amount of cpu (uptime load  1000). 
 My application executes both reads and writes.
 I tested this with 
 https://hudson.apache.org/hudson/job/Cassandra-0.7/193/artifact/cassandra/build/apache-cassandra-2011-01-24_06-01-26-bin.tar.gz.
 I disabled JNA, but this didn't help.
 Jstack won't work anymore when this happens:
 -bash-4.1# jstack 27699  /tmp/jstackerror
 27699: Unable to open socket file: target process not responding or HotSpot 
 VM not loaded
 The -F option can be used when the target process is not responding
 Also, my entire application comes to a halt as long as the node is in this 
 state, as the node is still marked as up, but won't respond (cassandra is 
 taking up all the cpu on the first node) to any requests.
 /software/cassandra/bin/nodetool -h localhost ring
 Address Status State Load Owns Token
 
 192.168.0.1 Up Normal 3.48 GB 5.00% 0cc
 192.168.0.2 Up Normal 3.48 GB 5.00% 199
 192.168.0.3 Up Normal 3.67 GB 5.00% 266
 192.168.0.4 Up Normal 2.55 GB 5.00% 333
 192.168.0.5 Up Normal 2.58 GB 5.00% 400
 192.168.0.6 Up Normal 2.54 GB 5.00% 4cc
 192.168.0.7 Up Normal 2.59 GB 5.00% 599
 192.168.0.8 Up Normal 2.58 GB 5.00% 666
 192.168.0.9 Up Normal 2.33 GB 5.00% 733
 192.168.0.10 Down Normal 2.39 GB 5.00% 7ff
 192.168.0.11 Up Normal 2.4 GB 5.00% 8cc
 192.168.0.12 Up Normal 2.74 GB 5.00% 999
 192.168.0.13 Up Normal 3.17 GB 5.00% a66
 192.168.0.14 Up Normal 3.25 GB 5.00% b33
 192.168.0.15 Up Normal 3.01 GB 5.00% c00
 192.168.0.16 Up Normal 2.48 GB 5.00% ccc
 192.168.0.17 Up Normal 2.41 GB 5.00% d99
 192.168.0.18 Up Normal 2.3 GB 5.00% e66
 192.168.0.19 Up Normal 2.27 GB 5.00% f33
 192.168.0.20 Up Normal 2.32 GB 5.00% 
 The interesting part is that after a while (seconds or minutes), I have seen 
 cassandra nodes return to a normal state again (without restart). I have also 
 never seen this happen at 2 nodes at the same time in the cluster (the node 
 where it happens differes, but there seems to be scheme for it to happen on 
 the first node most of the times).
 In the above case, I restarted node 192.168.0.10 and the first node returned 
 to normal state. (I don't know if there is a correlation)
 I attached the jstack of the node in trouble (as soon as I could access it 
 with jstack, but I suspect this is the jstack when the node was running 
 normal again).
 The heap usage is still moderate:
 /software/cassandra/bin/nodetool -h localhost info
 0cc
 Gossip active: true
 Load : 3.49 GB
 Generation No: 1295949691
 Uptime (seconds) : 42843
 Heap Memory (MB) : 1570.58 / 3005.38
 I will enable the GC logging tomorrow.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2054) Cpu Spike to 100%.

2011-01-25 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986711#action_12986711
 ] 

Jonathan Ellis commented on CASSANDRA-2054:
---

What JVM version?

 Cpu Spike to  100%. 
 -

 Key: CASSANDRA-2054
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2054
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Thibaut
 Attachments: jstackerror.txt


 I see sudden spikes of cpu usage where cassandra will take up an enormous 
 amount of cpu (uptime load  1000). 
 My application executes both reads and writes.
 I tested this with 
 https://hudson.apache.org/hudson/job/Cassandra-0.7/193/artifact/cassandra/build/apache-cassandra-2011-01-24_06-01-26-bin.tar.gz.
 I disabled JNA, but this didn't help.
 Jstack won't work anymore when this happens:
 -bash-4.1# jstack 27699  /tmp/jstackerror
 27699: Unable to open socket file: target process not responding or HotSpot 
 VM not loaded
 The -F option can be used when the target process is not responding
 Also, my entire application comes to a halt as long as the node is in this 
 state, as the node is still marked as up, but won't respond (cassandra is 
 taking up all the cpu on the first node) to any requests.
 /software/cassandra/bin/nodetool -h localhost ring
 Address Status State Load Owns Token
 
 192.168.0.1 Up Normal 3.48 GB 5.00% 0cc
 192.168.0.2 Up Normal 3.48 GB 5.00% 199
 192.168.0.3 Up Normal 3.67 GB 5.00% 266
 192.168.0.4 Up Normal 2.55 GB 5.00% 333
 192.168.0.5 Up Normal 2.58 GB 5.00% 400
 192.168.0.6 Up Normal 2.54 GB 5.00% 4cc
 192.168.0.7 Up Normal 2.59 GB 5.00% 599
 192.168.0.8 Up Normal 2.58 GB 5.00% 666
 192.168.0.9 Up Normal 2.33 GB 5.00% 733
 192.168.0.10 Down Normal 2.39 GB 5.00% 7ff
 192.168.0.11 Up Normal 2.4 GB 5.00% 8cc
 192.168.0.12 Up Normal 2.74 GB 5.00% 999
 192.168.0.13 Up Normal 3.17 GB 5.00% a66
 192.168.0.14 Up Normal 3.25 GB 5.00% b33
 192.168.0.15 Up Normal 3.01 GB 5.00% c00
 192.168.0.16 Up Normal 2.48 GB 5.00% ccc
 192.168.0.17 Up Normal 2.41 GB 5.00% d99
 192.168.0.18 Up Normal 2.3 GB 5.00% e66
 192.168.0.19 Up Normal 2.27 GB 5.00% f33
 192.168.0.20 Up Normal 2.32 GB 5.00% 
 The interesting part is that after a while (seconds or minutes), I have seen 
 cassandra nodes return to a normal state again (without restart). I have also 
 never seen this happen at 2 nodes at the same time in the cluster (the node 
 where it happens differes, but there seems to be scheme for it to happen on 
 the first node most of the times).
 In the above case, I restarted node 192.168.0.10 and the first node returned 
 to normal state. (I don't know if there is a correlation)
 I attached the jstack of the node in trouble (as soon as I could access it 
 with jstack, but I suspect this is the jstack when the node was running 
 normal again).
 The heap usage is still moderate:
 /software/cassandra/bin/nodetool -h localhost info
 0cc
 Gossip active: true
 Load : 3.49 GB
 Generation No: 1295949691
 Uptime (seconds) : 42843
 Heap Memory (MB) : 1570.58 / 3005.38
 I will enable the GC logging tomorrow.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-2037) Unsafe Multimap Access in MessagingService

2011-01-25 Thread Thibaut (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12986713#action_12986713
 ] 

Thibaut commented on CASSANDRA-2037:


Created https://issues.apache.org/jira/browse/CASSANDRA-2054

 Unsafe Multimap Access in MessagingService
 --

 Key: CASSANDRA-2037
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2037
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0
Reporter: Erik Onnen
Priority: Critical
 Attachments: jstackerror.txt


 MessagingService is a system singleton with a static Multimap field, targets. 
 Multimaps are not thread safe but no attempt is made to synchronize access to 
 that field. Multimap ultimately uses the standard java HashMap which is 
 susceptible to a race condition where threads will get stuck during a get 
 operation yielding multiple threads similar to the following stack:
 pool-1-thread-6451 prio=10 tid=0x7fa5242c9000 nid=0x10f4 runnable 
 [0x7fa52fde4000]
java.lang.Thread.State: RUNNABLE
   at java.util.HashMap.get(HashMap.java:303)
   at 
 com.google.common.collect.AbstractMultimap.getOrCreateCollection(AbstractMultimap.java:205)
   at 
 com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:194)
   at 
 com.google.common.collect.AbstractListMultimap.put(AbstractListMultimap.java:72)
   at 
 com.google.common.collect.ArrayListMultimap.put(ArrayListMultimap.java:60)
   at 
 org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:303)
   at 
 org.apache.cassandra.service.StorageProxy.strongRead(StorageProxy.java:353)
   at 
 org.apache.cassandra.service.StorageProxy.readProtocol(StorageProxy.java:229)
   at 
 org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:98)
   at 
 org.apache.cassandra.thrift.CassandraServer.get(CassandraServer.java:289)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$get.process(Cassandra.java:2655)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2555)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
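
The usual remedies are to guard the multimap with an explicit lock or to wrap it in a synchronized view; a minimal sketch of the latter, with the key and value types chosen purely for illustration:

    import java.net.InetAddress;
    import com.google.common.collect.ArrayListMultimap;
    import com.google.common.collect.Multimap;
    import com.google.common.collect.Multimaps;

    public class Targets
    {
        // Synchronized view: concurrent put/get/remove calls from different request
        // threads can no longer corrupt the backing HashMap. Iteration over the view
        // still requires an explicit synchronized (targets) block.
        private static final Multimap<String, InetAddress> targets =
            Multimaps.synchronizedMultimap(ArrayListMultimap.<String, InetAddress>create());
    }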

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


