[jira] [Updated] (CASSANDRA-6688) Avoid possible sstable overlaps with leveled compaction

2014-02-12 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-6688:
---

Attachment: 6688-v3.patch

v3 attached that removes skipLevels

 Avoid possible sstable overlaps with leveled compaction
 ---

 Key: CASSANDRA-6688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6688
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.0.6

 Attachments: 0001-6688.patch, 6688-v2.txt, 6688-v3.patch


 Two cases where we can end up with overlapping sstables in the leveled 
 manifest:
 First one is when we skip levels during compaction. Here we need to make sure 
 we are not compacting in newLevel - 1 since if, for example, we are doing an 
 L1 -> L2 compaction and then start a new L0 compaction where we decide to 
 skip L1, we could have overlapping sstables in L2 when the compactions are 
 done. This case is new in 2.0 since we check whether we skip levels before the 
 compaction starts.
 The second case is where we try to include as many overlapping L0 sstables as 
 possible; here we could add sstables that are not compacting but overlap 
 sstables that are.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6688) Avoid possible sstable overlaps with leveled compaction

2014-02-12 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-6688:
---

Since Version: 2.0 beta 1
Fix Version/s: 2.0.6

1.2 is not affected since it repairs the level right after it adds the 
compacted files



[jira] [Commented] (CASSANDRA-6683) BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch

2014-02-12 Thread Kirill Bogdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898928#comment-13898928
 ] 

Kirill Bogdanov commented on CASSANDRA-6683:


Thank you for your answer.

I came across this part of the code in DES because I observed a suboptimal 
choice of nodes in my configuration and started to investigate it. 

This is my config: 
* PropertyFileSnitch
* dynamic_snitch_badness_threshold 0.1
* 4 DCs
* keyspace with replication factor 1 for each DC
* Read repair and speculative_retry are disabled for my tables
* Performing read operations with consistency TWO

I am observing that the local DC that serves the read request has about the 
same probability of asking any of the 3 remote replicas to confirm consistency 
TWO regardless of their score (is that correct?). 
Since all nodes are in different DCs, {{subsnitch.sortByProximity}} places the 
local node at the start of the list (first) but does not sort the other remote 
DCs. After {{subsnitch.sortByProximity}} the address list with scores may look 
something like this:
- DC1: 0.1 (first)
- DC2: 0.7 
- DC3: 0.2
- DC4: 0.2

Since we are not calling {{sortByProximityWithScore}}, we return this list to 
{{AbstractReadExecutor.getReadExecutor}}, where 
{{consistencyLevel.filterForQuery}} (based on consistency TWO) picks the first 
2 addresses from the list. As a result we send the read request to the 
suboptimal DC2.

By implementing my change ({{Math.abs()}}) I am seeing a ~15% read throughput 
improvement in my setup with the cassandra-stress tool.
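The comparison in question can be illustrated with a minimal, self-contained sketch (hypothetical scores and standalone methods, not the actual DynamicEndpointSnitch code):

```java
// Minimal illustration of the badness-threshold check (hypothetical scores,
// not the actual DynamicEndpointSnitch code).
public class BadnessCheck {
    static final double BADNESS_THRESHOLD = 0.1;

    // Original comparison: skips re-sorting whenever first < next,
    // because the ratio is then negative and can never exceed the threshold.
    static boolean resortOriginal(double first, double next) {
        return (first - next) / first > BADNESS_THRESHOLD;
    }

    // Proposed fix: compare the magnitude of the relative difference.
    static boolean resortFixed(double first, double next) {
        return Math.abs((first - next) / first) > BADNESS_THRESHOLD;
    }

    public static void main(String[] args) {
        // DC1 is "first" with score 0.1; the next node has a much worse score 0.7.
        System.out.println(resortOriginal(0.1, 0.7)); // false: ratio is negative
        System.out.println(resortFixed(0.1, 0.7));    // true: |(0.1-0.7)/0.1| is about 6
    }
}
```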

Due to my limited knowledge of Cassandra internals I am probably wrong to 
blame DES and BADNESS_THRESHOLD, but I would greatly appreciate it if you 
could point out the correct behaviour in the situation above and which module 
is responsible for sorting nodes by their scores.

Thank you.

 BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch
 ---

 Key: CASSANDRA-6683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6683
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux 3.8.0-33-generic
Reporter: Kirill Bogdanov
  Labels: snitch
 Fix For: 2.0.6


 There is a problem in *DynamicEndpointSnitch.java* in 
 sortByProximityWithBadness().
 Before calling sortByProximityWithScore we compare each node's score ratio 
 to the badness threshold:
 {code}
 if ((first - next) / first > BADNESS_THRESHOLD)
 {
     sortByProximityWithScore(address, addresses);
     return;
 }
 {code}
 This is not always the correct comparison, because the *first* score can be 
 less than the *next* score, and in that case we would compare a negative 
 number against a positive one.
 The solution is to take the absolute value of the ratio:
 {code}
 if (Math.abs((first - next) / first) > BADNESS_THRESHOLD)
 {code}
 This issue causes incorrect sorting of DCs based on their performance and 
 affects the performance of the snitch.
 Thanks.
  





[jira] [Updated] (CASSANDRA-4911) Lift limitation that order by columns must be selected for IN queries

2014-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4911:


Attachment: 4911-v2.txt

Ok, the logic in that first patch was a bit confused. Attaching v2 that fixes 
that. I've also pushed [a 
dtest|https://github.com/riptano/cassandra-dtest/blob/fa7d63092807bb9c2100d0608414082c4fe7a843/cql_tests.py#L3721-3744].

 Lift limitation that order by columns must be selected for IN queries
 -

 Key: CASSANDRA-4911
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4911
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.2.0 beta 1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1

 Attachments: 4911-v2.txt, 4911.txt


 This is the followup of CASSANDRA-4645. We should remove the limitation that 
 for IN queries, you must have the columns on which you have an ORDER BY in 
 the select clause.
 For that, we'll need to automatically add the columns on which we have an 
 ORDER BY to the ones queried internally, and remove them afterwards (once the 
 sorting is done) from the resultSet.
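The add-then-strip step described above can be sketched as follows (a hypothetical helper working on generic row maps, not Cassandra's internal code):

```java
import java.util.*;

// Sketch: internally fetch the ORDER BY column even when it isn't selected,
// sort the rows on it, then strip it from the rows returned to the client.
public class OrderByTrim {
    static List<Map<String, Object>> sortThenTrim(List<Map<String, Object>> rows,
                                                  String orderCol,
                                                  List<String> selected) {
        // Sort on the internally-fetched ORDER BY column.
        rows.sort((a, b) ->
            ((Comparable<Object>) a.get(orderCol)).compareTo(b.get(orderCol)));
        // Rebuild each row keeping only the columns the user actually selected.
        List<Map<String, Object>> trimmed = new ArrayList<>();
        for (Map<String, Object> row : rows) {
            Map<String, Object> out = new LinkedHashMap<>();
            for (String col : selected)
                out.put(col, row.get(col));   // the ORDER BY column is dropped here
            trimmed.add(out);
        }
        return trimmed;
    }
}
```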





[jira] [Commented] (CASSANDRA-6561) Static columns in CQL3

2014-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898963#comment-13898963
 ] 

Sylvain Lebresne commented on CASSANDRA-6561:
-

bq. ALTER TABLE ADD should support adding a static column, but doesn't

Right, pushed an additional commit for that on the same branch as before 
(https://github.com/pcmanus/cassandra/commits/6561-3). I updated the dtest for 
that too.

bq. dropping a static column doesn't work fully (it won't be compacted away)

You might have to be more specific than that. As far as I can tell, there is 
nothing special that should be done for static columns outside of making sure 
the column name gets added to droppedColumns, and that's the case. I 
confirmed that with a quick manual test too: unless sstable2json is lying to 
me, the dropped static columns do get compacted away.

 Static columns in CQL3
 --

 Key: CASSANDRA-6561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6561
 Project: Cassandra
  Issue Type: New Feature
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.6


 I'd like to suggest the following idea for adding static columns to CQL3.  
 I'll note that the basic idea has been suggested by jhalliday on irc but the 
 rest of the details are mine and I should be blamed for anything stupid in 
 what follows.
 Let me start with a rationale: there are 2 main families of CF that have been 
 historically used in Thrift: static ones and dynamic ones. CQL3 handles both 
 families through the presence or not of clustering columns. There are however 
 some cases where mixing both behaviors has its uses. I like to think of those 
 use cases as 3 broad categories:
 # to denormalize small amounts of not-entirely-static data in otherwise 
 static entities. Think, say, "tags" for a product or "custom properties" in a 
 user profile. This is why we've added CQL3 collections. Importantly, this is 
 the *only* use case for which collections are meant (which doesn't diminish 
 their usefulness imo, and I wouldn't disagree that we've maybe not 
 communicated this too well).
 # to optimize fetching both a static entity and related dynamic ones. Say you 
 have blog posts, and each post has associated comments (chronologically 
 ordered). *And* say that a very common query is "fetch a post and its 50 last 
 comments". In that case, it *might* be beneficial to store a blog post 
 (static entity) in the same underlying CF as its comments for performance 
 reasons, so that "fetch a post and its 50 last comments" is just one slice 
 internally.
 # you want to CAS rows of a dynamic partition based on some partition 
 condition. This is the same use case that CASSANDRA-5633 exists for.
 As said above, 1) is already covered by collections, but 2) and 3) are not 
 (and
 I strongly believe collections are not the right fit, API wise, for those).
 Also, note that I don't want to underestimate the usefulness of 2). In most 
 cases, using a separate table for the blog posts and the comments is The 
 Right Solution, and trying to do 2) is premature optimisation. Yet, when used 
 properly, that kind of optimisation can make a difference, so I think having 
 a relatively native solution for it in CQL3 could make sense.
 Regarding 3), though CASSANDRA-5633 would provide one solution for it, I have 
 the feeling that static columns actually are a more natural approach (in 
 terms of API). That's arguably more of a personal opinion/feeling though.
 So long story short, CQL3 lacks a way to mix both some static and dynamic 
 rows in the same partition of the same CQL3 table, and I think such a tool 
 could have its uses.
 The proposal is thus to allow "static" columns. Static columns would only 
 make sense in tables with clustering columns (the "dynamic" ones). A static 
 column value would be static to the partition (all rows of the partition 
 would share the value for such a column). The syntax would just be:
 {noformat}
 CREATE TABLE t (
   k text,
   s text static,
   i int,
   v text,
   PRIMARY KEY (k, i)
 )
 {noformat}
 then you'd get:
 {noformat}
 INSERT INTO t(k, s, i, v) VALUES ('k0', 'I''m shared',       0, 'foo');
 INSERT INTO t(k, s, i, v) VALUES ('k0', 'I''m still shared', 1, 'bar');
 SELECT * FROM t;
  k  |        s         | i |  v
 ---------------------------------
  k0 | I'm still shared | 0 | foo
  k0 | I'm still shared | 1 | bar
 {noformat}
 There would be a few semantic details to decide on regarding deletions, ttl, 
 etc. but let's see if we agree it's a good idea first before ironing those 
 out.
 One last point is the implementation. Though I do think this idea has merits, 
 it's definitively not useful enough to justify rewriting the storage engine 
 for it. But I think we can support this relatively easily (emphasis on 
 relatively :)), which is 

[jira] [Created] (CASSANDRA-6691) Improvements and Fixes to Stress

2014-02-12 Thread Benedict (JIRA)
Benedict created CASSANDRA-6691:
---

 Summary: Improvements and Fixes to Stress
 Key: CASSANDRA-6691
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6691
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1


There were a couple of minor issues with the new stress:

1) The warmup period did not scale up as the cluster size increased
2) The mixed workload did not work with CQL

At the same time, I have introduced a change in behaviour in the way the 
default column values are generated, so that they are deterministically 
derived from the key. I have then modified read operations to verify that the 
data they fetch is the same as should have been inserted, so that stress does 
some degree of data-quality checking at the same time. For the moment the 
values generated never vary for a given key, so this does nothing to test 
consistency; it only tests for corruption.
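One way to derive values deterministically from a key is to seed a PRNG with it, so a reader can regenerate and compare the expected bytes. This is a sketch of the idea only, not the actual stress-tool code:

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of key-derived deterministic values: seeding a PRNG with the key
// means the same key always yields the same bytes, so reads can be verified
// against what any writer must have inserted.
public class DeterministicValues {
    static byte[] valueForKey(long key, int size) {
        Random rnd = new Random(key);   // same key => same seed => same bytes
        byte[] value = new byte[size];
        rnd.nextBytes(value);
        return value;
    }

    public static void main(String[] args) {
        byte[] written = valueForKey(42L, 16);  // what a writer would insert
        byte[] expected = valueForKey(42L, 16); // what a reader regenerates
        System.out.println(Arrays.equals(written, expected)); // prints "true"
    }
}
```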





[jira] [Created] (CASSANDRA-6692) AtomicBTreeColumns Improvements

2014-02-12 Thread Benedict (JIRA)
Benedict created CASSANDRA-6692:
---

 Summary: AtomicBTreeColumns Improvements
 Key: CASSANDRA-6692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6692
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1


There are two improvements to make to the BTree code that should help:

1) It turns out Stack Allocation is more rubbish than we had hoped, and so the 
fast route actually allocates garbage. It's unlikely this reduces throughput, 
but the increased young-gen pressure is probably unwelcome. I propose to remove 
the fast route for now.

2) It is not uncommon to race to perform an update, so that the new values are 
actually out-of-date when we come to modify the tree. In this case the update 
should recognise that the original (portion of the) tree has not been 
modified, and simply return it, without allocating a new one.





[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-02-12 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899081#comment-13899081
 ] 

Marcus Eriksson commented on CASSANDRA-6689:


[~jasobrown] [~xedin] I wouldn't mind more sets of eyes on this patch, so, if 
you have time, please take a look!

 Partially Off Heap Memtables
 

 Key: CASSANDRA-6689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1


 Move the contents of ByteBuffers off-heap for records written to a memtable.
 (See comments for details)





[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-02-12 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899096#comment-13899096
 ] 

Benedict commented on CASSANDRA-6689:
-

There are some natural boundaries if you want to share the burden. Everything 
inside of utils.concurrent is pretty isolated from everything outside, so could 
easily be vetted independently.

Also, the utilisation of Referrer/RefAction is probably going to be a 
painstaking thing to vet (that's what makes up the majority of the small 
touches outside of the main changes), and quite independent of their 
declarations. We just need to be certain we always use the correct type of 
RefAction, and never let one disappear somewhere - OutboundTCPConnection and 
native transport writing are the two danger areas here (also Memtable flushing 
needs a bit of care, but is definitely less scary).

The most difficult thing to review is going to be the main body of work inside 
of utils.memory, however. This is pretty hardcore lock-free stuff, and the 
thing we're looking for is _unintended_ race conditions (there are lots of 
intended races) - in particular pay attention to the way in which we now 
asynchronously manage the subpool and suballocator (ledgers of how much 
we've allocated / claimed / are reclaiming), and obviously most importantly 
that we never accidentally overwrite data that is being read elsewhere. This 
should all hopefully be very clearly documented both at the level of 
abstraction and the individual points where interesting / dangerous things 
happen. But try to figure it out for yourself as well, in case I and my tests 
missed something. I will be doing further tests in the near future, but I much 
prefer to catch things by eye if possible.

Always feel free to throw up a "this bit isn't well explained" flag and I'll 
try to improve it. I want this stuff to be as clearly self-documenting as 
possible.





[jira] [Created] (CASSANDRA-6693) cqlsh fails to insert row with huge blob

2014-02-12 Thread Aleksander Stasiak (JIRA)
Aleksander Stasiak created CASSANDRA-6693:
-

 Summary: cqlsh fails to insert row with huge blob
 Key: CASSANDRA-6693
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6693
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Linux x64, cassandra 2.0.5 python 2.7
Reporter: Aleksander Stasiak


cqlsh throws: 
Traceback (most recent call last):
  File "/usr/bin/cqlsh", line 903, in perform_statement_untraced
    self.cursor.execute(statement, decoder=decoder)
  File "/usr/share/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cursor.py", line 80, in execute
    response = self.get_response(prepared_q, cl)
  File "/usr/share/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/thrifteries.py", line 77, in get_response
    return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
  File "/usr/share/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/thrifteries.py", line 96, in handle_cql_execution_errors
    return executor(*args, **kwargs)
  File "/usr/share/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cassandra/Cassandra.py", line 1830, in execute_cql3_query
    self.send_execute_cql3_query(query, compression, consistency)
  File "/usr/share/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cassandra/Cassandra.py", line 1841, in send_execute_cql3_query
    self._oprot.trans.flush()
  File "/usr/share/cassandra/lib/thrift-python-internal-only-0.9.1.zip/thrift/transport/TTransport.py", line 292, in flush
    self.__trans.write(buf)
  File "/usr/share/cassandra/lib/thrift-python-internal-only-0.9.1.zip/thrift/transport/TSocket.py", line 128, in write
    plus = self.handle.send(buff)
error: [Errno 104] Connection reset by peer
while inserting a row with a blob of size ca. 30M. cqlsh then disconnects and 
refuses to send any other query until restarted. 
I haven't tested what the minimal blob size is that breaks the connection. The 
same CQL can easily be executed with the java driver (version 2.0.0-rc2):
session.execute(cql)





[jira] [Commented] (CASSANDRA-6683) BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch

2014-02-12 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899130#comment-13899130
 ] 

Brandon Williams commented on CASSANDRA-6683:
-

You could test by disabling the dynamic snitch with {{noformat}}dynamic_snitch: 
false{{noformat}} so the sorting is always the same.



[jira] [Comment Edited] (CASSANDRA-6683) BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch

2014-02-12 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899130#comment-13899130
 ] 

Brandon Williams edited comment on CASSANDRA-6683 at 2/12/14 2:12 PM:
--

You could test by disabling the dynamic snitch with {noformat}dynamic_snitch: 
false{noformat} so the sorting is always the same.


was (Author: brandon.williams):
You could test by disabling the dynamic snitch with {{noformat}}dynamic_snitch: 
false{{noformat}} so the sorting is always the same.



[jira] [Resolved] (CASSANDRA-6693) cqlsh fails to insert row with huge blob

2014-02-12 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6693.
-

Resolution: Invalid

You're probably exceeding your thrift_framed_transport_size_in_mb setting which 
defaults to 15MB.  The java driver isn't using thrift, which is why it works.
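For reference, the relevant cassandra.yaml knob looks like this (the value below is only an example; as noted above, the default is 15):

```yaml
# cassandra.yaml -- frame size limit for thrift-based clients such as cqlsh.
# Default is 15; raise it (example value below) if large blobs must be sent
# through thrift. The value 60 here is an arbitrary illustration.
thrift_framed_transport_size_in_mb: 60
```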



[jira] [Created] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-02-12 Thread Benedict (JIRA)
Benedict created CASSANDRA-6694:
---

 Summary: Slightly More Off-Heap Memtables
 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1


The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as the 
on-heap overhead is still very large. It should not be tremendously difficult 
to extend these changes so that we allocate entire Cells off-heap, instead of 
multiple BBs per Cell (with all their associated overhead).

The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
bytes per cell on average for the btree overhead, for a total overhead of 
around 20-22 bytes). This translates to 8-byte object overhead, 4-byte address 
(we will do alignment tricks like the VM to allow us to address a reasonably 
large memory space, although this trick is unlikely to last us forever, at 
which point we will have to bite the bullet and accept a 24-byte per cell 
overhead), and 4-byte object reference for maintaining our internal list of 
allocations, which is unfortunately necessary since we cannot safely (and 
cheaply) walk the object graph we allocate otherwise, which is necessary for 
(allocation-) compaction and pointer rewriting.

The ugliest thing here is going to be implementing the various CellName 
instances so that they may be backed by native memory OR heap memory.





[jira] [Created] (CASSANDRA-6695) Cassandra should not push schema updates to nodes with unknown version

2014-02-12 Thread JIRA
Piotr Kołaczkowski created CASSANDRA-6695:
-

 Summary: Cassandra should not push schema updates to nodes with 
unknown version
 Key: CASSANDRA-6695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6695
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski


MigrationManager#announce() must not send schema to nodes with unknown version, 
because they might be older ones.
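The proposed guard amounts to pushing schema only to endpoints whose version has actually been learned. A minimal sketch, with hypothetical names and a made-up version constant (the real MigrationManager logic differs):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed guard (hypothetical names, not MigrationManager
// itself): only push schema to endpoints whose messaging version is known
// and recent enough.
public class SchemaPushGuard {
    static final int MIN_COMPATIBLE_VERSION = 6; // assumption for illustration

    // null means the node's version has not been learned yet
    static boolean shouldPushSchema(Integer knownVersion) {
        return knownVersion != null && knownVersion >= MIN_COMPATIBLE_VERSION;
    }

    public static void main(String[] args) {
        Map<String, Integer> versions = new HashMap<>();
        versions.put("10.0.0.1", 7);
        versions.put("10.0.0.2", null); // version unknown: skip this node
        for (Map.Entry<String, Integer> e : versions.entrySet())
            System.out.println(e.getKey() + " push=" + shouldPushSchema(e.getValue()));
    }
}
```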







[jira] [Updated] (CASSANDRA-6695) Cassandra should not push schema updates to nodes with unknown version

2014-02-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-6695:
--

Description: 
MigrationManager#announce() must not send schema to nodes it doesn't know the 
version of, because they might be older ones.



  was:
MigrationManager#announce() must not send schema to nodes with unknown version, 
because they might be older ones.






[jira] [Updated] (CASSANDRA-6695) Cassandra should not push schema updates to nodes with unknown version

2014-02-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-6695:
--

Attachment: CASSANDRA-6695-2.0.patch



[jira] [Updated] (CASSANDRA-6695) Cassandra should not push schema updates to nodes with unknown version

2014-02-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-6695:
--

Attachment: 6695.patch



[jira] [Updated] (CASSANDRA-6695) Cassandra should not push schema updates to nodes with unknown version

2014-02-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Kołaczkowski updated CASSANDRA-6695:
--

Attachment: (was: CASSANDRA-6695-2.0.patch)



[jira] [Updated] (CASSANDRA-6695) Cassandra should not push schema updates to nodes with unknown version

2014-02-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6695:
--

Reviewer: Aleksey Yeschenko

 Cassandra should not push schema updates to nodes with unknown version
 --

 Key: CASSANDRA-6695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6695
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
 Attachments: 6695.patch


 MigrationManager#announce() must not send schema to nodes it doesn't know the 
 version of, because they might be older ones.





[jira] [Commented] (CASSANDRA-6688) Avoid possible sstable overlaps with leveled compaction

2014-02-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899212#comment-13899212
 ] 

Jonathan Ellis commented on CASSANDRA-6688:
---

+1

 Avoid possible sstable overlaps with leveled compaction
 ---

 Key: CASSANDRA-6688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6688
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.0.6

 Attachments: 0001-6688.patch, 6688-v2.txt, 6688-v3.patch


 Two cases where we can end up with overlapping sstables in the leveled 
 manifest:
 First one is when we skip levels during compaction. Here we need to make sure 
 we are not compacting in newLevel - 1, since if, for example, we are doing an 
 L1 -> L2 compaction and then start a new L0 compaction where we decide to 
 skip L1, we could have overlapping sstables in L2 when the compactions are 
 done. This case is new in 2.0, since we check whether we skip levels before 
 the compaction starts.
 Second case is where we try to include as many overlapping L0 sstables as 
 possible; here we could add sstables that are not compacting but overlap 
 sstables that are.
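The second case above boils down to an interval-overlap check against the set of sstables that are already compacting. A minimal sketch of that guard in plain Java, where `Range` and `safeCandidates` are hypothetical stand-ins (the real code works on sstable first/last tokens inside the leveled manifest):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of the overlap guard discussed above: before
// adding a candidate sstable to a compaction, verify that its token range
// does not overlap any sstable that is already being compacted.
public class OverlapGuard
{
    // Stand-in for an sstable's first/last token.
    static final class Range
    {
        final long first, last;
        Range(long first, long last) { this.first = first; this.last = last; }
    }

    // Two closed ranges overlap iff each starts no later than the other ends.
    static boolean overlaps(Range a, Range b)
    {
        return a.first <= b.last && b.first <= a.last;
    }

    // Keep only candidates that overlap none of the compacting ranges.
    static List<Range> safeCandidates(List<Range> candidates, List<Range> compacting)
    {
        List<Range> safe = new ArrayList<>();
        for (Range c : candidates)
        {
            boolean clash = false;
            for (Range busy : compacting)
                if (overlaps(c, busy))
                    clash = true;
            if (!clash)
                safe.add(c);
        }
        return safe;
    }
}
```

Dropping any candidate that touches a compacting range is the conservative choice: it may delay a compaction, but it can never produce overlap in the destination level.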





[jira] [Updated] (CASSANDRA-6596) Split out outgoing stream throughput within a DC and inter-DC

2014-02-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6596:
--

Reviewer: Benedict

 Split out outgoing stream throughput within a DC and inter-DC
 -

 Key: CASSANDRA-6596
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6596
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremy Hanna
Assignee: Vijay
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-CASSANDRA-6596.patch


 Currently the outgoing stream throughput setting doesn't differentiate 
 between streaming to another node in the same DC and streaming to another DC 
 across a potentially bandwidth-limited link. It would be nice to have that 
 split out so that each type of link could be tuned separately.
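As a sketch of what the requested split might look like in cassandra.yaml (the second option name is an illustrative assumption, not necessarily what the attached patch introduces):

```yaml
# Existing single knob: caps all outbound streaming, regardless of destination.
stream_throughput_outbound_megabits_per_sec: 200

# Hypothetical new knob: a tighter cap applied only to streams that cross
# datacenter boundaries, where the link is typically the bottleneck.
inter_dc_stream_throughput_outbound_megabits_per_sec: 50
```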





[jira] [Commented] (CASSANDRA-6596) Split out outgoing stream throughput within a DC and inter-DC

2014-02-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899215#comment-13899215
 ] 

Jonathan Ellis commented on CASSANDRA-6596:
---

Adding [~benedict] as reviewer

 Split out outgoing stream throughput within a DC and inter-DC
 -

 Key: CASSANDRA-6596
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6596
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremy Hanna
Assignee: Vijay
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-CASSANDRA-6596.patch


 Currently the outgoing stream throughput setting doesn't differentiate 
 between streaming to another node in the same DC and streaming to another DC 
 across a potentially bandwidth-limited link. It would be nice to have that 
 split out so that each type of link could be tuned separately.





[jira] [Updated] (CASSANDRA-6575) By default, Cassandra should refuse to start if JNA can't be initialized properly

2014-02-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6575:
--

Reviewer: Joshua McKenzie  (was: Dave Brosius)

 By default, Cassandra should refuse to start if JNA can't be initialized 
 properly
 -

 Key: CASSANDRA-6575
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6575
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Tupshin Harper
Assignee: Clément Lardeur
Priority: Minor
  Labels: lhf
 Fix For: 2.1

 Attachments: trunk-6575-v2.patch, trunk-6575-v3.patch, 
 trunk-6575.patch


 Failure to have JNA working properly is such a common undetected problem that 
 it would be far preferable to have Cassandra refuse to start unless JNA is 
 initialized. In theory, this should be much less of a problem with Cassandra 
 2.1 due to CASSANDRA-5872, but even there it might fail due to native lib 
 problems or might otherwise be misconfigured. A yaml override, such as 
 boot_without_jna, would allow deliberately overriding this policy.
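The proposed default-deny policy reduces to a small startup guard. A hedged sketch, where `jnaInitialized` and `bootWithoutJna` are hypothetical stand-ins for the real JNA probe and the suggested yaml flag:

```java
// Minimal sketch of the proposed policy: refuse to start unless JNA came up,
// with an explicit operator override. Both parameters are hypothetical
// stand-ins, not Cassandra internals.
public class JnaStartupGuard
{
    static void checkJna(boolean jnaInitialized, boolean bootWithoutJna)
    {
        if (jnaInitialized)
            return; // normal case: JNA loaded, nothing to check

        if (bootWithoutJna)
        {
            // operator deliberately opted out of the check
            System.err.println("WARN: starting without JNA; native memory features unavailable");
            return;
        }

        // default: fail fast instead of silently running degraded
        throw new IllegalStateException(
            "JNA failed to initialize; set boot_without_jna: true to start anyway");
    }
}
```

Failing fast here is the whole point of the ticket: a loud crash at startup is easier to diagnose than a node that quietly runs without mlock.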





[jira] [Created] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-02-12 Thread sankalp kohli (JIRA)
sankalp kohli created CASSANDRA-6696:


 Summary: Drive replacement in JBOD can cause data to reappear. 
 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Bug
Reporter: sankalp kohli
Priority: Minor


In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
empty one and repair is run. 
This can cause deleted data to come back in some cases. The same is true for 
corrupt sstables, where we delete the corrupt sstable and run repair. 
Here is an example:
Say we have 3 nodes A, B and C with RF=3 and GC grace=10 days. 
row=sankalp col=sankalp was written 20 days ago and successfully went to all 
three nodes. 
Then a delete/tombstone was written successfully for the same row and column 
15 days ago. 
Since this tombstone is older than gc grace, it was purged by compaction on 
nodes A and B together with the actual data, so there is no trace of this row 
column on nodes A and B.
Now on node C, say the original data is in drive1 and the tombstone is in 
drive2, and compaction has not yet reclaimed them. 
Drive2 becomes corrupt and is replaced with a new empty drive. 
Due to the replacement, the tombstone is now gone, and row=sankalp col=sankalp 
has come back to life. 
Now after replacing the drive we run repair, and this data will be propagated 
to all nodes. 

Note: This is still a problem even if we run repair every gc grace.
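The timeline above can be condensed into a toy model: compaction purges an expired tombstone together with the data it shadows, while repair keeps the newest cell seen on any replica, so a replica that lost only its tombstone drive "wins" and resurrects the data. A sketch under those assumptions (all names are illustrative, not Cassandra internals):

```java
// Toy model of the resurrection scenario described above. A "cell" is just a
// write timestamp plus a tombstone flag.
public class JbodResurrection
{
    static final class Cell
    {
        final long timestamp;
        final boolean tombstone;
        Cell(long ts, boolean tomb) { this.timestamp = ts; this.tombstone = tomb; }
    }

    // Compaction drops an expired tombstone along with the data it shadows,
    // mirroring what happened on nodes A and B in the example.
    static Cell compact(Cell data, Cell tomb, long now, long gcGrace)
    {
        if (tomb != null && now - tomb.timestamp > gcGrace)
            return null; // data and expired tombstone both purged
        return tomb != null ? tomb : data;
    }

    // Repair reconciles replicas by keeping the newest version seen anywhere.
    static Cell repair(Cell a, Cell b)
    {
        if (a == null) return b;
        if (b == null) return a;
        return a.timestamp >= b.timestamp ? a : b;
    }
}
```

In the example: nodes A and B compact to nothing; node C loses the tombstone drive, leaving only the data cell; repair between an empty replica and node C then propagates the deleted value back everywhere.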
 





[jira] [Updated] (CASSANDRA-6695) Cassandra should not push schema updates to nodes with unknown version

2014-02-12 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6695:
-

Attachment: 6695-v2.txt

 Cassandra should not push schema updates to nodes with unknown version
 --

 Key: CASSANDRA-6695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6695
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
 Attachments: 6695-v2.txt, 6695.patch


 MigrationManager#announce() must not send schema to nodes it doesn't know the 
 version of, because they might be older ones.





[jira] [Updated] (CASSANDRA-6695) Don't exchange schema between nodes with different versions (no pull, no push)

2014-02-12 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6695:
-

Reproduced In:   (was: 2.0.5)
  Description: 
Subject. Don't push schema to unknown-, or differently major-versioned nodes, 
and don't pull schema from them, either.

Since we don't support schema altering during upgrade, and adding nodes during 
cluster upgrades is also a non-recommended thing, this is what we are going to 
do.

Until CASSANDRA-6038, that is.


  was:
MigrationManager#announce() must not send schema to nodes it doesn't know the 
version of, because they might be older ones.



Fix Version/s: 2.1
   2.0.6
   1.2.16
 Assignee: Aleksey Yeschenko  (was: Piotr Kołaczkowski)
   Issue Type: Improvement  (was: Bug)
  Summary: Don't exchange schema between nodes with different versions 
(no pull, no push)  (was: Cassandra should not push schema updates to nodes 
with unknown version)

Hijacking this issue to make broader changes, and apply them to 1.2 as well. 
Also reclassifying this as an improvement, just because.

 Don't exchange schema between nodes with different versions (no pull, no push)
 --

 Key: CASSANDRA-6695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6695
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Aleksey Yeschenko
 Fix For: 1.2.16, 2.0.6, 2.1

 Attachments: 6695-v2.txt, 6695.patch


 Subject. Don't push schema to unknown-, or differently major-versioned nodes, 
 and don't pull schema from them, either.
 Since we don't support schema altering during upgrade, and adding nodes 
 during cluster upgrades is also a non-recommended thing, this is what we are 
 going to do.
 Until CASSANDRA-6038, that is.





[jira] [Commented] (CASSANDRA-6695) Don't exchange schema between nodes with different versions (no pull, no push)

2014-02-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899243#comment-13899243
 ] 

Piotr Kołaczkowski commented on CASSANDRA-6695:
---

LGTM :)

 Don't exchange schema between nodes with different versions (no pull, no push)
 --

 Key: CASSANDRA-6695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6695
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Aleksey Yeschenko
 Fix For: 1.2.16, 2.0.6, 2.1

 Attachments: 6695-v2.txt, 6695.patch


 Subject. Don't push schema to unknown-, or differently major-versioned nodes, 
 and don't pull schema from them, either.
 Since we don't support schema altering during upgrade, and adding nodes 
 during cluster upgrades is also a non-recommended thing, this is what we are 
 going to do.
 Until CASSANDRA-6038, that is.





[jira] [Updated] (CASSANDRA-6695) Don't exchange schema between nodes with different versions (no pull, no push)

2014-02-12 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6695:
-

Reviewer: Piotr Kołaczkowski  (was: Aleksey Yeschenko)

 Don't exchange schema between nodes with different versions (no pull, no push)
 --

 Key: CASSANDRA-6695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6695
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Aleksey Yeschenko
 Fix For: 1.2.16, 2.0.6, 2.1

 Attachments: 6695-v2.txt, 6695.patch


 Subject. Don't push schema to unknown-, or differently major-versioned nodes, 
 and don't pull schema from them, either.
 Since we don't support schema altering during upgrade, and adding nodes 
 during cluster upgrades is also a non-recommended thing, this is what we are 
 going to do.
 Until CASSANDRA-6038, that is.





[jira] [Comment Edited] (CASSANDRA-6695) Don't exchange schema between nodes with different versions (no pull, no push)

2014-02-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899243#comment-13899243
 ] 

Piotr Kołaczkowski edited comment on CASSANDRA-6695 at 2/12/14 4:34 PM:


+1 LGTM :)


was (Author: pkolaczk):
LGTM :)

 Don't exchange schema between nodes with different versions (no pull, no push)
 --

 Key: CASSANDRA-6695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6695
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Piotr Kołaczkowski
Assignee: Aleksey Yeschenko
 Fix For: 1.2.16, 2.0.6, 2.1

 Attachments: 6695-v2.txt, 6695.patch


 Subject. Don't push schema to unknown-, or differently major-versioned nodes, 
 and don't pull schema from them, either.
 Since we don't support schema altering during upgrade, and adding nodes 
 during cluster upgrades is also a non-recommended thing, this is what we are 
 going to do.
 Until CASSANDRA-6038, that is.





git commit: Don't exchange schema between nodes with different versions

2014-02-12 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 00a8b1e6e -> b2dfaed31


Don't exchange schema between nodes with different versions

patch by Aleksey Yeschenko; reviewed by Piotr Kołaczkowski for
CASSANDRA-6695


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b2dfaed3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b2dfaed3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b2dfaed3

Branch: refs/heads/cassandra-1.2
Commit: b2dfaed3170c8b5b96a7ea8e7df6129490ead3be
Parents: 00a8b1e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 12 19:37:34 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 12 19:37:34 2014 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/service/MigrationManager.java | 20 
 2 files changed, 9 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dfaed3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0674dde..de7c307 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,6 +5,7 @@
  * Fix mean cells and mean row size per sstable calculations (CASSANDRA-6667)
  * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
  * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
+ * Don't exchange schema between nodes with different versions (CASSANDRA-6695)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dfaed3/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java b/src/java/org/apache/cassandra/service/MigrationManager.java
index 5a02e3b..68d0bad 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -134,13 +134,11 @@ public class MigrationManager
     private static boolean shouldPullSchemaFrom(InetAddress endpoint)
     {
         /*
-         * Don't request schema from nodes with versions younger than 1.1.7 (timestamps in versions prior to 1.1.7 are broken)
-         * Don't request schema from nodes with a higher major (may have incompatible schema)
+         * Don't request schema from nodes with a different or unknown major version (may have incompatible schema)
          * Don't request schema from fat clients
          */
         return MessagingService.instance().knowsVersion(endpoint)
-               && MessagingService.instance().getVersion(endpoint) >= MessagingService.VERSION_117
-               && MessagingService.instance().getVersion(endpoint) <= MessagingService.current_version
+               && MessagingService.instance().getVersion(endpoint) == MessagingService.current_version
                && !Gossiper.instance.isFatClient(endpoint);
     }
 
@@ -291,15 +289,13 @@ public class MigrationManager
 
         for (InetAddress endpoint : Gossiper.instance.getLiveMembers())
         {
-            if (endpoint.equals(FBUtilities.getBroadcastAddress()))
-                continue; // we've dealt with localhost already
-
-            // don't send schema to the nodes with the versions older than current major
-            if (MessagingService.instance().getVersion(endpoint) < MessagingService.current_version)
-                continue;
-
-            pushSchemaMutation(endpoint, schema);
+            // only push schema to nodes with known and equal versions
+            if (!endpoint.equals(FBUtilities.getBroadcastAddress())
+                && MessagingService.instance().knowsVersion(endpoint)
+                && MessagingService.instance().getVersion(endpoint) == MessagingService.current_version)
+                pushSchemaMutation(endpoint, schema);
         }
+
         return f;
     }
 



[1/2] git commit: Don't exchange schema between nodes with different versions

2014-02-12 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 80cebec5d -> babc2de3e


Don't exchange schema between nodes with different versions

patch by Aleksey Yeschenko; reviewed by Piotr Kołaczkowski for
CASSANDRA-6695


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b2dfaed3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b2dfaed3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b2dfaed3

Branch: refs/heads/cassandra-2.0
Commit: b2dfaed3170c8b5b96a7ea8e7df6129490ead3be
Parents: 00a8b1e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 12 19:37:34 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 12 19:37:34 2014 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/service/MigrationManager.java | 20 
 2 files changed, 9 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dfaed3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0674dde..de7c307 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,6 +5,7 @@
  * Fix mean cells and mean row size per sstable calculations (CASSANDRA-6667)
  * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
  * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
+ * Don't exchange schema between nodes with different versions (CASSANDRA-6695)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dfaed3/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java b/src/java/org/apache/cassandra/service/MigrationManager.java
index 5a02e3b..68d0bad 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -134,13 +134,11 @@ public class MigrationManager
     private static boolean shouldPullSchemaFrom(InetAddress endpoint)
    {
         /*
-         * Don't request schema from nodes with versions younger than 1.1.7 (timestamps in versions prior to 1.1.7 are broken)
-         * Don't request schema from nodes with a higher major (may have incompatible schema)
+         * Don't request schema from nodes with a different or unknown major version (may have incompatible schema)
          * Don't request schema from fat clients
          */
         return MessagingService.instance().knowsVersion(endpoint)
-               && MessagingService.instance().getVersion(endpoint) >= MessagingService.VERSION_117
-               && MessagingService.instance().getVersion(endpoint) <= MessagingService.current_version
+               && MessagingService.instance().getVersion(endpoint) == MessagingService.current_version
                && !Gossiper.instance.isFatClient(endpoint);
     }
 
@@ -291,15 +289,13 @@ public class MigrationManager
 
         for (InetAddress endpoint : Gossiper.instance.getLiveMembers())
         {
-            if (endpoint.equals(FBUtilities.getBroadcastAddress()))
-                continue; // we've dealt with localhost already
-
-            // don't send schema to the nodes with the versions older than current major
-            if (MessagingService.instance().getVersion(endpoint) < MessagingService.current_version)
-                continue;
-
-            pushSchemaMutation(endpoint, schema);
+            // only push schema to nodes with known and equal versions
+            if (!endpoint.equals(FBUtilities.getBroadcastAddress())
+                && MessagingService.instance().knowsVersion(endpoint)
+                && MessagingService.instance().getVersion(endpoint) == MessagingService.current_version)
+                pushSchemaMutation(endpoint, schema);
         }
+
         return f;
     }
 



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-12 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/service/MigrationManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/babc2de3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/babc2de3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/babc2de3

Branch: refs/heads/cassandra-2.0
Commit: babc2de3e58e41b0dc2b9534c4514adfdd54be37
Parents: 80cebec b2dfaed
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 12 19:40:46 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 12 19:40:46 2014 +0300

--
 CHANGES.txt   |  1 +
 .../cassandra/service/MigrationManager.java   | 18 --
 2 files changed, 9 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/babc2de3/CHANGES.txt
--
diff --cc CHANGES.txt
index f9b2032,de7c307..a4dc8fd
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -17,33 -5,24 +17,34 @@@ Merged from 1.2
   * Fix mean cells and mean row size per sstable calculations (CASSANDRA-6667)
   * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
   * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
+  * Don't exchange schema between nodes with different versions 
(CASSANDRA-6695)
  
 -
 -1.2.15
 - * Move handling of migration event source to solve bootstrap race 
(CASSANDRA-6648)
 - * Make sure compaction throughput value doesn't overflow with int math 
(CASSANDRA-6647)
 -
 -
 -1.2.14
 - * Reverted code to limit CQL prepared statement cache by size 
(CASSANDRA-6592)
 - * add cassandra.default_messaging_version property to allow easier
 -   upgrading from 1.1 (CASSANDRA-6619)
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a hint (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 + * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
 + * Add ks.cf names to tombstone logging (CASSANDRA-6597)
 + * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
 + * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
 + * Delete unfinished compaction incrementally (CASSANDRA-6086)
 + * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
 + * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
 + * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 + * Don't special case received counts in CAS timeout exceptions 
(CASSANDRA-6595)
 + * Add support for 2.1 global counter shards (CASSANDRA-6505)
 + * Fix NPE when streaming connection is not yet established (CASSANDRA-6210)
 + * Avoid rare duplicate read repair triggering (CASSANDRA-6606)
 + * Fix paging discardFirst (CASSANDRA-6555)
 + * Fix ArrayIndexOutOfBoundsException in 2ndary index query (CASSANDRA-6470)
 + * Release sstables upon rebuilding 2i (CASSANDRA-6635)
 + * Add AbstractCompactionStrategy.startup() method (CASSANDRA-6637)
 + * SSTableScanner may skip rows during cleanup (CASSANDRA-6638)
 + * sstables from stalled repair sessions can resurrect deleted data 
(CASSANDRA-6503)
 + * Switch stress to use ITransportFactory (CASSANDRA-6641)
 + * Fix IllegalArgumentException during prepare (CASSANDRA-6592)
 + * Fix possible loss of 2ndary index entries during compaction 
(CASSANDRA-6517)
 + * Fix direct Memory on architectures that do not support unaligned long 
access
 +   (CASSANDRA-6628)
 + * Let scrub optionally skip broken counter partitions (CASSANDRA-5930)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
   * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
   * Add ability to throttle batchlog replay (CASSANDRA-6550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/babc2de3/src/java/org/apache/cassandra/service/MigrationManager.java
--



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-02-12 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5b3515e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5b3515e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5b3515e

Branch: refs/heads/trunk
Commit: f5b3515eec09a4609200dcfdaab8965665e57bdb
Parents: 5cf381f babc2de
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 12 19:41:24 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 12 19:41:24 2014 +0300

--
 CHANGES.txt   |  1 +
 .../cassandra/service/MigrationManager.java   | 18 --
 2 files changed, 9 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5b3515e/CHANGES.txt
--
diff --cc CHANGES.txt
index 3831b38,a4dc8fd..a45df89
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -49,8 -17,8 +49,9 @@@ Merged from 1.2
   * Fix mean cells and mean row size per sstable calculations (CASSANDRA-6667)
   * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
   * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
+  * Don't exchange schema between nodes with different versions 
(CASSANDRA-6695)
  
 +
  2.0.5
   * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
   * Add ks.cf names to tombstone logging (CASSANDRA-6597)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5b3515e/src/java/org/apache/cassandra/service/MigrationManager.java
--



[1/3] git commit: Don't exchange schema between nodes with different versions

2014-02-12 Thread aleksey
Updated Branches:
  refs/heads/trunk 5cf381f57 -> f5b3515ee


Don't exchange schema between nodes with different versions

patch by Aleksey Yeschenko; reviewed by Piotr Kołaczkowski for
CASSANDRA-6695


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b2dfaed3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b2dfaed3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b2dfaed3

Branch: refs/heads/trunk
Commit: b2dfaed3170c8b5b96a7ea8e7df6129490ead3be
Parents: 00a8b1e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 12 19:37:34 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 12 19:37:34 2014 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/service/MigrationManager.java | 20 
 2 files changed, 9 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dfaed3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0674dde..de7c307 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,6 +5,7 @@
  * Fix mean cells and mean row size per sstable calculations (CASSANDRA-6667)
  * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
  * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
+ * Don't exchange schema between nodes with different versions (CASSANDRA-6695)
 
 
 1.2.15

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dfaed3/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java b/src/java/org/apache/cassandra/service/MigrationManager.java
index 5a02e3b..68d0bad 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -134,13 +134,11 @@ public class MigrationManager
     private static boolean shouldPullSchemaFrom(InetAddress endpoint)
     {
         /*
-         * Don't request schema from nodes with versions younger than 1.1.7 (timestamps in versions prior to 1.1.7 are broken)
-         * Don't request schema from nodes with a higher major (may have incompatible schema)
+         * Don't request schema from nodes with a different or unknown major version (may have incompatible schema)
          * Don't request schema from fat clients
          */
         return MessagingService.instance().knowsVersion(endpoint)
-               && MessagingService.instance().getVersion(endpoint) >= MessagingService.VERSION_117
-               && MessagingService.instance().getVersion(endpoint) <= MessagingService.current_version
+               && MessagingService.instance().getVersion(endpoint) == MessagingService.current_version
                && !Gossiper.instance.isFatClient(endpoint);
     }
 
@@ -291,15 +289,13 @@ public class MigrationManager
 
         for (InetAddress endpoint : Gossiper.instance.getLiveMembers())
         {
-            if (endpoint.equals(FBUtilities.getBroadcastAddress()))
-                continue; // we've dealt with localhost already
-
-            // don't send schema to the nodes with the versions older than current major
-            if (MessagingService.instance().getVersion(endpoint) < MessagingService.current_version)
-                continue;
-
-            pushSchemaMutation(endpoint, schema);
+            // only push schema to nodes with known and equal versions
+            if (!endpoint.equals(FBUtilities.getBroadcastAddress())
+                && MessagingService.instance().knowsVersion(endpoint)
+                && MessagingService.instance().getVersion(endpoint) == MessagingService.current_version)
+                pushSchemaMutation(endpoint, schema);
         }
+
         return f;
     }
 



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-02-12 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/service/MigrationManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/babc2de3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/babc2de3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/babc2de3

Branch: refs/heads/trunk
Commit: babc2de3e58e41b0dc2b9534c4514adfdd54be37
Parents: 80cebec b2dfaed
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Feb 12 19:40:46 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Feb 12 19:40:46 2014 +0300

--
 CHANGES.txt   |  1 +
 .../cassandra/service/MigrationManager.java   | 18 --
 2 files changed, 9 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/babc2de3/CHANGES.txt
--
diff --cc CHANGES.txt
index f9b2032,de7c307..a4dc8fd
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -17,33 -5,24 +17,34 @@@ Merged from 1.2
   * Fix mean cells and mean row size per sstable calculations (CASSANDRA-6667)
   * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
   * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
+  * Don't exchange schema between nodes with different versions 
(CASSANDRA-6695)
  
 -
 -1.2.15
 - * Move handling of migration event source to solve bootstrap race 
(CASSANDRA-6648)
 - * Make sure compaction throughput value doesn't overflow with int math 
(CASSANDRA-6647)
 -
 -
 -1.2.14
 - * Reverted code to limit CQL prepared statement cache by size 
(CASSANDRA-6592)
 - * add cassandra.default_messaging_version property to allow easier
 -   upgrading from 1.1 (CASSANDRA-6619)
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a hint (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 + * Reduce garbage generated by bloom filter lookups (CASSANDRA-6609)
 + * Add ks.cf names to tombstone logging (CASSANDRA-6597)
 + * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
 + * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
 + * Delete unfinished compaction incrementally (CASSANDRA-6086)
 + * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
 + * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
 + * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 + * Don't special case received counts in CAS timeout exceptions 
(CASSANDRA-6595)
 + * Add support for 2.1 global counter shards (CASSANDRA-6505)
 + * Fix NPE when streaming connection is not yet established (CASSANDRA-6210)
 + * Avoid rare duplicate read repair triggering (CASSANDRA-6606)
 + * Fix paging discardFirst (CASSANDRA-6555)
 + * Fix ArrayIndexOutOfBoundsException in 2ndary index query (CASSANDRA-6470)
 + * Release sstables upon rebuilding 2i (CASSANDRA-6635)
 + * Add AbstractCompactionStrategy.startup() method (CASSANDRA-6637)
 + * SSTableScanner may skip rows during cleanup (CASSANDRA-6638)
 + * sstables from stalled repair sessions can resurrect deleted data 
(CASSANDRA-6503)
 + * Switch stress to use ITransportFactory (CASSANDRA-6641)
 + * Fix IllegalArgumentException during prepare (CASSANDRA-6592)
 + * Fix possible loss of 2ndary index entries during compaction 
(CASSANDRA-6517)
 + * Fix direct Memory on architectures that do not support unaligned long 
access
 +   (CASSANDRA-6628)
 + * Let scrub optionally skip broken counter partitions (CASSANDRA-5930)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
   * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
   * Add ability to throttle batchlog replay (CASSANDRA-6550)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/babc2de3/src/java/org/apache/cassandra/service/MigrationManager.java
--



[jira] [Commented] (CASSANDRA-6691) Improvements and FIxes to Stress

2014-02-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899274#comment-13899274
 ] 

Jonathan Ellis commented on CASSANDRA-6691:
---

Can you review [~xedin]?

 Improvements and FIxes to Stress
 

 Key: CASSANDRA-6691
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6691
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1


 There were a couple of minor issues with the new stress:
 1) The warmup period did not scale up as the cluster size increased
 2) The mixed workload did not work with CQL
 At the same time, I have introduced a change in behaviour in the way the 
 default column values are generated so that they are deterministically based 
 on the key. I have then modified read operations to verify that the data they 
 fetch is the same as should have been inserted, so that stress does some 
 degree of data quality checking at the same time. For the moment the values 
 generated never vary for a given key, so this does nothing to test 
 consistency; it only tests for corruption.
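The deterministic-value scheme described above can be sketched as follows. This is an illustrative model, not stress's actual code; the class and method names (`KeyedValues`, `valueFor`, `verify`) are hypothetical:

```java
import java.util.Arrays;
import java.util.Random;

public class KeyedValues {
    // Derive a column value deterministically from the key by seeding a PRNG
    // with it: the same key always yields the same bytes.
    static byte[] valueFor(long key, int size) {
        Random rng = new Random(key);
        byte[] value = new byte[size];
        rng.nextBytes(value);
        return value;
    }

    // On read, regenerate the expected value and compare. A mismatch indicates
    // corruption -- not inconsistency, since values never vary for a key.
    static boolean verify(long key, byte[] fetched) {
        return Arrays.equals(fetched, valueFor(key, fetched.length));
    }

    public static void main(String[] args) {
        byte[] stored = valueFor(42L, 16);
        if (!verify(42L, stored)) throw new AssertionError("deterministic value mismatch");
        stored[0] ^= 0x1; // flip one bit to simulate on-disk corruption
        if (verify(42L, stored)) throw new AssertionError("corruption not detected");
        System.out.println("ok");
    }
}
```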



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6691) Improvements and FIxes to Stress

2014-02-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6691:
--

Reviewer: Pavel Yaskevich

 Improvements and FIxes to Stress
 

 Key: CASSANDRA-6691
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6691
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1


 There were a couple of minor issues with the new stress:
 1) The warmup period did not scale up as the cluster size increased
 2) The mixed workload did not work with CQL
 At the same time, I have introduced a change in behaviour in the way the 
 default column values are generated so that they are deterministically based 
 on the key. I have then modified read operations to verify that the data they 
 fetch is the same as should have been inserted, so that stress does some 
 degree of data quality checking at the same time. For the moment the values 
 generated never vary for a given key, so this does nothing to test 
 consistency; it only tests for corruption.





[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-02-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899275#comment-13899275
 ] 

Jonathan Ellis commented on CASSANDRA-5483:
---

Can you review, Lyuben?

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Attachments: test-5483-system_traces-events.txt, 
 tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt


 I think it would be nice to log repair stats and results the way query 
 tracing stores traces in the system keyspace. With it, you don't have to look 
 through each log file to see the status of a repair you invoked and how it 
 performed. Instead, you can query the repair log by session ID to see the 
 state and stats of all nodes involved in that repair session.





[jira] [Commented] (CASSANDRA-6683) BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch

2014-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899277#comment-13899277
 ] 

Tyler Hobbs commented on CASSANDRA-6683:


[~brandon.williams] was our motivation for not calling 
{{sortByProximityWithScore}} every time just the overhead of that operation? It 
seems like it shouldn't have a large impact unless the RF is high.  If we want 
to handle the high-RF case more efficiently, perhaps we could add a parameter 
that specifies how many of the replicas will be used (based on the consistency 
level) and just move the N lowest scores to the front if the first N scores 
aren't within BADNESS_THRESHOLD.
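The "move the N lowest scores to the front" suggestion could look roughly like the selection-style sketch below. This is a hypothetical illustration, not the snitch's actual code; the class and helper names are invented:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PartialResort {
    // Instead of fully sorting all replicas by score, move only the N
    // lowest-scored (closest) endpoints to the front, where N is how many
    // replicas the consistency level will actually use.
    static <E> void moveLowestToFront(List<E> endpoints, Map<E, Double> scores, int n) {
        for (int i = 0; i < n && i < endpoints.size(); i++) {
            int best = i;
            for (int j = i + 1; j < endpoints.size(); j++)
                if (scores.getOrDefault(endpoints.get(j), 0.0)
                        < scores.getOrDefault(endpoints.get(best), 0.0))
                    best = j;
            // swap the best remaining endpoint into position i
            E tmp = endpoints.get(i);
            endpoints.set(i, endpoints.get(best));
            endpoints.set(best, tmp);
        }
    }

    public static void main(String[] args) {
        List<String> eps = new ArrayList<>(List.of("a", "b", "c", "d"));
        Map<String, Double> scores = Map.of("a", 0.9, "b", 0.1, "c", 0.5, "d", 0.2);
        moveLowestToFront(eps, scores, 2); // only the first 2 positions matter
        if (!eps.get(0).equals("b") || !eps.get(1).equals("d"))
            throw new AssertionError(eps.toString());
        System.out.println(eps);
    }
}
```

For N much smaller than the replica count this does O(N * R) work instead of a full O(R log R) sort, which is the efficiency angle mentioned above.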

 BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch
 ---

 Key: CASSANDRA-6683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6683
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux 3.8.0-33-generic
Reporter: Kirill Bogdanov
  Labels: snitch
 Fix For: 2.0.6


 There is a problem in *DynamicEndpointSnitch.java* in 
 sortByProximityWithBadness()
 Before calling sortByProximityWithScore we compare each node's score ratio to 
 the badness threshold.
 {code}
 if ((first - next) / first > BADNESS_THRESHOLD)
 {
 sortByProximityWithScore(address, addresses);
 return;
 }
 {code}
 This is not always the correct comparison, because the *first* score can be 
 less than the *next* score, in which case we compare a negative number 
 against a positive threshold.
 The solution is to compute absolute value of the ratio:
 {code}
 if (Math.abs((first - next) / first) > BADNESS_THRESHOLD)
 {code}
 This issue causes incorrect sorting of DCs based on their performance and 
 hurts the effectiveness of the snitch.
 Thanks.
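A minimal sketch of the two comparisons, showing how a negative ratio slips past the threshold while the absolute-value form catches it (the class and method names are illustrative, not the snitch's actual code):

```java
public class BadnessCheck {
    static final double BADNESS_THRESHOLD = 0.1;

    // Comparison as currently written: a negative ratio never exceeds the
    // (positive) threshold, so the score-based re-sort is skipped.
    static boolean resortBuggy(double first, double next) {
        return (first - next) / first > BADNESS_THRESHOLD;
    }

    // Proposed fix: compare the magnitude of the relative gap instead.
    static boolean resortFixed(double first, double next) {
        return Math.abs((first - next) / first) > BADNESS_THRESHOLD;
    }

    public static void main(String[] args) {
        // first's score (1.0) is far from next's (5.0): the ratio is -4.0,
        // so the buggy check stays quiet while the fixed one fires.
        if (resortBuggy(1.0, 5.0)) throw new AssertionError();
        if (!resortFixed(1.0, 5.0)) throw new AssertionError();
        System.out.println("ok");
    }
}
```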
  





[jira] [Updated] (CASSANDRA-5483) Repair tracing

2014-02-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5483:
--

Reviewer: Lyuben Todorov  (was: Yuki Morishita)

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Attachments: test-5483-system_traces-events.txt, 
 tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt


 I think it would be nice to log repair stats and results the way query 
 tracing stores traces in the system keyspace. With it, you don't have to look 
 through each log file to see the status of a repair you invoked and how it 
 performed. Instead, you can query the repair log by session ID to see the 
 state and stats of all nodes involved in that repair session.





[jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-02-12 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899297#comment-13899297
 ] 

sankalp kohli commented on CASSANDRA-6696:
--

With this, the whole disk_failure_policy stuff is broken. If you blacklist a 
drive, you can potentially bring data back to life. 

One of the fixes for this is a JIRA of mine from a while back: CASSANDRA-4784.
If we divide each drive by token ranges, then we are sure that the data, along 
with its tombstone, will get blacklisted together. 
Example: say a node handles ranges 1-10 and 11-20. We can have drive A handle 
1-10 and drive B handle 11-20. 
Though this might have problems with load balancing. 
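The range-per-drive idea could be sketched as below; this is a toy model with hypothetical names, not the CASSANDRA-4784 implementation:

```java
import java.util.List;

public class RangeToDrive {
    // Pin each token range a node owns to a fixed drive, so that a row and
    // its tombstone for the same token always live on the same disk and are
    // therefore lost (blacklisted) together.
    static int driveFor(long token, List<long[]> ownedRanges) {
        for (int i = 0; i < ownedRanges.size(); i++) {
            long[] r = ownedRanges.get(i);
            if (token >= r[0] && token <= r[1])
                return i; // range i is pinned to drive i
        }
        throw new IllegalArgumentException("token outside owned ranges: " + token);
    }

    public static void main(String[] args) {
        // drive 0 (A) handles 1-10, drive 1 (B) handles 11-20
        List<long[]> ranges = List.of(new long[]{1, 10}, new long[]{11, 20});
        if (driveFor(5, ranges) != 0) throw new AssertionError();
        if (driveFor(15, ranges) != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The load-balancing concern is visible here too: a hot token range pins all of its load to a single disk.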

 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Bug
Reporter: sankalp kohli
Priority: Minor

 In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
 empty one and repair is run. 
 This can cause deleted data to come back in some cases. The same is true for 
 corrupt sstables, where we delete the corrupt sstable and run repair. 
 Here is an example:
 Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
 row=sankalp col=sankalp is written 20 days back and successfully went to all 
 three nodes. 
 Then a delete/tombstone was written successfully for the same row column 15 
 days back. 
 Since this tombstone is older than gc grace, it was purged on nodes A and B 
 when it was compacted with the actual data, so there is no trace of this row 
 column on nodes A and B.
 Now in node C, say the original data is in drive1 and tombstone is in drive2. 
 Compaction has not yet reclaimed the data and tombstone.  
 Drive2 becomes corrupt and was replaced with new empty drive. 
 Due to the replacement, the tombstone is now gone, and row=sankalp 
 col=sankalp has come back to life. 
 Now after replacing the drive we run repair. This data will be propagated to 
 all nodes. 
 Note: This is still a problem even if we run repair every gc grace. 
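The resurrection scenario above can be simulated with a toy two-drive model (all names are hypothetical; this is not Cassandra's storage code):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class ResurrectionDemo {
    // Each drive is a key -> cell map, where a cell is either a live value or
    // a tombstone.
    record Cell(String value, boolean tombstone) {}

    // A read merges all drives; any tombstone shadows the live value.
    static Optional<String> read(String key, List<Map<String, Cell>> drives) {
        String live = null;
        boolean deleted = false;
        for (Map<String, Cell> d : drives) {
            Cell c = d.get(key);
            if (c == null) continue;
            if (c.tombstone()) deleted = true;
            else live = c.value();
        }
        return deleted ? Optional.empty() : Optional.ofNullable(live);
    }

    public static void main(String[] args) {
        Map<String, Cell> drive1 = new HashMap<>(), drive2 = new HashMap<>();
        drive1.put("sankalp", new Cell("sankalp", false)); // original data on drive1
        drive2.put("sankalp", new Cell(null, true));       // later tombstone on drive2
        List<Map<String, Cell>> drives = List.of(drive1, drive2);
        if (read("sankalp", drives).isPresent()) throw new AssertionError();
        drive2.clear(); // drive2 replaced with an empty disk
        // The tombstone is gone: the deleted row is readable again, and a
        // subsequent repair would propagate it back to the other replicas.
        if (read("sankalp", drives).isEmpty()) throw new AssertionError();
        System.out.println("resurrected");
    }
}
```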
  





git commit: Fix stress

2014-02-12 Thread slebresne
Updated Branches:
  refs/heads/trunk f5b3515ee -> 0d0acac6c


Fix stress


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d0acac6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d0acac6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d0acac6

Branch: refs/heads/trunk
Commit: 0d0acac6c59d3fa703a3d504f9cfd063e4d111b7
Parents: f5b3515
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 12 18:27:32 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 12 18:27:32 2014 +0100

--
 ...-2.0.0-rc2-SNAPSHOT-jar-with-dependencies.jar | Bin 5869229 -> 0 bytes
 .../cassandra-driver-core-2.0.0-rc2-SNAPSHOT.jar | Bin 490145 -> 0 bytes
 .../cassandra-driver-core-2.0.0-rc3-SNAPSHOT.jar | Bin 0 -> 515357 bytes
 .../cassandra/stress/util/JavaDriverClient.java  |   9 +
 4 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d0acac6/tools/lib/cassandra-driver-core-2.0.0-rc2-SNAPSHOT-jar-with-dependencies.jar
--
diff --git 
a/tools/lib/cassandra-driver-core-2.0.0-rc2-SNAPSHOT-jar-with-dependencies.jar 
b/tools/lib/cassandra-driver-core-2.0.0-rc2-SNAPSHOT-jar-with-dependencies.jar
deleted file mode 100644
index 1f4dafd..000
Binary files 
a/tools/lib/cassandra-driver-core-2.0.0-rc2-SNAPSHOT-jar-with-dependencies.jar 
and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d0acac6/tools/lib/cassandra-driver-core-2.0.0-rc2-SNAPSHOT.jar
--
diff --git a/tools/lib/cassandra-driver-core-2.0.0-rc2-SNAPSHOT.jar 
b/tools/lib/cassandra-driver-core-2.0.0-rc2-SNAPSHOT.jar
deleted file mode 100644
index c0d4242..000
Binary files a/tools/lib/cassandra-driver-core-2.0.0-rc2-SNAPSHOT.jar and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d0acac6/tools/lib/cassandra-driver-core-2.0.0-rc3-SNAPSHOT.jar
--
diff --git a/tools/lib/cassandra-driver-core-2.0.0-rc3-SNAPSHOT.jar 
b/tools/lib/cassandra-driver-core-2.0.0-rc3-SNAPSHOT.jar
new file mode 100644
index 000..54a175f
Binary files /dev/null and 
b/tools/lib/cassandra-driver-core-2.0.0-rc3-SNAPSHOT.jar differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d0acac6/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java 
b/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
index cf37040..7bde900 100644
--- a/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
+++ b/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
@@ -62,7 +62,9 @@ public class JavaDriverClient
 public void connect(ProtocolOptions.Compression compression) throws 
Exception
 {
 Cluster.Builder clusterBuilder = Cluster.builder()
-.addContactPoint(host).withPort(port);
+.addContactPoint(host)
+.withPort(port)
+.withoutMetrics(); // The 
driver uses metrics 3 with conflict with our version
 clusterBuilder.withCompression(compression);
 if (encryptionOptions.enabled)
 {
@@ -142,7 +144,6 @@ public class JavaDriverClient
 
 public void disconnect()
 {
-FBUtilities.waitOnFuture(cluster.shutdown());
+cluster.close();
 }
-
-}
\ No newline at end of file
+}



[jira] [Commented] (CASSANDRA-6631) cassandra-stress failing in trunk

2014-02-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899309#comment-13899309
 ] 

Sylvain Lebresne commented on CASSANDRA-6631:
-

Committed [a 
fix|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=0d0acac6c59d3fa703a3d504f9cfd063e4d111b7]
 for that (with just the driver jar, not the whole jar-with-dependencies as we 
don't really need that). It works on my box but if someone else can confirm 
that, we'll just close this issue. 

 cassandra-stress failing in trunk
 -

 Key: CASSANDRA-6631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6631
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Debian Stable Wheezy
 Oracle JDK 1.7.0_51-b13
Reporter: Michael Shuler
 Fix For: 2.1


 Stress is failing in trunk.
 - ant clean jar
 - ./bin/cassandra -f
 - ./tools/bin/cassandra-stress write
 {noformat}
 (trunk)mshuler@hana:~/git/cassandra$ ./tools/bin/cassandra-stress write
 Created keyspaces. Sleeping 1s for propagation.
 Warming up WRITE with 5 iterations...
 Exception in thread Thread-0 java.lang.RuntimeException: 
 java.lang.IllegalArgumentException: replicate_on_write is not a column 
 defined in this metadata
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:142)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getSmartThriftClient(StressSettings.java:49)
 at 
 org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:273)
 Caused by: java.lang.IllegalArgumentException: replicate_on_write is not a 
 column defined in this metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
 at com.datastax.driver.core.Row.getBool(Row.java:117)
 at 
 com.datastax.driver.core.TableMetadata$Options.init(TableMetadata.java:474)
 at 
 com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
 at 
 com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:128)
 at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:89)
 at 
 com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:259)
 at 
 com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:214)
 at 
 com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:161)
 at 
 com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
 at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:890)
 at 
 com.datastax.driver.core.Cluster$Manager.access$100(Cluster.java:806)
 at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:217)
 at 
 org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:75)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:135)
 ... 2 more
 Exception in thread Thread-19 java.lang.RuntimeException: 
 java.lang.IllegalArgumentException: replicate_on_write is not a column 
 defined in this metadata
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:142)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getSmartThriftClient(StressSettings.java:49)
 at 
 org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:273)
 Caused by: java.lang.IllegalArgumentException: replicate_on_write is not a 
 column defined in this metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
 at com.datastax.driver.core.Row.getBool(Row.java:117)
 at 
 com.datastax.driver.core.TableMetadata$Options.init(TableMetadata.java:474)
 at 
 com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
 at 
 com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:128)
 at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:89)
 at 
 com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:259)
 at 
 com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:214)
 at 
 com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:161)
 at 
 com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
 at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:890)
 at 
 

[jira] [Commented] (CASSANDRA-6631) cassandra-stress failing in trunk

2014-02-12 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899314#comment-13899314
 ] 

Michael Shuler commented on CASSANDRA-6631:
---

+1
Thanks a bunch!

 cassandra-stress failing in trunk
 -

 Key: CASSANDRA-6631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6631
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Debian Stable Wheezy
 Oracle JDK 1.7.0_51-b13
Reporter: Michael Shuler
 Fix For: 2.1


 Stress is failing in trunk.
 - ant clean jar
 - ./bin/cassandra -f
 - ./tools/bin/cassandra-stress write
 {noformat}
 (trunk)mshuler@hana:~/git/cassandra$ ./tools/bin/cassandra-stress write
 Created keyspaces. Sleeping 1s for propagation.
 Warming up WRITE with 5 iterations...
 Exception in thread Thread-0 java.lang.RuntimeException: 
 java.lang.IllegalArgumentException: replicate_on_write is not a column 
 defined in this metadata
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:142)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getSmartThriftClient(StressSettings.java:49)
 at 
 org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:273)
 Caused by: java.lang.IllegalArgumentException: replicate_on_write is not a 
 column defined in this metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
 at com.datastax.driver.core.Row.getBool(Row.java:117)
 at 
 com.datastax.driver.core.TableMetadata$Options.init(TableMetadata.java:474)
 at 
 com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
 at 
 com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:128)
 at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:89)
 at 
 com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:259)
 at 
 com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:214)
 at 
 com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:161)
 at 
 com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
 at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:890)
 at 
 com.datastax.driver.core.Cluster$Manager.access$100(Cluster.java:806)
 at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:217)
 at 
 org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:75)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:135)
 ... 2 more
 Exception in thread Thread-19 java.lang.RuntimeException: 
 java.lang.IllegalArgumentException: replicate_on_write is not a column 
 defined in this metadata
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:142)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getSmartThriftClient(StressSettings.java:49)
 at 
 org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:273)
 Caused by: java.lang.IllegalArgumentException: replicate_on_write is not a 
 column defined in this metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
 at com.datastax.driver.core.Row.getBool(Row.java:117)
 at 
 com.datastax.driver.core.TableMetadata$Options.init(TableMetadata.java:474)
 at 
 com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
 at 
 com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:128)
 at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:89)
 at 
 com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:259)
 at 
 com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:214)
 at 
 com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:161)
 at 
 com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
 at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:890)
 at 
 com.datastax.driver.core.Cluster$Manager.access$100(Cluster.java:806)
 at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:217)
 at 
 org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:75)
 at 
 

[jira] [Commented] (CASSANDRA-6631) cassandra-stress failing in trunk

2014-02-12 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899323#comment-13899323
 ] 

Michael Shuler commented on CASSANDRA-6631:
---

hmm.. write worked, but read throws an error:
{noformat}
mshuler@hana:~$ cassandra-stress write
Created keyspaces. Sleeping 1s for propagation.
Warming up WRITE with 5 iterations...
Connected to cluster: Test Cluster
Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
Sleeping 2s...
Running with 4 threadCount
Running WRITE with 4 threads until stderr of mean  0.02
ops   ,op/s,adj op/s,   key/s,mean, med, .95, .99,
.999, max,   time,   stderr
26745 ,   26744,   26744,   26744, 0.1, 0.1, 0.3, 0.5, 
4.5,20.3,1.0,  0.0
57881 ,   31076,   33453,   31076, 0.1, 0.1, 0.2, 0.3, 
1.0,71.4,2.0,  0.0
91605 ,   33636,   36535,   33636, 0.1, 0.1, 0.2, 0.2, 
0.3,79.7,3.0,  0.07880
124152,   32496,   36416,   32496, 0.1, 0.1, 0.2, 0.2, 
0.9,   108.0,4.0,  0.07319
156085,   31879,   35501,   31879, 0.1, 0.1, 0.2, 0.2, 
1.1,   102.3,5.0,  0.05969
188464,   32319,   35774,   32319, 0.1, 0.1, 0.2, 0.2, 
0.7,96.9,6.0,  0.04857
220734,   32204,   35836,   32204, 0.1, 0.1, 0.2, 0.2, 
0.5,   101.7,7.0,  0.04110
256342,   36149,   36149,   36149, 0.1, 0.1, 0.2, 0.2, 
0.5, 2.4,8.0,  0.03562
292091,   32251,   35674,   32251, 0.1, 0.1, 0.2, 0.2, 
1.1,   106.5,9.1,  0.03157
324068,   31898,   35569,   31898, 0.1, 0.1, 0.2, 0.2, 
1.1,   103.7,   10.1,  0.02817
356260,   32107,   35942,   32107, 0.1, 0.1, 0.2, 0.2, 
1.2,   107.1,   11.1,  0.02541
387422,   31078,   34824,   31078, 0.1, 0.1, 0.2, 0.2, 
2.1,   108.0,   12.1,  0.02321
418991,   31480,   35631,   31480, 0.1, 0.1, 0.2, 0.2, 
1.1,   117.0,   13.1,  0.02128
450483,   31379,   35289,   31379, 0.1, 0.1, 0.2, 0.2, 
0.8,   111.5,   14.1,  0.01967
483436,   32627,   32810,   32627, 0.1, 0.1, 0.2, 0.4, 
1.1, 6.3,   15.1,  0.01827
514382,   30655,   34541,   30655, 0.1, 0.1, 0.2, 0.2, 
1.0,   113.7,   16.1,  0.01757
538549,   24004,   27020,   24004, 0.2, 0.1, 0.3, 0.5, 
6.6,   112.6,   17.1,  0.01649
562287,   23517,   25586,   23517, 0.2, 0.1, 0.3, 0.7, 
2.8,81.8,   18.2,  0.02035
591465,   28864,   30857,   28864, 0.1, 0.1, 0.2, 0.3, 
0.4,65.5,   19.2,  0.02398
617315,   25577,   27332,   25577, 0.2, 0.1, 0.2, 0.3, 
1.6,65.1,   20.2,  0.02327
646406,   28872,   30783,   28872, 0.1, 0.1, 0.2, 0.3, 
1.1,62.7,   21.2,  0.02417
672760,   26076,   27894,   26076, 0.1, 0.1, 0.2, 0.3, 
1.8,66.1,   22.2,  0.02339
698728,   25745,   25745,   25745, 0.2, 0.1, 0.2, 0.3, 
1.2,70.0,   23.2,  0.02362
732390,   33250,   33250,   33250, 0.1, 0.1, 0.2, 0.3, 
1.2,11.4,   24.2,  0.02469
764852,   32198,   35910,   32198, 0.1, 0.1, 0.2, 0.2, 
1.4,   104.4,   25.2,  0.02365
796679,   31590,   35180,   31590, 0.1, 0.1, 0.2, 0.2, 
2.8,   102.9,   26.2,  0.02294
828361,   31457,   35103,   31457, 0.1, 0.1, 0.2, 0.2, 
1.0,   104.8,   27.2,  0.02216
860567,   31942,   35573,   31942, 0.1, 0.1, 0.2, 0.2, 
0.4,   103.1,   28.2,  0.02142
891884,   31051,   34680,   31051, 0.1, 0.1, 0.2, 0.2, 
2.3,   105.8,   29.3,  0.02077
927193,   35447,   35548,   35447, 0.1, 0.1, 0.2, 0.2, 
0.5, 3.1,   30.3,  0.02009
927550,3017,   29011,3017, 1.3, 0.1, 0.2,   106.0,   
106.3,   106.3,   30.4,  0.01952


Results:
real op rate  : 30542
adjusted op rate  : 30649
adjusted op rate stderr   : 0
key rate  : 30542
latency mean  : 0.1
latency median: 0.1
latency 95th percentile   : 0.2
latency 99th percentile   : 0.3
latency 99.9th percentile : 1.2
latency max   : 117.0
Total operation time  : 00:00:30
Sleeping for 15s
^C
mshuler@hana:~$ cassandra-stress read
Warming up READ with 5 iterations...
Connected to cluster: Test Cluster
Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
java.io.IOException: Operation [283] retried 10 times - error executing for key 
0F1E45 

at org.apache.cassandra.stress.Operation.error(Operation.java:189)
at 

[jira] [Commented] (CASSANDRA-6663) Connecting to a Raspberry PI Cassandra Cluster crashes the node being connected to

2014-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899326#comment-13899326
 ] 

Tyler Hobbs commented on CASSANDRA-6663:


Alright, the python driver now avoids sending an OPTIONS message if it's not 
strictly needed, so that should be a usable workaround.

 Connecting to a Raspberry PI Cassandra Cluster crashes the node being 
 connected to
 --

 Key: CASSANDRA-6663
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6663
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers (now out of tree)
 Environment: 4x node Raspberry PI cluster
 Macbook running Idle 2.7
Reporter: ian mccrae
 Attachments: Python Client Log.txt, hs_err_pid6327.log


 I have a working 4x node Raspberry Pi cluster and
 # DevCenter happily connects to this (...which has an option to turn Snappy 
 compression off)
 # ...however the Python Driver fails to connect and crashes the node being 
 connected to with the errors in the error-log below.
 There appears to be a problem with Snappy compression (not supported on the 
 Raspberry Pi).  So I also tried compression = None with the same result.
 How might I fix this?
 *Python Code*
 {noformat}
  from cassandra.cluster import Cluster
  cluster = Cluster(['192.168.200.151'], compression = None)
  session = cluster.connect()
 {noformat}
 *Error Log*
 {noformat}
 Traceback (most recent call last):
   File pyshell#58, line 1, in module
 session = cluster.connect()
   File 
 /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cassandra/cluster.py,
  line 471, in connect
 self.control_connection.connect()
   File 
 /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cassandra/cluster.py,
  line 1351, in connect
 self._set_new_connection(self._reconnect_internal())
   File 
 /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cassandra/cluster.py,
  line 1386, in _reconnect_internal
 raise NoHostAvailable(Unable to connect to any servers, errors)
 NoHostAvailable: ('Unable to connect to any servers', {'192.168.200.151': 
 ConnectionShutdown('Connection to 192.168.200.151 is closed',))
 {noformat}
 *A Dump of the cluster class attributes*
 {noformat}
  pprint(vars(cluster))
 {'_core_connections_per_host': {0: 2, 1: 1},
  '_is_setup': True,
  '_is_shutdown': True,
  '_listener_lock': thread.lock object at 0x10616d230,
  '_listeners': set([]),
  '_lock': _RLock owner=None count=0,
  '_max_connections_per_host': {0: 8, 1: 2},
  '_max_requests_per_connection': {0: 100, 1: 100},
  '_min_requests_per_connection': {0: 5, 1: 5},
  '_prepared_statements': WeakValueDictionary at 4396942904,
  'compression': None,
  'contact_points': ['192.168.200.151'],
  'control_connection': cassandra.cluster.ControlConnection object at 
 0x106168cd0,
  'control_connection_timeout': 2.0,
  'cql_version': None,
  'executor': concurrent.futures.thread.ThreadPoolExecutor object at 
 0x106148410,
  'load_balancing_policy': cassandra.policies.RoundRobinPolicy object at 
 0x104adae50,
  'max_schema_agreement_wait': 10,
  'metadata': cassandra.metadata.Metadata object at 0x1061481d0,
  'metrics_enabled': False,
  'port': 9042,
  'scheduler': cassandra.cluster._Scheduler object at 0x106148550,
  'sessions': _weakrefset.WeakSet object at 0x106148750,
  'sockopts': None,
  'ssl_options': None}
 
 {noformat}





[jira] [Commented] (CASSANDRA-4867) Add verbose option to cqlsh when using file input

2014-02-12 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899335#comment-13899335
 ] 

Cyril Scetbon commented on CASSANDRA-4867:
--

Is there any information about this?

 Add verbose option to cqlsh when using file input
 -

 Key: CASSANDRA-4867
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4867
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Patrick McFadin
Assignee: Aleksey Yeschenko
Priority: Minor

 Add a verbose option (-v) for output when using the -f option for an external 
 CQL file. Only error output is created now.  





[jira] [Resolved] (CASSANDRA-6663) Connecting to a Raspberry PI Cassandra Cluster crashes the node being connected to

2014-02-12 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6663.
-

Resolution: Invalid

 Connecting to a Raspberry PI Cassandra Cluster crashes the node being 
 connected to
 --

 Key: CASSANDRA-6663
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6663
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers (now out of tree)
 Environment: 4x node Raspberry PI cluster
 Macbook running Idle 2.7
Reporter: ian mccrae
 Attachments: Python Client Log.txt, hs_err_pid6327.log


 I have a working 4x node Raspberry Pi cluster and
 # DevCenter happily connects to this (...which has an option to turn Snappy 
 compression off)
 # ...however the Python Driver fails to connect and crashes the node being 
 connected to with the errors in the error-log below.
 There appears to be a problem with Snappy compression (not supported on the 
 Raspberry Pi).  So I also tried compression = None with the same result.
 How might I fix this?
 *Python Code*
 {noformat}
  from cassandra.cluster import Cluster
  cluster = Cluster(['192.168.200.151'], compression = None)
  session = cluster.connect()
 {noformat}
 *Error Log*
 {noformat}
 Traceback (most recent call last):
   File "<pyshell#58>", line 1, in <module>
     session = cluster.connect()
   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cassandra/cluster.py", line 471, in connect
     self.control_connection.connect()
   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cassandra/cluster.py", line 1351, in connect
     self._set_new_connection(self._reconnect_internal())
   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cassandra/cluster.py", line 1386, in _reconnect_internal
     raise NoHostAvailable("Unable to connect to any servers", errors)
 NoHostAvailable: ('Unable to connect to any servers', {'192.168.200.151': 
 ConnectionShutdown('Connection to 192.168.200.151 is closed',))
 {noformat}
 *A Dump of the cluster class attributes*
 {noformat}
 >>> pprint(vars(cluster))
 {'_core_connections_per_host': {0: 2, 1: 1},
  '_is_setup': True,
  '_is_shutdown': True,
  '_listener_lock': <thread.lock object at 0x10616d230>,
  '_listeners': set([]),
  '_lock': <_RLock owner=None count=0>,
  '_max_connections_per_host': {0: 8, 1: 2},
  '_max_requests_per_connection': {0: 100, 1: 100},
  '_min_requests_per_connection': {0: 5, 1: 5},
  '_prepared_statements': <WeakValueDictionary at 4396942904>,
  'compression': None,
  'contact_points': ['192.168.200.151'],
  'control_connection': <cassandra.cluster.ControlConnection object at 0x106168cd0>,
  'control_connection_timeout': 2.0,
  'cql_version': None,
  'executor': <concurrent.futures.thread.ThreadPoolExecutor object at 0x106148410>,
  'load_balancing_policy': <cassandra.policies.RoundRobinPolicy object at 0x104adae50>,
  'max_schema_agreement_wait': 10,
  'metadata': <cassandra.metadata.Metadata object at 0x1061481d0>,
  'metrics_enabled': False,
  'port': 9042,
  'scheduler': <cassandra.cluster._Scheduler object at 0x106148550>,
  'sessions': <_weakrefset.WeakSet object at 0x106148750>,
  'sockopts': None,
  'ssl_options': None}
 
 {noformat}





[jira] [Commented] (CASSANDRA-6631) cassandra-stress failing in trunk

2014-02-12 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899336#comment-13899336
 ] 

Michael Shuler commented on CASSANDRA-6631:
---

I'm running again until completion - my fault.

 cassandra-stress failing in trunk
 -

 Key: CASSANDRA-6631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6631
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Debian Stable Wheezy
 Oracle JDK 1.7.0_51-b13
Reporter: Michael Shuler
 Fix For: 2.1


 Stress is failing in trunk.
 - ant clean jar
 - ./bin/cassandra -f
 - ./tools/bin/cassandra-stress write
 {noformat}
 (trunk)mshuler@hana:~/git/cassandra$ ./tools/bin/cassandra-stress write
 Created keyspaces. Sleeping 1s for propagation.
 Warming up WRITE with 5 iterations...
 Exception in thread "Thread-0" java.lang.RuntimeException: 
 java.lang.IllegalArgumentException: replicate_on_write is not a column 
 defined in this metadata
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:142)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getSmartThriftClient(StressSettings.java:49)
 at 
 org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:273)
 Caused by: java.lang.IllegalArgumentException: replicate_on_write is not a 
 column defined in this metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
 at com.datastax.driver.core.Row.getBool(Row.java:117)
 at 
 com.datastax.driver.core.TableMetadata$Options.<init>(TableMetadata.java:474)
 at 
 com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
 at 
 com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:128)
 at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:89)
 at 
 com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:259)
 at 
 com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:214)
 at 
 com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:161)
 at 
 com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
 at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:890)
 at 
 com.datastax.driver.core.Cluster$Manager.access$100(Cluster.java:806)
 at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:217)
 at 
 org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:75)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:135)
 ... 2 more
 Exception in thread "Thread-19" java.lang.RuntimeException: 
 java.lang.IllegalArgumentException: replicate_on_write is not a column 
 defined in this metadata
 at 
 org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:142)
 at 
 org.apache.cassandra.stress.settings.StressSettings.getSmartThriftClient(StressSettings.java:49)
 at 
 org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:273)
 Caused by: java.lang.IllegalArgumentException: replicate_on_write is not a 
 column defined in this metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
 at com.datastax.driver.core.Row.getBool(Row.java:117)
 at 
 com.datastax.driver.core.TableMetadata$Options.<init>(TableMetadata.java:474)
 at 
 com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
 at 
 com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:128)
 at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:89)
 at 
 com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:259)
 at 
 com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:214)
 at 
 com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:161)
 at 
 com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
 at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:890)
 at 
 com.datastax.driver.core.Cluster$Manager.access$100(Cluster.java:806)
 at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:217)
 at 
 org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:75)
 at 
 

[jira] [Commented] (CASSANDRA-6631) cassandra-stress failing in trunk

2014-02-12 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899343#comment-13899343
 ] 

Michael Shuler commented on CASSANDRA-6631:
---

+2  ;)
Working fine with a little patience.


[jira] [Resolved] (CASSANDRA-6631) cassandra-stress failing in trunk

2014-02-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6631.
-

Resolution: Fixed

Let's close that then


[jira] [Commented] (CASSANDRA-6683) BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch

2014-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899375#comment-13899375
 ] 

Tyler Hobbs commented on CASSANDRA-6683:


bq. I'm not sure what you mean exactly, we always end up calling it, we just 
don't check badness when it's set to zero.

I was a little confused on my last comment, but let me try again. Right now we 
only call {{sortByProximityWithScore()}} if {{BADNESS_THRESHOLD != 0}} and two 
neighbors in the list returned by the subsnitch differ by BADNESS_THRESHOLD.  I 
think it would make more sense (and fix Kirill's case) to always call 
{{sortByProximityWithScore()}} and then compare that ordering against the 
subsnitch list.  Something like this:

{noformat}
defaultOrder = subsnitch.sort(address, addresses);
scoredOrder = sortByProximityWithScore(address, addresses);  // make this 
return a new list instead of sorting in place
for (int i = 0; i < defaultOrder.size(); i++)
{
    if (scores.get(defaultOrder.get(i)) > scores.get(scoredOrder.get(i)) * (1 + BADNESS_THRESHOLD))
return scoredOrder;
}
return defaultOrder;
{noformat}

bq. Possible, but it'd be a lot of work, because it would change the snitch 
interface and we'd still need the old call because not all uses of it have a 
consistency level available.

It looks like there aren't too many callers, so it shouldn't be that much work. 
 I would just make the arg optional and default it to the length of 
{{addresses}}.
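
A minimal Python sketch of the always-compare approach proposed above (hypothetical scores and orderings; not the actual DynamicEndpointSnitch API):

```python
# Sketch: prefer the subsnitch's default ordering, but fall back to the
# score-based ordering if, at any position, the default pick is worse than
# the score-sorted pick by more than BADNESS_THRESHOLD.
BADNESS_THRESHOLD = 0.1

def choose_order(default_order, scored_order, scores):
    for d, s in zip(default_order, scored_order):
        if scores[d] > scores[s] * (1 + BADNESS_THRESHOLD):
            return scored_order
    return default_order

scores = {"a": 1.0, "b": 5.0, "c": 1.1}      # lower score = better
default = ["b", "a", "c"]                     # subsnitch (topology) order
scored = sorted(default, key=scores.get)      # score-based order
print(choose_order(default, scored, scores))  # -> ['a', 'c', 'b']
```

With these made-up scores the subsnitch puts a badly scoring node first, so the score-based ordering wins; when the two orderings score within the threshold of each other, the default is kept.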

 BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch
 ---

 Key: CASSANDRA-6683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6683
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux 3.8.0-33-generic
Reporter: Kirill Bogdanov
  Labels: snitch
 Fix For: 2.0.6


 There is a problem in *DynamicEndpointSnitch.java* in 
 sortByProximityWithBadness()
 Before calling sortByProximityWithScore, we compare each node's score ratio 
 to the badness threshold.
 {code}
 if ((first - next) / first > BADNESS_THRESHOLD)
 {
 sortByProximityWithScore(address, addresses);
 return;
 }
 {code}
 This is not always the correct comparison, because the *first* score can be less 
 than the *next* score, and in that case we compare a negative number with a 
 positive threshold.
 The solution is to compute absolute value of the ratio:
 {code}
 if (Math.abs((first - next) / first) > BADNESS_THRESHOLD)
 {code}
 This issue causes incorrect sorting of DCs based on their performance and 
 affects the performance of the snitch.
 Thanks.
  
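
The sign issue described above can be checked numerically (illustrative scores only): when *first* is lower than *next*, the ratio is negative and can never exceed a positive threshold, so the original check and the proposed abs() check disagree:

```python
# Illustrative check of the comparison described above (made-up scores).
BADNESS_THRESHOLD = 0.1

first, nxt = 1.0, 5.0              # "first" scores much lower than "next"
ratio = (first - nxt) / first      # -4.0: negative

print(ratio > BADNESS_THRESHOLD)        # original comparison -> False
print(abs(ratio) > BADNESS_THRESHOLD)   # with Math.abs       -> True
```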





[jira] [Commented] (CASSANDRA-6683) BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch

2014-02-12 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899388#comment-13899388
 ] 

Brandon Williams commented on CASSANDRA-6683:
-

I see what you meant.  Yeah, I think it was done the way it is as an 
optimization, though as you said it's probably not a huge one.

 BADNESS_THRESHOLD does not working correctly with DynamicEndpointSnitch
 ---

 Key: CASSANDRA-6683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6683
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux 3.8.0-33-generic
Reporter: Kirill Bogdanov
  Labels: snitch
 Fix For: 2.0.6







[jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-02-12 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899398#comment-13899398
 ] 

Benedict commented on CASSANDRA-6696:
-

One possibility here is that we could split the bloom filter and metadata onto a 
separate disk from their data files, so that if/when a disk fails we have the 
option of scrubbing any records on the remaining disks that we think were 
present on the lost disk in a file with min_timestamp more than gc_grace_seconds ago.

Once we've done the scrub (in fact it could probably be done instantly by 
just setting up some filter for compaction + reads until we're fully repaired 
and have compacted the old data) we can start serving reads again, and can 
start a repair from the other nodes to receive data for all of the records 
we're now missing (either through the missing disk or that we're forcefully 
trashing).

 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Bug
Reporter: sankalp kohli
Priority: Minor

 In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
 empty one and repair is run. 
 This can cause deleted data to come back in some cases. Also this is true for 
 corrupt sstables, in which case we delete the corrupt sstable and run repair. 
 Here is an example:
 Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
 row=sankalp col=sankalp is written 20 days back and successfully went to all 
 three nodes. 
 Then a delete/tombstone was written successfully for the same row column 15 
 days back. 
 Since this tombstone is more than gc grace, it got compacted in Nodes A and B 
 since it got compacted with the actual data. So there is no trace of this row 
 column in node A and B.
 Now in node C, say the original data is in drive1 and tombstone is in drive2. 
 Compaction has not yet reclaimed the data and tombstone.  
 Drive2 becomes corrupt and was replaced with new empty drive. 
 Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
 has come back to life. 
 Now after replacing the drive we run repair. This data will be propagated to 
 all nodes. 
 Note: This is still a problem even if we run repair every gc grace. 
  
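
The resurrection sequence above can be traced with a toy model (purely illustrative Python, not Cassandra code): a tombstone on one drive shadows data on another, and replacing the tombstone's drive revives the cell.

```python
# Toy model of node C in the scenario above: data on drive1, tombstone on
# drive2; replacing drive2 with an empty disk resurrects the deleted cell.
drive1 = {("sankalp", "sankalp"): "value"}   # data written 20 days back
drive2 = {("sankalp", "sankalp")}            # tombstone written 15 days back

def read(cell):
    if cell in drive2:                # tombstone shadows the data
        return None
    return drive1.get(cell)

print(read(("sankalp", "sankalp")))   # -> None (correctly deleted)
drive2 = set()                        # drive2 replaced with a new empty drive
print(read(("sankalp", "sankalp")))   # -> value (deleted data reappears)
```

A subsequent repair would then stream the resurrected value back to the other replicas, which is exactly the failure mode the description outlines.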





[jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-02-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899425#comment-13899425
 ] 

Jonathan Ellis commented on CASSANDRA-6696:
---

bq. the whole disk_failure_policy stuff is broken

I would say rather, disk_failure_policy works brilliantly so that if you're 
using tombstones you can set it to stop the server and rebuild it. :)

bq. If we divide each drive with ranges, then we are sure that the data along 
with the tombstone will get blacklisted.

That will probably work well enough as long as vnode count > disk count.  
Would have the added benefit of reducing fragmentation for STCS.

Less than zero interest in trying to add sub-vnode regions though.

bq. One possibility here is that we could split bloom filter and metadata onto 
a separate disk to their data files

Not really a fan; complicates moving data around significantly without 
generalizing well beyond a single disk failure.  Even for single disk failures 
it bifurcates the recovery process: if you lose data then you scrub/repair; 
if you lose metadata you rebuild it from data.

 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Bug
Reporter: sankalp kohli
Priority: Minor






[jira] [Updated] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-02-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6696:
--

Issue Type: Improvement  (was: Bug)

(Classifying this an Improvement since while the behavior is not optimal in 
this scenario, it's working as designed.)

 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Priority: Minor
 Fix For: 3.0







[jira] [Updated] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-02-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6696:
--

  Component/s: Core
 Priority: Major  (was: Minor)
Fix Version/s: 3.0

 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
 Fix For: 3.0







[jira] [Updated] (CASSANDRA-5962) Support trigger parametrization

2014-02-12 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5962:
-

Labels: (╯°□°)╯︵┻━┻ cql3 triggers  (was: cql3 triggers)

 Support trigger parametrization
 ---

 Key: CASSANDRA-5962
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5962
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Priority: Minor
  Labels: (╯°□°)╯︵┻━┻, cql3, triggers

 We don't have a convenient way to parametrize triggers, which limits their 
 reusability and usability in general. For any configuration you have to rely 
 on external config files.
 We already have a [trigger_options map<text, text>] column in 
 system.schema_triggers; all we need is to add the right syntax to CQL3 
 (CREATE TRIGGER foo ON bar USING class WITH options = {..}) and modify 
 ITrigger to support it.
 Setting fixver to 2.1, but might move to 2.0.x later.





[jira] [Commented] (CASSANDRA-4911) Lift limitation that order by columns must be selected for IN queries

2014-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899508#comment-13899508
 ] 

Tyler Hobbs commented on CASSANDRA-4911:


Thanks for adding the dtest.

If I change the third-to-last test from

{noformat}
assert_all(cursor, "SELECT v FROM test WHERE k=0 AND c1 = 0 AND c2 IN (2, 0) 
ORDER BY c1 ASC", [[2], [0]])
{noformat}

to

{noformat}
assert_all(cursor, "SELECT v FROM test WHERE k=0 AND c1 = 0 AND c2 IN (2, 0) 
ORDER BY c1 DESC", [[2], [0]])
{noformat}

then the query returns no results.

 Lift limitation that order by columns must be selected for IN queries
 -

 Key: CASSANDRA-4911
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4911
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.2.0 beta 1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1

 Attachments: 4911-v2.txt, 4911.txt


 This is the followup of CASSANDRA-4645. We should remove the limitation that, 
 for IN queries, the columns used in ORDER BY must also appear in the select 
 clause.
 For that, we'll need to automatically add the ORDER BY columns to the ones 
 queried internally, and remove them from the resultSet afterwards (once the 
 sorting is done).
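
The described approach can be sketched as a small post-processing step (a simplified model, not Cassandra's code; the row layout and column index are illustrative):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: internally select the ORDER BY column even when the user didn't,
// sort on it, then strip it from the rows handed back to the client.
public class OrderByStrip
{
    public static List<List<Object>> sortThenStrip(List<List<Object>> rows,
                                                   int orderByIdx,
                                                   boolean descending)
    {
        Comparator<List<Object>> cmp =
            Comparator.comparingInt((List<Object> r) -> (Integer) r.get(orderByIdx));
        if (descending)
            cmp = cmp.reversed();
        return rows.stream()
                   .sorted(cmp)
                   .map(r -> {
                       List<Object> copy = new ArrayList<>(r);
                       copy.remove(orderByIdx);   // drop the internally-added column
                       return copy;
                   })
                   .collect(Collectors.toList());
    }
}
```

For the test above, the ordering column (c1 or c2) would be fetched internally alongside v, used for the ASC/DESC sort, and removed before the `[[2], [0]]` result is compared.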



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-02-12 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899519#comment-13899519
 ] 

Benedict commented on CASSANDRA-6696:
-

bq. if you lose data then you scrub/repair; if you lose metadata you rebuild 
it from data.

You'd always have to do both with any single disk failure. I agree it isn't 
optimal, but it is cost-free to maintain. Simply writing out the metadata 
redundantly would make it a more uniform process, tolerant of more than 
one failure, but at increased cost; at which point you might as well 
redundantly write out tombstones - either as a bloom filter or an extra 
sstable. The latter could be complicated to maintain cheaply and safely, though.
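
The "tombstones as a bloom filter" alternative mentioned above could look roughly like the following minimal sketch. The class name, sizing, and hashing scheme are invented for illustration; nothing here is from the patch or Cassandra's own filter implementation.

```java
import java.util.BitSet;

// A minimal Bloom filter of deleted keys: small enough to duplicate across
// disks far more cheaply than the tombstones themselves.
public class TombstoneFilter
{
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public TombstoneFilter(int size, int hashes)
    {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    public void markDeleted(byte[] key)
    {
        for (int i = 0; i < hashes; i++)
            bits.set(index(key, i));
    }

    // May return a false positive, never a false negative -- acceptable here,
    // since a hit would only trigger a recheck or repair of that key.
    public boolean possiblyDeleted(byte[] key)
    {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(key, i)))
                return false;
        return true;
    }

    private int index(byte[] key, int seed)
    {
        int h = 0x811c9dc5 ^ seed;              // seeded FNV-1a style hash
        for (byte b : key)
            h = (h ^ b) * 0x01000193;
        return Math.floorMod(h, size);
    }
}
```

The no-false-negative property is what makes it usable as a safety net: losing the filter loses nothing, and a surviving filter can flag resurrected keys.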


 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
 Fix For: 3.0


 In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
 empty one and repair is run. 
 This can cause deleted data to come back in some cases. This is also true for 
 corrupt sstables, where we delete the corrupt sstable and run repair. 
 Here is an example:
 Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
 row=sankalp col=sankalp is written 20 days back and successfully went to all 
 three nodes. 
 Then a delete/tombstone was written successfully for the same row column 15 
 days back. 
 Since this tombstone is older than gc grace, it was compacted away along with 
 the actual data on nodes A and B. So there is no trace of this row 
 column in nodes A and B.
 Now in node C, say the original data is in drive1 and tombstone is in drive2. 
 Compaction has not yet reclaimed the data and tombstone.  
 Drive2 became corrupt and was replaced with a new, empty drive. 
 Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
 has come back to life. 
 Now after replacing the drive we run repair. This data will be propagated to 
 all nodes. 
 Note: This is still a problem even if we run repair every gc grace. 
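
The timeline above can be replayed as a toy model (not Cassandra code; node names, the cell key, and the set-based "replicas" are purely illustrative):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy replay: value and tombstone on all three replicas; both purged together
// on A and B once the tombstone passes gc_grace; tombstone alone lost on C with
// the replaced drive -- after which the bare value survives on C, and repair
// would spread it back to every node.
public class JbodResurrection
{
    public static boolean replay()
    {
        String cell = "sankalp:sankalp";
        Map<String, Set<String>> live = new HashMap<>();
        Map<String, Set<String>> tombstones = new HashMap<>();
        for (String node : List.of("A", "B", "C"))
        {
            live.put(node, new HashSet<>(Set.of(cell)));        // write, 20 days ago
            tombstones.put(node, new HashSet<>(Set.of(cell)));  // delete, 15 days ago
        }
        // Compaction on A and B: a tombstone older than gc_grace removes the
        // shadowed cell and is itself discarded.
        for (String node : List.of("A", "B"))
        {
            live.get(node).remove(cell);
            tombstones.get(node).remove(cell);
        }
        // On C, compaction never ran; the tombstone sat on the drive that died.
        tombstones.get("C").remove(cell);
        // Nothing shadows the cell on C any more, so repair would propagate it.
        return live.get("C").contains(cell) && !tombstones.get("C").contains(cell);
    }
}
```

The key point the note makes: running repair every gc_grace does not help, because the resurrection window opens the moment the tombstone's drive is lost, regardless of the repair schedule.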
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.

2014-02-12 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899519#comment-13899519
 ] 

Benedict edited comment on CASSANDRA-6696 at 2/12/14 8:04 PM:
--

bq. if you lose data then you scrub/repair; if you lose metadata you rebuild 
it from data.

You'd always have to do both with any single disk failure. I agree it isn't 
optimal, but it is cost-free to maintain, so it is essentially just an 
optimisation plus an automated process to downgrade the node in the event of 
failure without having to rebuild it manually. 

Simply writing out the metadata redundantly would make it a more uniform 
process, tolerant of more than one failure, but at increased cost; at which 
point you might as well redundantly write out tombstones - either as a bloom 
filter or an extra sstable. The latter could be complicated to maintain cheaply 
and safely, though. For multiple disk failures I'd say that, if you have 
configured auto-downgrading, the node should just trash everything it has and 
(optionally) repair.



was (Author: benedict):
bq. if you lose data then you scrub/repair; if you lose metadata you rebuild 
it from data.

You'd always have to do both with any single disk failure. But I agree it isn't 
optimal; but it is cost-free to maintain. Simply redundantly writing out the 
metadata would change it to a more uniform process, and tolerant to more than 
one failure, but at increased cost; at which point you might as well 
redundantly write out tombstones - either as a bloom filter or an extra 
sstable. The latter could be complicated to maintain cheaply and safely though.


 Drive replacement in JBOD can cause data to reappear. 
 --

 Key: CASSANDRA-6696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
 Fix For: 3.0


 In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
 empty one and repair is run. 
 This can cause deleted data to come back in some cases. This is also true for 
 corrupt sstables, where we delete the corrupt sstable and run repair. 
 Here is an example:
 Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
 row=sankalp col=sankalp is written 20 days back and successfully went to all 
 three nodes. 
 Then a delete/tombstone was written successfully for the same row column 15 
 days back. 
 Since this tombstone is older than gc grace, it was compacted away along with 
 the actual data on nodes A and B. So there is no trace of this row 
 column in nodes A and B.
 Now in node C, say the original data is in drive1 and tombstone is in drive2. 
 Compaction has not yet reclaimed the data and tombstone.  
 Drive2 became corrupt and was replaced with a new, empty drive. 
 Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
 has come back to life. 
 Now after replacing the drive we run repair. This data will be propagated to 
 all nodes. 
 Note: This is still a problem even if we run repair every gc grace. 
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6697) Refactor Cell and CellName ByteBuffer accessors to avoid garbage allocation where possible

2014-02-12 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899559#comment-13899559
 ] 

Benedict commented on CASSANDRA-6697:
-

I've uploaded a preliminary patch 
[here|https://github.com/belliottsmith/cassandra/tree/iss-6697] that makes the 
change for Composite / CellName only.

It would be useful to get some feedback before I take the change to completion.

 Refactor Cell and CellName ByteBuffer accessors to avoid garbage allocation 
 where possible
 --

 Key: CASSANDRA-6697
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6697
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
 Fix For: 2.1


 This is a prerequisite for CASSANDRA-6692.
 The basic idea is, if Unsafe is available, to abuse it to modify preallocated 
 ByteBuffers so that short-lived ones do not need to be 
 instantiated. Initially this will only be helpful for comparisons and lookups 
 on the BBs, but with some modifications to the read path we should be able to 
 reduce the need in CASSANDRA-6692 to construct BBs to pass to the native 
 protocol (thrift may have to continue as is)
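
The general technique can be sketched as follows. This is an illustration only, not the attached patch: it reuses one preallocated heap ByteBuffer as a mutable "flyweight" by rewriting its internal fields with Unsafe, so pointing it at a new byte range allocates nothing. The field names (hb, offset, capacity, limit, position, address) are JDK-internal and version-dependent.

```java
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import sun.misc.Unsafe;

// Sketch of an Unsafe-mutated flyweight buffer for short-lived comparisons.
public class FlyweightBuffer
{
    private static final Unsafe UNSAFE;
    private static final long HB, OFFSET, CAPACITY, LIMIT, POSITION, ADDRESS;

    static
    {
        try
        {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe) f.get(null);
            HB = UNSAFE.objectFieldOffset(ByteBuffer.class.getDeclaredField("hb"));
            OFFSET = UNSAFE.objectFieldOffset(ByteBuffer.class.getDeclaredField("offset"));
            Class<?> buffer = Class.forName("java.nio.Buffer");
            CAPACITY = UNSAFE.objectFieldOffset(buffer.getDeclaredField("capacity"));
            LIMIT = UNSAFE.objectFieldOffset(buffer.getDeclaredField("limit"));
            POSITION = UNSAFE.objectFieldOffset(buffer.getDeclaredField("position"));
            ADDRESS = UNSAFE.objectFieldOffset(buffer.getDeclaredField("address"));
        }
        catch (Exception e)
        {
            throw new AssertionError(e);
        }
    }

    // Re-point an existing heap buffer at array[off, off + len) in place,
    // instead of allocating a new buffer with ByteBuffer.wrap(array, off, len).
    public static void init(ByteBuffer flyweight, byte[] array, int off, int len)
    {
        UNSAFE.putObject(flyweight, HB, array);
        UNSAFE.putInt(flyweight, OFFSET, off);
        UNSAFE.putInt(flyweight, CAPACITY, len);
        UNSAFE.putInt(flyweight, LIMIT, len);
        UNSAFE.putInt(flyweight, POSITION, 0);
        // Newer JDKs also address heap buffers through this base offset.
        UNSAFE.putLong(flyweight, ADDRESS, UNSAFE.arrayBaseOffset(byte[].class) + off);
    }
}
```

The linked branch should be treated as the authoritative version; this sketch only shows why comparisons and lookups are the easy first beneficiaries, since they never let the flyweight escape.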



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6697) Refactor Cell and CellName ByteBuffer accessors to avoid garbage allocation where possible

2014-02-12 Thread Benedict (JIRA)
Benedict created CASSANDRA-6697:
---

 Summary: Refactor Cell and CellName ByteBuffer accessors to avoid 
garbage allocation where possible
 Key: CASSANDRA-6697
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6697
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
 Fix For: 2.1


This is a prerequisite for CASSANDRA-6692.

The basic idea is, if Unsafe is available, to abuse it to modify preallocated 
ByteBuffers so that short-lived ones do not need to be 
instantiated. Initially this will only be helpful for comparisons and lookups 
on the BBs, but with some modifications to the read path we should be able to 
reduce the need in CASSANDRA-6692 to construct BBs to pass to the native 
protocol (thrift may have to continue as is)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[1/3] git commit: log at info when gossip is fine or people think it's still waiting

2014-02-12 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 babc2de3e -> cafaa8eea
  refs/heads/trunk 0d0acac6c -> 1ede2967b


log at info when gossip is fine or people think it's still waiting


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cafaa8ee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cafaa8ee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cafaa8ee

Branch: refs/heads/cassandra-2.0
Commit: cafaa8eeadd8adae348234773040ef97fe712609
Parents: babc2de
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 07:44:46 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:36:04 2014 -0600

--
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cafaa8ee/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index d87f6d8..5f3c8cc 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -542,7 +542,7 @@ public class CassandraDaemon
 if (totalPolls > GOSSIP_SETTLE_POLL_SUCCESSES_REQUIRED)
 logger.info("Gossip settled after {} extra polls; proceeding", 
totalPolls - GOSSIP_SETTLE_POLL_SUCCESSES_REQUIRED);
 else
-logger.debug("Gossip settled after {} extra polls; proceeding", 
totalPolls - GOSSIP_SETTLE_POLL_SUCCESSES_REQUIRED);
+logger.info("No gossip backlog; proceeding");
 }
 
 public static void stop(String[] args)
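
The loop this logging change sits in can be modeled as a consecutive-success poll (a simplified sketch of waitForGossipToSettle; the real method also sleeps between polls and honors the cassandra.skip_wait_for_gossip_to_settle property):

```java
import java.util.function.IntSupplier;

// Keep polling the pending-gossip-task count and only proceed after enough
// consecutive empty polls; any backlog resets the streak.
public class GossipSettle
{
    public static int pollUntilSettled(IntSupplier pendingTasks, int requiredSuccesses)
    {
        int totalPolls = 0;
        int consecutiveOk = 0;
        while (consecutiveOk < requiredSuccesses)
        {
            totalPolls++;
            if (pendingTasks.getAsInt() == 0)
                consecutiveOk++;      // no backlog on this poll
            else
                consecutiveOk = 0;    // backlog appeared; start the streak over
        }
        // totalPolls > requiredSuccesses is the "settled after extra polls" case
        // the patch now logs at info in both branches.
        return totalPolls;
    }
}
```

With no backlog at all, the loop exits after exactly requiredSuccesses polls, which is the branch whose message the commit changes to "No gossip backlog; proceeding".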



[2/3] git commit: log at info when gossip is fine or people think it's still waiting

2014-02-12 Thread jbellis
log at info when gossip is fine or people think it's still waiting


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cafaa8ee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cafaa8ee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cafaa8ee

Branch: refs/heads/trunk
Commit: cafaa8eeadd8adae348234773040ef97fe712609
Parents: babc2de
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 07:44:46 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:36:04 2014 -0600

--
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cafaa8ee/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index d87f6d8..5f3c8cc 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -542,7 +542,7 @@ public class CassandraDaemon
 if (totalPolls > GOSSIP_SETTLE_POLL_SUCCESSES_REQUIRED)
 logger.info("Gossip settled after {} extra polls; proceeding", 
totalPolls - GOSSIP_SETTLE_POLL_SUCCESSES_REQUIRED);
 else
-logger.debug("Gossip settled after {} extra polls; proceeding", 
totalPolls - GOSSIP_SETTLE_POLL_SUCCESSES_REQUIRED);
+logger.info("No gossip backlog; proceeding");
 }
 
 public static void stop(String[] args)



[1/3] git commit: SS logging 'now serving reads' is misleading

2014-02-12 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 cafaa8eea -> efec07e11
  refs/heads/trunk 1ede2967b -> f78db5307


SS logging 'now serving reads' is misleading


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/efec07e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/efec07e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/efec07e1

Branch: refs/heads/cassandra-2.0
Commit: efec07e1134ea176ea5820e725be6603eb250333
Parents: cafaa8e
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:38:35 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:38:35 2014 -0600

--
 src/java/org/apache/cassandra/service/StorageService.java | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/efec07e1/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a7d02d3..c323a19 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -855,7 +855,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 if (!current.isEmpty())
 for (InetAddress existing : current)
 Gossiper.instance.replacedEndpoint(existing);
-logger.info("Startup completed! Now serving reads.");
 assert tokenMetadata.sortedTokens().size() > 0;
 
 Auth.setup();



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-02-12 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f78db530
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f78db530
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f78db530

Branch: refs/heads/trunk
Commit: f78db5307335337566cd2b2d43aec042f88b8153
Parents: 1ede296 efec07e
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:38:40 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:38:40 2014 -0600

--
 src/java/org/apache/cassandra/service/StorageService.java | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f78db530/src/java/org/apache/cassandra/service/StorageService.java
--



[2/3] git commit: SS logging 'now serving reads' is misleading

2014-02-12 Thread jbellis
SS logging 'now serving reads' is misleading


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/efec07e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/efec07e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/efec07e1

Branch: refs/heads/trunk
Commit: efec07e1134ea176ea5820e725be6603eb250333
Parents: cafaa8e
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:38:35 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:38:35 2014 -0600

--
 src/java/org/apache/cassandra/service/StorageService.java | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/efec07e1/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a7d02d3..c323a19 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -855,7 +855,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 if (!current.isEmpty())
 for (InetAddress existing : current)
 Gossiper.instance.replacedEndpoint(existing);
-logger.info("Startup completed! Now serving reads.");
 assert tokenMetadata.sortedTokens().size() > 0;
 
 Auth.setup();



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-02-12 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d4461f83
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d4461f83
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d4461f83

Branch: refs/heads/trunk
Commit: d4461f832e9d3a343dce76a9da8b35f538f493e4
Parents: f78db53 c6c686f
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:48:25 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:48:25 2014 -0600

--
 .../org/apache/cassandra/service/CassandraDaemon.java   |  5 +++--
 src/java/org/apache/cassandra/utils/FBUtilities.java| 12 
 2 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4461f83/src/java/org/apache/cassandra/service/CassandraDaemon.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4461f83/src/java/org/apache/cassandra/utils/FBUtilities.java
--



[2/3] git commit: only waitForGossip if we're configured for a multinode cluster

2014-02-12 Thread jbellis
only waitForGossip if we're configured for a multinode cluster


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c6c686f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c6c686f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c6c686f4

Branch: refs/heads/trunk
Commit: c6c686f4138e6646bad233e89a630be0aada08ae
Parents: efec07e
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:48:14 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:48:14 2014 -0600

--
 .../org/apache/cassandra/service/CassandraDaemon.java   |  5 +++--
 src/java/org/apache/cassandra/utils/FBUtilities.java| 12 
 2 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6c686f4/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index 5f3c8cc..23bf3e5 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -60,6 +60,7 @@ import org.apache.cassandra.metrics.StorageMetrics;
 import org.apache.cassandra.thrift.ThriftServer;
 import org.apache.cassandra.tracing.Tracing;
 import org.apache.cassandra.utils.CLibrary;
+import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Mx4jTool;
 import org.apache.cassandra.utils.Pair;
 
@@ -372,7 +373,8 @@ public class CassandraDaemon
 }
 }
 
-waitForGossipToSettle();
+if (!FBUtilities.getBroadcastAddress().equals(FBUtilities.getLoopback()))
+waitForGossipToSettle();
 
 // Thift
 InetAddress rpcAddr = DatabaseDescriptor.getRpcAddress();
@@ -498,7 +500,6 @@ public class CassandraDaemon
 destroy();
 }
 
-
 private void waitForGossipToSettle()
 {
 int forceAfter = 
Integer.getInteger("cassandra.skip_wait_for_gossip_to_settle", -1);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6c686f4/src/java/org/apache/cassandra/utils/FBUtilities.java
--
diff --git a/src/java/org/apache/cassandra/utils/FBUtilities.java 
b/src/java/org/apache/cassandra/utils/FBUtilities.java
index 579f5fa..0cacfe2 100644
--- a/src/java/org/apache/cassandra/utils/FBUtilities.java
+++ b/src/java/org/apache/cassandra/utils/FBUtilities.java
@@ -693,4 +693,16 @@ public class FBUtilities
 {
 return OPERATING_SYSTEM.contains("nix") || 
OPERATING_SYSTEM.contains("nux") || OPERATING_SYSTEM.contains("aix");
 }
+
+public static InetAddress getLoopback()
+{
+try
+{
+return InetAddress.getByName(null);
+}
+catch (UnknownHostException e)
+{
+throw new AssertionError(e);
+}
+}
 }



[1/3] git commit: only waitForGossip if we're configured for a multinode cluster

2014-02-12 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 efec07e11 -> c6c686f41
  refs/heads/trunk f78db5307 -> d4461f832


only waitForGossip if we're configured for a multinode cluster


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c6c686f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c6c686f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c6c686f4

Branch: refs/heads/cassandra-2.0
Commit: c6c686f4138e6646bad233e89a630be0aada08ae
Parents: efec07e
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:48:14 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:48:14 2014 -0600

--
 .../org/apache/cassandra/service/CassandraDaemon.java   |  5 +++--
 src/java/org/apache/cassandra/utils/FBUtilities.java| 12 
 2 files changed, 15 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6c686f4/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index 5f3c8cc..23bf3e5 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -60,6 +60,7 @@ import org.apache.cassandra.metrics.StorageMetrics;
 import org.apache.cassandra.thrift.ThriftServer;
 import org.apache.cassandra.tracing.Tracing;
 import org.apache.cassandra.utils.CLibrary;
+import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Mx4jTool;
 import org.apache.cassandra.utils.Pair;
 
@@ -372,7 +373,8 @@ public class CassandraDaemon
 }
 }
 
-waitForGossipToSettle();
+if (!FBUtilities.getBroadcastAddress().equals(FBUtilities.getLoopback()))
+waitForGossipToSettle();
 
 // Thift
 InetAddress rpcAddr = DatabaseDescriptor.getRpcAddress();
@@ -498,7 +500,6 @@ public class CassandraDaemon
 destroy();
 }
 
-
 private void waitForGossipToSettle()
 {
 int forceAfter = 
Integer.getInteger("cassandra.skip_wait_for_gossip_to_settle", -1);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6c686f4/src/java/org/apache/cassandra/utils/FBUtilities.java
--
diff --git a/src/java/org/apache/cassandra/utils/FBUtilities.java 
b/src/java/org/apache/cassandra/utils/FBUtilities.java
index 579f5fa..0cacfe2 100644
--- a/src/java/org/apache/cassandra/utils/FBUtilities.java
+++ b/src/java/org/apache/cassandra/utils/FBUtilities.java
@@ -693,4 +693,16 @@ public class FBUtilities
 {
 return OPERATING_SYSTEM.contains("nix") || 
OPERATING_SYSTEM.contains("nux") || OPERATING_SYSTEM.contains("aix");
 }
+
+public static InetAddress getLoopback()
+{
+try
+{
+return InetAddress.getByName(null);
+}
+catch (UnknownHostException e)
+{
+throw new AssertionError(e);
+}
+}
 }



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-02-12 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/15d60756
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/15d60756
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/15d60756

Branch: refs/heads/trunk
Commit: 15d607568ccf932e28b51544c1ee59cbff32c8eb
Parents: d4461f8 de6a74a
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:50:38 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:50:38 2014 -0600

--
 .../org/apache/cassandra/service/CassandraDaemon.java   |  2 +-
 src/java/org/apache/cassandra/utils/FBUtilities.java| 12 
 2 files changed, 1 insertion(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/15d60756/src/java/org/apache/cassandra/service/CassandraDaemon.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/15d60756/src/java/org/apache/cassandra/utils/FBUtilities.java
--



[2/3] git commit: turns out there's already a getLoopbackAddress

2014-02-12 Thread jbellis
turns out there's already a getLoopbackAddress


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de6a74a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de6a74a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de6a74a2

Branch: refs/heads/trunk
Commit: de6a74a2d5a475cdb1c3e51f65562880ed705378
Parents: c6c686f
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:50:33 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:50:33 2014 -0600

--
 .../org/apache/cassandra/service/CassandraDaemon.java   |  2 +-
 src/java/org/apache/cassandra/utils/FBUtilities.java| 12 
 2 files changed, 1 insertion(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de6a74a2/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index 23bf3e5..b3f7ff3 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -373,7 +373,7 @@ public class CassandraDaemon
 }
 }
 
-if (!FBUtilities.getBroadcastAddress().equals(FBUtilities.getLoopback()))
+if (!FBUtilities.getBroadcastAddress().equals(InetAddress.getLoopbackAddress()))
 waitForGossipToSettle();
 
 // Thift

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de6a74a2/src/java/org/apache/cassandra/utils/FBUtilities.java
--
diff --git a/src/java/org/apache/cassandra/utils/FBUtilities.java 
b/src/java/org/apache/cassandra/utils/FBUtilities.java
index 0cacfe2..579f5fa 100644
--- a/src/java/org/apache/cassandra/utils/FBUtilities.java
+++ b/src/java/org/apache/cassandra/utils/FBUtilities.java
@@ -693,16 +693,4 @@ public class FBUtilities
 {
 return OPERATING_SYSTEM.contains("nix") || 
OPERATING_SYSTEM.contains("nux") || OPERATING_SYSTEM.contains("aix");
 }
-
-public static InetAddress getLoopback()
-{
-try
-{
-return InetAddress.getByName(null);
-}
-catch (UnknownHostException e)
-{
-throw new AssertionError(e);
-}
-}
 }
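
Why the removed helper was redundant: InetAddress.getByName(null) is specified to return the loopback address, which is exactly what the JDK's own InetAddress.getLoopbackAddress() provides. A small check (illustrative class name):

```java
import java.net.InetAddress;

// Both the old helper's lookup and the JDK method resolve to loopback.
public class LoopbackCheck
{
    public static boolean bothLoopback()
    {
        try
        {
            return InetAddress.getByName(null).isLoopbackAddress()
                && InetAddress.getLoopbackAddress().isLoopbackAddress();
        }
        catch (Exception e)
        {
            throw new AssertionError(e);
        }
    }
}
```

getLoopbackAddress() also never throws a checked exception, which is why the try/catch wrapper in the deleted getLoopback() is no longer needed at the call site.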



[1/3] git commit: turns out there's already a getLoopbackAddress

2014-02-12 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 c6c686f41 -> de6a74a2d
  refs/heads/trunk d4461f832 -> 15d607568


turns out there's already a getLoopbackAddress


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de6a74a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de6a74a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de6a74a2

Branch: refs/heads/cassandra-2.0
Commit: de6a74a2d5a475cdb1c3e51f65562880ed705378
Parents: c6c686f
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 12 15:50:33 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 12 15:50:33 2014 -0600

--
 .../org/apache/cassandra/service/CassandraDaemon.java   |  2 +-
 src/java/org/apache/cassandra/utils/FBUtilities.java| 12 
 2 files changed, 1 insertion(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de6a74a2/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index 23bf3e5..b3f7ff3 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -373,7 +373,7 @@ public class CassandraDaemon
 }
 }
 
-if (!FBUtilities.getBroadcastAddress().equals(FBUtilities.getLoopback()))
+if (!FBUtilities.getBroadcastAddress().equals(InetAddress.getLoopbackAddress()))
 waitForGossipToSettle();
 
 // Thift

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de6a74a2/src/java/org/apache/cassandra/utils/FBUtilities.java
--
diff --git a/src/java/org/apache/cassandra/utils/FBUtilities.java 
b/src/java/org/apache/cassandra/utils/FBUtilities.java
index 0cacfe2..579f5fa 100644
--- a/src/java/org/apache/cassandra/utils/FBUtilities.java
+++ b/src/java/org/apache/cassandra/utils/FBUtilities.java
@@ -693,16 +693,4 @@ public class FBUtilities
 {
 return OPERATING_SYSTEM.contains("nix") || 
OPERATING_SYSTEM.contains("nux") || OPERATING_SYSTEM.contains("aix");
 }
-
-public static InetAddress getLoopback()
-{
-try
-{
-return InetAddress.getByName(null);
-}
-catch (UnknownHostException e)
-{
-throw new AssertionError(e);
-}
-}
 }



[jira] [Updated] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Skvazh updated CASSANDRA-6698:


Reproduced In: 2.0.5, 2.0.4  (was: 2.0.5)

 Many too small SSTables when full repair
 

 Key: CASSANDRA-6698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6698
 Project: Cassandra
  Issue Type: Bug
Reporter: Roman Skvazh
Priority: Critical

 We have trouble when Cassandra drops messages because there are too many 
 SSTables (over 10,000 on one column family), many of them small (from 1 KB to 
 200 KB, alongside normal sizes), and many pending compactions (over 700).
 PS. Temp fix: stop repair, disable thrift and gossip, and wait for compactions 
 to finish. Because of this, we cannot run a full repair for about a month :(



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)
Roman Skvazh created CASSANDRA-6698:
---

 Summary: Many too small SSTables when full repair
 Key: CASSANDRA-6698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6698
 Project: Cassandra
  Issue Type: Bug
Reporter: Roman Skvazh
Priority: Critical


We have trouble when Cassandra drops messages because there are too many 
SSTables (over 10,000 on one column family), many of them small (from 1 KB to 
200 KB, alongside normal sizes), and many pending compactions (over 700).

PS. Temp fix: stop repair, disable thrift and gossip, and wait for compactions 
to finish. Because of this, we cannot run a full repair for about a month :(



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899737#comment-13899737
 ] 

Roman Skvazh edited comment on CASSANDRA-6698 at 2/12/14 11:11 PM:
---

Some of the files are listed in the attached file


was (Author: rskvazh):
Some of files



[jira] [Updated] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Skvazh updated CASSANDRA-6698:


Attachment: cassa-many-small-sstables.txt

Some of the files



[jira] [Updated] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Skvazh updated CASSANDRA-6698:


Description: 
We have trouble: Cassandra drops messages because there are too many small 
SSTables (over 10,000 on one column family; from 1 KB to 200 KB, alongside 
normal-sized ones) and many pending compactions (over 700).

We are using Leveled compaction with a 165 MB sstable size.
PS. Temporary fix: stop the repair, disable thrift and gossip, and wait for 
compactions to finish. Because of this, we have not been able to run a full 
repair for about a month :(

  was:
We are have troubles when cassandra drops messages because there is too many 
(over 10,000 on one column family) small (from 1Kb to 200Kb, and normal sizes 
too) and many pending compactions (over 700).

We are using Leveled compaction with 160Mb sstable size.
PS. Temp fix: stop repair, disable thrift,gossip and wait for compactions to be 
finished. Because this, we can not run full repair for about a month :(




[jira] [Updated] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Skvazh updated CASSANDRA-6698:


Description: 
We have trouble: Cassandra drops messages because there are too many small 
SSTables (over 10,000 on one column family; from 1 KB to 200 KB, alongside 
normal-sized ones) and many pending compactions (over 700).

We are using Leveled compaction with a 160 MB sstable size.
PS. Temporary fix: stop the repair, disable thrift and gossip, and wait for 
compactions to finish. Because of this, we have not been able to run a full 
repair for about a month :(

  was:
We are have troubles when cassandra drops messages because there is too many 
(over 10,000 on one column family) small (from 1Kb to 200Kb, and normal sizes 
too) and many pending compactions (over 700).

PS. Temp fix: stop repair, disable thrift,gossip and wait for compactions to be 
finished. Because this, we can not run full repair for about a month :(




[jira] [Updated] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Skvazh updated CASSANDRA-6698:


Description: 
We have trouble: Cassandra drops messages because there are too many small 
SSTables (over 10,000 on one column family; from 1 KB to 200 KB, alongside 
normal-sized ones) and many pending compactions (over 700).

We are using Leveled compaction with a 160 MB sstable size.
PS. Temporary fix: stop the repair, disable thrift and gossip, and wait for 
compactions to finish. Because of this, we have not been able to run a full 
repair for about a month :(

  was:
We are have troubles when cassandra drops messages because there is too many 
(over 10,000 on one column family) small (from 1Kb to 200Kb, and normal sizes 
too) and many pending compactions (over 700).

We are using Leveled compaction with 165Mb sstable size.
PS. Temp fix: stop repair, disable thrift,gossip and wait for compactions to be 
finished. Because this, we can not run full repair for about a month :(




[jira] [Updated] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6698:
--

Reproduced In: 2.0.5, 2.0.4  (was: 2.0.4, 2.0.5)
 Priority: Minor  (was: Critical)
   Issue Type: Improvement  (was: Bug)



[jira] [Updated] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Skvazh updated CASSANDRA-6698:


Description: 
We have trouble: Cassandra drops messages because there are too many small 
SSTables (over 10,000 on one column family; from 1 KB to 200 KB, alongside 
normal-sized ones) and many pending compactions (over 700).

We are using Leveled compaction with a 160 MB sstable size.
PS. Temporary fix: stop the repair, disable thrift and gossip, and wait for 
compactions to finish. Because of this, we have not been able to run a full 
repair for about a month :(

  was:
We are have troubles when cassandra drops messages because there is too many 
(over 10,000 on one column family) small (from 1Kb to 200Kb, and normal sizes 
too) and many pending compactions (over 700).

We are using Leveled compaction with 160Mb sstable size.
PS. Temp fix: stop repair, disable thrift,gossip and wait for compactions to be 
finished. Because this, we can not run full repair for about a month :(




[jira] [Commented] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899747#comment-13899747
 ] 

Jonathan Ellis commented on CASSANDRA-6698:
---

Can you elaborate as to why CASSANDRA-4341 and CASSANDRA-5371 aren't helping?



[jira] [Commented] (CASSANDRA-6698) Many too small SSTables when full repair

2014-02-12 Thread Roman Skvazh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899769#comment-13899769
 ] 

Roman Skvazh commented on CASSANDRA-6698:
-

We cannot use STCS because we have wide rows and heavy insert/delete workloads.



[jira] [Updated] (CASSANDRA-6379) Replace index_interval with min/max_index_interval

2014-02-12 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-6379:
---

Reviewer: Aleksey Yeschenko

 Replace index_interval with min/max_index_interval
 --

 Key: CASSANDRA-6379
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6379
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 2.1


 As a continuation of the work in CASSANDRA-5519, we want to replace the 
 {{index_interval}} attribute of tables with {{min_index_interval}} and 
 {{max_index_interval}}.





git commit: Remove dead ColumnSlice.NavigatableMapIterator

2014-02-12 Thread aleksey
Updated Branches:
  refs/heads/trunk 15d607568 -> f66b9eb27


Remove dead ColumnSlice.NavigatableMapIterator


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f66b9eb2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f66b9eb2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f66b9eb2

Branch: refs/heads/trunk
Commit: f66b9eb27d88e6f438d4e9d4d4bb8885f78c7bf1
Parents: 15d6075
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 13 03:10:27 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 13 03:10:27 2014 +0300

--
 .../apache/cassandra/db/filter/ColumnSlice.java | 49 
 1 file changed, 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f66b9eb2/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ColumnSlice.java 
b/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
index f5ea49a..02a5a0c 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
@@ -138,55 +138,6 @@ public class ColumnSlice
 }
 }
 
-    public static class NavigableMapIterator extends AbstractIterator<Cell>
-    {
-        private final NavigableMap<CellName, Cell> map;
-        private final ColumnSlice[] slices;
-
-        private int idx = 0;
-        private Iterator<Cell> currentSlice;
-
-        public NavigableMapIterator(NavigableMap<CellName, Cell> map, ColumnSlice[] slices)
-        {
-            this.map = map;
-            this.slices = slices;
-        }
-
-        protected Cell computeNext()
-        {
-            if (currentSlice == null)
-            {
-                if (idx >= slices.length)
-                    return endOfData();
-
-                ColumnSlice slice = slices[idx++];
-                // Note: we specialize the case of start == "" and finish == "" because it is slightly more efficient,
-                // but also they have a specific meaning (namely, they always extend to the beginning/end of the range).
-                if (slice.start.isEmpty())
-                {
-                    if (slice.finish.isEmpty())
-                        currentSlice = map.values().iterator();
-                    else
-                        currentSlice = map.headMap(new FakeCellName(slice.finish), true).values().iterator();
-                }
-                else if (slice.finish.isEmpty())
-                {
-                    currentSlice = map.tailMap(new FakeCellName(slice.start), true).values().iterator();
-                }
-                else
-                {
-                    currentSlice = map.subMap(new FakeCellName(slice.start), true, new FakeCellName(slice.finish), true).values().iterator();
-                }
-            }
-
-            if (currentSlice.hasNext())
-                return currentSlice.next();
-
-            currentSlice = null;
-            return computeNext();
-        }
-    }
-
     public static class NavigableSetIterator extends AbstractIterator<Cell>
     {
         private final NavigableSet<Cell> set;



[jira] [Updated] (CASSANDRA-6697) Refactor Cell and CellName ByteBuffer accessors to avoid garbage allocation where possible

2014-02-12 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6697:


Description: 
This is a prerequisite for CASSANDRA-6689.

The basic idea is, if Unsafe is available, to abuse it to modify preallocated 
ByteBuffers so that short-lived buffers do not need to be instantiated. 
Initially this will only help comparisons and lookups on the BBs, but with some 
modifications to the read path we should be able to reduce the need in 
CASSANDRA-6689 to construct BBs to pass to the native protocol (thrift may have 
to continue as is).

  was:
This is a prerequisite for CASSANDRA-6691.

The basic idea is to, if unsafe is available, abuse it to modify preallocated 
ByteBuffers so that when they are short lived they do not need to be 
instantiated. Initially this will only be helpful for comparisons and lookups 
on the BBs, but with some modifications to the read path we should be able to 
reduce the need in CASSANDRA-6691 to construct BBs to pass to the native 
protocol (thrift may have to continue as is)
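The reuse idea above can be illustrated without Unsafe; the following is a toy sketch (all names hypothetical, not from the actual patch): one preallocated view buffer is repositioned for each short-lived access, instead of allocating a new ByteBuffer per lookup.

```java
import java.nio.ByteBuffer;

// Toy illustration: one long-lived buffer backs many values; a single
// duplicated view is repositioned per access, so no new ByteBuffer is
// allocated for each short-lived lookup.
public class ReusableBufferView
{
    private final ByteBuffer view;

    public ReusableBufferView(ByteBuffer backing)
    {
        // duplicate() shares the backing storage; only this view's cursors move
        this.view = backing.duplicate();
    }

    // Point the shared view at [offset, offset + length) without allocating.
    public ByteBuffer viewOf(int offset, int length)
    {
        view.clear();                 // reset cursors before repositioning
        view.position(offset);
        view.limit(offset + length);
        return view;
    }

    public static void main(String[] args)
    {
        ByteBuffer backing = ByteBuffer.allocate(16);
        for (int i = 0; i < 16; i++)
            backing.put(i, (byte) i);

        ReusableBufferView cells = new ReusableBufferView(backing);
        ByteBuffer a = cells.viewOf(0, 4);
        ByteBuffer b = cells.viewOf(4, 4);
        // The same object is reused for both lookups: no per-access garbage.
        System.out.println(a == b);              // true
        System.out.println(b.get(b.position())); // 4
    }
}
```

The caveat, of course, is that the returned view is only valid until the next access, which is exactly the "short-lived" restriction the description calls out.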


 Refactor Cell and CellName ByteBuffer accessors to avoid garbage allocation 
 where possible
 --

 Key: CASSANDRA-6697
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6697
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
 Fix For: 2.1


 This is a prerequisite for CASSANDRA-6689.
 The basic idea is to, if unsafe is available, abuse it to modify preallocated 
 ByteBuffers so that when they are short lived they do not need to be 
 instantiated. Initially this will only be helpful for comparisons and lookups 
 on the BBs, but with some modifications to the read path we should be able to 
 reduce the need in CASSANDRA-6689 to construct BBs to pass to the native 
 protocol (thrift may have to continue as is)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6697) Refactor Cell and CellName ByteBuffer accessors to avoid garbage allocation where possible

2014-02-12 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6697:


Description: 
This is a prerequisite for CASSANDRA-6691.

The basic idea is to, if unsafe is available, abuse it to modify preallocated 
ByteBuffers so that when they are short lived they do not need to be 
instantiated. Initially this will only be helpful for comparisons and lookups 
on the BBs, but with some modifications to the read path we should be able to 
reduce the need in CASSANDRA-6691 to construct BBs to pass to the native 
protocol (thrift may have to continue as is)

  was:
This is a prerequisite for CASSANDRA-6692.

The basic idea is to, if unsafe is available, abuse it to modify preallocated 
ByteBuffers so that when they are short lived they do not need to be 
instantiated. Initially this will only be helpful for comparisons and lookups 
on the BBs, but with some modifications to the read path we should be able to 
reduce the need in CASSANDRA-6692 to construct BBs to pass to the native 
protocol (thrift may have to continue as is)




[jira] [Commented] (CASSANDRA-6697) Refactor Cell and CellName ByteBuffer accessors to avoid garbage allocation where possible

2014-02-12 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899835#comment-13899835
 ] 

Benedict commented on CASSANDRA-6697:
-

Updated the repository with a different approach: static methods define simple 
classes that are passed to the accessor, which determine what the accessor does 
with the data it wants to return. I think it is cleaner, and probably faster 
(it also allows a few more situations to avoid garbage than the alternative 
approach).
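That handler-passing shape can be sketched with a toy example (hypothetical names, simplified to a byte[] backing rather than Cells and ByteBuffers): the caller hands the accessor a constant handler object, so the accessor computes a result over the raw bytes without materializing an intermediate buffer.

```java
public class CellAccessorSketch
{
    // A handler decides what to do with the raw bytes, so the accessor
    // never has to allocate a new buffer just to return data.
    interface ByteHandler<R>
    {
        R handle(byte[] backing, int offset, int length);
    }

    // Handler instances defined once as constants: no per-call garbage.
    static final ByteHandler<Integer> LENGTH =
        (backing, offset, length) -> length;
    static final ByteHandler<Integer> FIRST_BYTE =
        (backing, offset, length) -> (int) backing[offset];

    // The "cell" is a slice [offset, offset + length) of a shared backing array.
    static <R> R access(byte[] backing, int offset, int length, ByteHandler<R> handler)
    {
        return handler.handle(backing, offset, length);
    }

    public static void main(String[] args)
    {
        byte[] backing = { 10, 20, 30, 40, 50 };
        System.out.println(access(backing, 1, 3, LENGTH));     // 3
        System.out.println(access(backing, 1, 3, FIRST_BYTE)); // 20
    }
}
```

The extra situations that avoid garbage come from handlers like LENGTH that never need the bytes copied out at all.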



[jira] [Updated] (CASSANDRA-6379) Replace index_interval with min/max_index_interval

2014-02-12 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-6379:
---

Attachment: 6379-thrift-gen.txt
6379.txt

6379.txt (and [branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-6379]) 
replaces {{index_interval}} with {{min_index_interval}} and 
{{max_index_interval}}.  When migrating, the existing {{index_interval}} value 
is used for {{min_index_interval}}.  I chose 2048 for the default 
{{max_index_interval}}; in practice, this limit should only be hit for 
infrequently-read SSTables when the index summary memory pool is very full.

If you want to test it out, I suggest setting logging to TRACE for 
o.a.c.io.sstable.IndexSummaryManager, setting {{index_summary_capacity_in_mb}} 
to 1, and {{index_summary_resize_interval_in_minutes}} to 1.  After inserting a 
few million rows with stress, the index summary pool will hit capacity and some 
resizing will happen.  The JMX functions on IndexSummaryManager are also handy.
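For intuition, the clamping behaviour described above (grow a cold sstable's sampling interval under memory pressure, but never past max_index_interval) might look roughly like this; the method name and the downsample factor are illustrative, not the patch's actual API.

```java
public class IndexIntervalSketch
{
    // Effective sampling interval for an sstable's index summary: start from
    // the configured minimum, grow it when the summary memory pool is under
    // pressure, but never exceed the configured maximum (2048 is the
    // proposed default for max_index_interval).
    static int effectiveInterval(int minIndexInterval, int maxIndexInterval, double downsampleFactor)
    {
        int interval = (int) Math.ceil(minIndexInterval * downsampleFactor);
        return Math.max(minIndexInterval, Math.min(maxIndexInterval, interval));
    }

    public static void main(String[] args)
    {
        // No memory pressure: sample at min_index_interval.
        System.out.println(effectiveInterval(128, 2048, 1.0));  // 128
        // Heavy pressure on a cold sstable: clamped at max_index_interval.
        System.out.println(effectiveInterval(128, 2048, 32.0)); // 2048
    }
}
```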




[jira] [Comment Edited] (CASSANDRA-6379) Replace index_interval with min/max_index_interval

2014-02-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899839#comment-13899839
 ] 

Tyler Hobbs edited comment on CASSANDRA-6379 at 2/13/14 12:39 AM:
--

6379.txt (and [branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-6379]) 
replaces {{index_interval}} with {{min_index_interval}} and 
{{max_index_interval}}.  When migrating, the existing {{index_interval}} value 
is used for {{min_index_interval}}.  I chose 2048 for the default 
{{max_index_interval}}; in practice, this limit should only be hit for 
infrequently-read SSTables when the index summary memory pool is very full.

If you want to test it out, I suggest setting logging to TRACE for 
o.a.c.io.sstable.IndexSummaryManager, setting {{index_summary_capacity_in_mb}} 
to 1, and {{index_summary_resize_interval_in_minutes}} to 1.  After inserting a 
few million rows with stress, the index summary pool will hit capacity and some 
resizing will happen.  The JMX functions on IndexSummaryManager are also handy.



was (Author: thobbs):
6379.txt (and [branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-6379]) 
replaces {{index_interval}} with {{min_index_interval}} and 
{{max_index_interval}}.  When migrating, the existing {{index_interval}} value 
is used for {{min_index_interval}}.  I chose 2048 for the default 
{{max_index_interval}}; in practice, this limit should only be hit for 
infrequently-read SSTables when the index summary memory pool is very full.

If you want to test it out, I suggest setting logging to TRACE for 
o.a.c.io.sstable.IndexSummary manager, setting {{index_summary_capacity_in_mb}} 
to 1, and {{index_summary_resize_interval_in_minutes}} to 1.  After inserting a 
few million rows with stress, the index summary pool will hit capacity and some 
resizing will happen.  The JMX functions on IndexSummaryManager are also handy.




git commit: Improvements and fixes to cassandra/stress Patch by Benedict; reviewed by Pavel Yaskevich for CASSANDRA-6691

2014-02-12 Thread xedin
Updated Branches:
  refs/heads/trunk f66b9eb27 -> 10b617364


Improvements and fixes to cassandra/stress
Patch by Benedict; reviewed by Pavel Yaskevich for CASSANDRA-6691


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/10b61736
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/10b61736
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/10b61736

Branch: refs/heads/trunk
Commit: 10b617364e9f639358f82c70056e31533a1ec11c
Parents: f66b9eb
Author: belliottsmith git...@sub.laerad.com
Authored: Wed Feb 12 11:36:35 2014 +
Committer: Pavel Yaskevich xe...@apache.org
Committed: Wed Feb 12 17:25:07 2014 -0800

--
 .../org/apache/cassandra/stress/Operation.java  |  74 ++---
 .../apache/cassandra/stress/StressAction.java   |  13 ++-
 .../cassandra/stress/generatedata/DataGen.java  |   6 +-
 .../stress/generatedata/DataGenBytesRandom.java |   2 +-
 .../stress/generatedata/DataGenHex.java |   2 +-
 .../generatedata/DataGenStringDictionary.java   |   6 +-
 .../generatedata/DataGenStringRepeats.java  |  16 +--
 .../cassandra/stress/generatedata/KeyGen.java   |   2 +-
 .../cassandra/stress/generatedata/RowGen.java   |   4 +-
 .../operations/CqlIndexedRangeSlicer.java   |   2 +-
 .../stress/operations/CqlInserter.java  |   2 +-
 .../stress/operations/CqlOperation.java | 111 +++
 .../cassandra/stress/operations/CqlReader.java  |   8 +-
 .../stress/operations/ThriftCounterAdder.java   |   2 +-
 .../operations/ThriftIndexedRangeSlicer.java|   2 +-
 .../stress/operations/ThriftInserter.java   |   6 +-
 .../stress/operations/ThriftReader.java |  30 -
 .../cassandra/stress/settings/SettingsKey.java  |   7 +-
 18 files changed, 244 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/10b61736/tools/stress/src/org/apache/cassandra/stress/Operation.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/Operation.java 
b/tools/stress/src/org/apache/cassandra/stress/Operation.java
index fa7a453..4519b19 100644
--- a/tools/stress/src/org/apache/cassandra/stress/Operation.java
+++ b/tools/stress/src/org/apache/cassandra/stress/Operation.java
@@ -21,10 +21,25 @@ import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.EnumMap;
 import java.util.List;
 
 import org.apache.cassandra.stress.generatedata.KeyGen;
 import org.apache.cassandra.stress.generatedata.RowGen;
+import org.apache.cassandra.stress.operations.CqlCounterAdder;
+import org.apache.cassandra.stress.operations.CqlCounterGetter;
+import org.apache.cassandra.stress.operations.CqlIndexedRangeSlicer;
+import org.apache.cassandra.stress.operations.CqlInserter;
+import org.apache.cassandra.stress.operations.CqlMultiGetter;
+import org.apache.cassandra.stress.operations.CqlRangeSlicer;
+import org.apache.cassandra.stress.operations.CqlReader;
+import org.apache.cassandra.stress.operations.ThriftCounterAdder;
+import org.apache.cassandra.stress.operations.ThriftCounterGetter;
+import org.apache.cassandra.stress.operations.ThriftIndexedRangeSlicer;
+import org.apache.cassandra.stress.operations.ThriftInserter;
+import org.apache.cassandra.stress.operations.ThriftMultiGetter;
+import org.apache.cassandra.stress.operations.ThriftRangeSlicer;
+import org.apache.cassandra.stress.operations.ThriftReader;
 import org.apache.cassandra.stress.settings.Command;
 import org.apache.cassandra.stress.settings.CqlVersion;
 import org.apache.cassandra.stress.settings.SettingsCommandMixed;
@@ -66,7 +81,8 @@ public abstract class Operation
         public final RowGen rowGen;
         public final List<ColumnParent> columnParents;
         public final StressMetrics metrics;
-        public final SettingsCommandMixed.CommandSelector readWriteSelector;
+        public final SettingsCommandMixed.CommandSelector commandSelector;
+        private final EnumMap<Command, State> substates;
         private Object cqlCache;
 
         public State(Command type, StressSettings settings, StressMetrics metrics)
@@ -74,9 +90,15 @@ public abstract class Operation
             this.type = type;
             this.timer = metrics.getTiming().newTimer();
             if (type == Command.MIXED)
-                readWriteSelector = ((SettingsCommandMixed) settings.command).selector();
+            {
+                commandSelector = ((SettingsCommandMixed) settings.command).selector();
+                substates = new EnumMap<>(Command.class);
+            }
             else
-                readWriteSelector = null;
+            {
+                commandSelector = null;
+                substates = null;
+            }
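The MIXED-path change above introduces an EnumMap of per-command sub-states; the lazy lookup-or-create pattern behind it, in isolation and with simplified hypothetical types, is:

```java
import java.util.EnumMap;

public class SubstateSketch
{
    enum Command { READ, WRITE, COUNTER_READ }

    static final class State
    {
        final Command type;
        State(Command type) { this.type = type; }
    }

    // One State per concrete command, created on first use and then reused,
    // mirroring the EnumMap<Command, State> substates field in the diff above.
    static final EnumMap<Command, State> substates = new EnumMap<>(Command.class);

    static State substate(Command c)
    {
        State s = substates.get(c);
        if (s == null)
        {
            s = new State(c);
            substates.put(c, s);
        }
        return s;
    }

    public static void main(String[] args)
    {
        // The same instance comes back on repeated lookups.
        System.out.println(substate(Command.READ) == substate(Command.READ)); // true
    }
}
```

EnumMap is a natural fit here: lookups are array-indexed by ordinal, so per-operation dispatch stays cheap.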
   

git commit: remove dead code

2014-02-12 Thread dbrosius
Updated Branches:
  refs/heads/cassandra-2.0 de6a74a2d -> 5f60fcc39


remove dead code


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5f60fcc3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5f60fcc3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5f60fcc3

Branch: refs/heads/cassandra-2.0
Commit: 5f60fcc39c9ac8958bd2d8565404515f919d8124
Parents: de6a74a
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed Feb 12 20:50:35 2014 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed Feb 12 20:50:35 2014 -0500

--
 src/java/org/apache/cassandra/service/ReadCallback.java | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f60fcc3/src/java/org/apache/cassandra/service/ReadCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/ReadCallback.java 
b/src/java/org/apache/cassandra/service/ReadCallback.java
index afff530..777ef90 100644
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@ -96,9 +96,6 @@ public class ReadCallbackTMessage, TResolved implements 
IAsyncCallbackTMessag
 if (!await(command.getTimeout(), TimeUnit.MILLISECONDS))
 {
 // Same as for writes, see AbstractWriteResponseHandler
-int acks = received.get();
-if (resolver.isDataPresent() && acks >= blockfor)
-acks = blockfor - 1;
 ReadTimeoutException ex = new 
ReadTimeoutException(consistencyLevel, received.get(), blockfor, 
resolver.isDataPresent());
 if (logger.isDebugEnabled())
logger.debug("Read timeout: {}", ex.toString());
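Why these three lines were dead code: the local `acks` was computed and clamped, but the `ReadTimeoutException` on the next line is constructed from `received.get()` directly, so the clamped value was never read. A stand-alone illustration of the dead-store pattern (plain locals stand in for the Cassandra fields):

```java
public class DeadStoreDemo {
    public static void main(String[] args) {
        // Stand-ins for the ReadCallback state; values are hypothetical.
        int received = 3, blockfor = 2;
        boolean dataPresent = true;

        // The pattern the commit removed: a local is computed and clamped...
        int acks = received;
        if (dataPresent && acks >= blockfor)
            acks = blockfor - 1;

        // ...but never read again: the message uses `received`, not `acks`,
        // so both assignments above are dead stores.
        System.out.println("Read timeout: received=" + received
                + " blockfor=" + blockfor);
    }
}
```

Static analyzers flag this as a "dead store": removing it changes no observable behavior, which is exactly why the commit can delete it safely.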



[1/2] git commit: remove dead code

2014-02-12 Thread dbrosius
Updated Branches:
  refs/heads/trunk 10b617364 -> 4b4a8dd4f


remove dead code


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5f60fcc3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5f60fcc3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5f60fcc3

Branch: refs/heads/trunk
Commit: 5f60fcc39c9ac8958bd2d8565404515f919d8124
Parents: de6a74a
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed Feb 12 20:50:35 2014 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed Feb 12 20:50:35 2014 -0500

--
 src/java/org/apache/cassandra/service/ReadCallback.java | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f60fcc3/src/java/org/apache/cassandra/service/ReadCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/ReadCallback.java 
b/src/java/org/apache/cassandra/service/ReadCallback.java
index afff530..777ef90 100644
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@ -96,9 +96,6 @@ public class ReadCallback<TMessage, TResolved> implements 
IAsyncCallback<TMessag
 if (!await(command.getTimeout(), TimeUnit.MILLISECONDS))
 {
 // Same as for writes, see AbstractWriteResponseHandler
-int acks = received.get();
-if (resolver.isDataPresent() && acks >= blockfor)
-acks = blockfor - 1;
 ReadTimeoutException ex = new 
ReadTimeoutException(consistencyLevel, received.get(), blockfor, 
resolver.isDataPresent());
 if (logger.isDebugEnabled())
logger.debug("Read timeout: {}", ex.toString());



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2014-02-12 Thread dbrosius
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4b4a8dd4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4b4a8dd4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4b4a8dd4

Branch: refs/heads/trunk
Commit: 4b4a8dd4f04cef4dcd4943cf0ac9a5b3c58c5272
Parents: 10b6173 5f60fcc
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed Feb 12 20:57:51 2014 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed Feb 12 20:57:51 2014 -0500

--
 src/java/org/apache/cassandra/service/ReadCallback.java | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4b4a8dd4/src/java/org/apache/cassandra/service/ReadCallback.java
--
diff --cc src/java/org/apache/cassandra/service/ReadCallback.java
index 0c0d50c,777ef90..c43feaa
--- a/src/java/org/apache/cassandra/service/ReadCallback.java
+++ b/src/java/org/apache/cassandra/service/ReadCallback.java
@@@ -98,10 -96,7 +98,8 @@@ public class ReadCallback<TMessage, TRe
  if (!await(command.getTimeout(), TimeUnit.MILLISECONDS))
  {
  // Same as for writes, see AbstractWriteResponseHandler
- int acks = received;
- if (resolver.isDataPresent() && acks >= blockfor)
- acks = blockfor - 1;
 -ReadTimeoutException ex = new 
ReadTimeoutException(consistencyLevel, received.get(), blockfor, 
resolver.isDataPresent());
 +ReadTimeoutException ex = new 
ReadTimeoutException(consistencyLevel, received, blockfor, 
resolver.isDataPresent());
++
  if (logger.isDebugEnabled())
  logger.debug("Read timeout: {}", ex.toString());
  throw ex;



git commit: use long math for long results

2014-02-12 Thread dbrosius
Updated Branches:
  refs/heads/trunk 4b4a8dd4f -> 951fc8554


use long math for long results


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/951fc855
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/951fc855
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/951fc855

Branch: refs/heads/trunk
Commit: 951fc855496826f9785ad0df66e8d30badf51e4b
Parents: 4b4a8dd
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed Feb 12 20:59:58 2014 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed Feb 12 20:59:58 2014 -0500

--
 .../apache/cassandra/stress/generatedata/DataGenStringRepeats.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/951fc855/tools/stress/src/org/apache/cassandra/stress/generatedata/DataGenStringRepeats.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/generatedata/DataGenStringRepeats.java
 
b/tools/stress/src/org/apache/cassandra/stress/generatedata/DataGenStringRepeats.java
index 4c5bb89..62ea922 100644
--- 
a/tools/stress/src/org/apache/cassandra/stress/generatedata/DataGenStringRepeats.java
+++ 
b/tools/stress/src/org/apache/cassandra/stress/generatedata/DataGenStringRepeats.java
@@ -50,7 +50,7 @@ public class DataGenStringRepeats extends DataGen
 
 private byte[] getData(long index, int column, ByteBuffer seed)
 {
-final long key = (column * repeatFrequency) + ((seed == null ? index : 
Math.abs(seed.hashCode())) % repeatFrequency);
+final long key = ((long)column * repeatFrequency) + ((seed == null ? 
index : Math.abs(seed.hashCode())) % repeatFrequency);
 byte[] r = cache.get(key);
 if (r != null)
 return r;
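The one-character fix above matters because in Java an `int * int` product is evaluated in 32-bit arithmetic and only then widened for the `long` assignment, so large products silently wrap before the widening. Casting one operand to `long` forces the whole multiplication into 64-bit math. A self-contained illustration with hypothetical values (the real `repeatFrequency` comes from the stress settings):

```java
public class LongMathDemo {
    public static void main(String[] args) {
        int column = 3;
        int repeatFrequency = 1_000_000_000;

        // Bug pattern: int * int wraps around 2^31 before the assignment widens it.
        long overflowed = column * repeatFrequency;

        // Fix pattern: casting one operand makes the multiply 64-bit from the start.
        long correct = (long) column * repeatFrequency;

        System.out.println(overflowed);   // prints -1294967296
        System.out.println(correct);      // prints 3000000000
    }
}
```

The overflowed value is 3,000,000,000 reduced modulo 2^32 into the signed int range, i.e. 3,000,000,000 - 4,294,967,296 = -1,294,967,296.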



git commit: remove dead params

2014-02-12 Thread dbrosius
Updated Branches:
  refs/heads/trunk 951fc8554 -> 4334f99f4


remove dead params


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4334f99f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4334f99f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4334f99f

Branch: refs/heads/trunk
Commit: 4334f99f4bbe0f8361d2063f53b1d10cbc4ba8d0
Parents: 951fc85
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed Feb 12 21:13:12 2014 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed Feb 12 21:13:12 2014 -0500

--
 .../org/apache/cassandra/db/RangeTombstoneListTest.java |  6 --
 .../locator/OldNetworkTopologyStrategyTest.java | 12 ++--
 .../stress/generatedata/DataGenStringDictionary.java|  6 +++---
 3 files changed, 9 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4334f99f/test/unit/org/apache/cassandra/db/RangeTombstoneListTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/RangeTombstoneListTest.java 
b/test/unit/org/apache/cassandra/db/RangeTombstoneListTest.java
index 92a0b4a..7dc86da 100644
--- a/test/unit/org/apache/cassandra/db/RangeTombstoneListTest.java
+++ b/test/unit/org/apache/cassandra/db/RangeTombstoneListTest.java
@@ -196,12 +196,6 @@ public class RangeTombstoneListTest
 @Test
 public void addAllTest()
 {
-//addAllTest(false);
-addAllTest(true);
-}
-
-private void addAllTest(boolean doMerge)
-{
 RangeTombstoneList l1 = new RangeTombstoneList(cmp, 0);
 l1.add(rt(0, 4, 5));
 l1.add(rt(6, 10, 2));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4334f99f/test/unit/org/apache/cassandra/locator/OldNetworkTopologyStrategyTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/locator/OldNetworkTopologyStrategyTest.java 
b/test/unit/org/apache/cassandra/locator/OldNetworkTopologyStrategyTest.java
index a11a128..c7470f8 100644
--- a/test/unit/org/apache/cassandra/locator/OldNetworkTopologyStrategyTest.java
+++ b/test/unit/org/apache/cassandra/locator/OldNetworkTopologyStrategyTest.java
@@ -181,7 +181,7 @@ public class OldNetworkTopologyStrategyTest extends 
SchemaLoader
 BigIntegerToken newToken = new 
BigIntegerToken("21267647932558653966460912964485513216");
 BigIntegerToken[] tokens = initTokens();
 BigIntegerToken[] tokensAfterMove = initTokensAfterMove(tokens, 
movingNodeIdx, newToken);
-Pair<Set<Range<Token>>, Set<Range<Token>>> ranges = 
calculateStreamAndFetchRanges(tokens, tokensAfterMove, movingNodeIdx, newToken);
+Pair<Set<Range<Token>>, Set<Range<Token>>> ranges = 
calculateStreamAndFetchRanges(tokens, tokensAfterMove, movingNodeIdx);
 
 assertEquals(ranges.left.iterator().next().left, 
tokensAfterMove[movingNodeIdx]);
 assertEquals(ranges.left.iterator().next().right, 
tokens[movingNodeIdx]);
@@ -198,7 +198,7 @@ public class OldNetworkTopologyStrategyTest extends 
SchemaLoader
 BigIntegerToken newToken = new 
BigIntegerToken("35267647932558653966460912964485513216");
 BigIntegerToken[] tokens = initTokens();
 BigIntegerToken[] tokensAfterMove = initTokensAfterMove(tokens, 
movingNodeIdx, newToken);
-Pair<Set<Range<Token>>, Set<Range<Token>>> ranges = 
calculateStreamAndFetchRanges(tokens, tokensAfterMove, movingNodeIdx, newToken);
+Pair<Set<Range<Token>>, Set<Range<Token>>> ranges = 
calculateStreamAndFetchRanges(tokens, tokensAfterMove, movingNodeIdx);
 
assertEquals("No data should be streamed", ranges.left.size(), 0);
 assertEquals(ranges.right.iterator().next().left, 
tokens[movingNodeIdx]);
@@ -216,7 +216,7 @@ public class OldNetworkTopologyStrategyTest extends 
SchemaLoader
 BigIntegerToken newToken = new 
BigIntegerToken("90070591730234615865843651857942052864");
 BigIntegerToken[] tokens = initTokens();
 BigIntegerToken[] tokensAfterMove = initTokensAfterMove(tokens, 
movingNodeIdx, newToken);
-PairSetRangeToken, SetRangeToken ranges = 
calculateStreamAndFetchRanges(tokens, tokensAfterMove, movingNodeIdx, newToken);
+PairSetRangeToken, SetRangeToken ranges = 
calculateStreamAndFetchRanges(tokens, tokensAfterMove, movingNodeIdx);
 
 // sort the results, so they can be compared
 Range[] toStream = ranges.left.toArray(new Range[0]);
@@ -248,7 +248,7 @@ public class OldNetworkTopologyStrategyTest extends 
SchemaLoader
 BigIntegerToken newToken = new 
BigIntegerToken("52535295865117307932921825928971026432");
 BigIntegerToken[] tokens = initTokens();
 BigIntegerToken[] tokensAfterMove = initTokensAfterMove(tokens, 

[jira] [Commented] (CASSANDRA-1983) Make sstable filenames contain a UUID instead of increasing integer

2014-02-12 Thread Daniel Shelepov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13900054#comment-13900054
 ] 

Daniel Shelepov commented on CASSANDRA-1983:


Is this still needed?  Naming in 2.0+ is still incremental as far as I can 
tell.  

I'd like to work on this fix while I'm learning the codebase.

 Make sstable filenames contain a UUID instead of increasing integer
 ---

 Key: CASSANDRA-1983
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1983
 Project: Cassandra
  Issue Type: Improvement
Reporter: David King
Priority: Minor

 sstable filenames look like CFName-1569-Index.db, containing an integer for 
 uniqueness. This makes it possible (however unlikely) that the integer could 
 overflow, which could be a problem. It also makes it difficult to collapse 
 multiple nodes into a single one with rsync. I do this occasionally for 
 testing: I'll copy our 20 node cluster into only 3 nodes by copying all of 
 the data files and running cleanup; at present this requires a manual step of 
 uniquifying the overlapping sstable names. Instead of an incrementing 
 integer, it would be handy if these contained a UUID or somesuch that 
 guarantees uniqueness across the cluster.
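A minimal sketch of what UUID-based component naming could look like (purely illustrative: the helper name and format string are invented here, not the scheme Cassandra adopted for this ticket):

```java
import java.util.UUID;

public class SSTableNameSketch {
    // Hypothetical helper: builds a component filename with a random UUID
    // in place of the incrementing generation number.
    static String sstableName(String cfName, UUID id, String component) {
        return String.format("%s-%s-%s.db", cfName, id, component);
    }

    public static void main(String[] args) {
        UUID a = UUID.randomUUID();
        UUID b = UUID.randomUUID();
        // Names generated independently (e.g. on two different nodes) will
        // not collide, which is the rsync-merge property the reporter wants.
        System.out.println(!sstableName("CFName", a, "Index")
                .equals(sstableName("CFName", b, "Index")));   // prints true
    }
}
```

With random (version 4) UUIDs, collision probability is negligible even across a large cluster, so merging data directories from many nodes needs no renaming step.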



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

