Re: Proprietary Replication Strategies: Cassandra Driver Support

2016-10-10 Thread Ben Bromhead
FYI there is an everywhere strategy waiting to be accepted:

https://issues.apache.org/jira/browse/CASSANDRA-12629

On Sat, 8 Oct 2016 at 10:56 Vladimir Yudovin  wrote:

Well, it can be useful in some scenarios - e.g. temporary tables on nearest
or the same node.



Best regards, Vladimir Yudovin,

Winguzone - Hosted Cloud Cassandra on Azure and SoftLayer.

Launch your cluster in minutes.









 On Sat, 08 Oct 2016 13:44:00 -0400 Jeff Jirsa <jji...@gmail.com> wrote 



I'm sure that's what he meant; I just disagree that it sounds useful.



--

Jeff Jirsa





 On Oct 8, 2016, at 10:33 AM, Vladimir Yudovin <vla...@winguzone.com> wrote:



 As far as I understand, Edward meant having an option to determine the actual
storage node on the client side, via the driver, bypassing the key hash/token
mechanism.



 Best regards, Vladimir Yudovin,

 Winguzone - Hosted Cloud Cassandra on Azure and SoftLayer.

 Launch your cluster in minutes.









  On Sat, 08 Oct 2016 13:17:14 -0400 Jeff Jirsa <jji...@gmail.com> wrote 



 That sounds awful, especially since you could just use SimpleStrategy
with RF=1 and then bootstrap / decom would handle resharding for you as
expected.



 --

 Jeff Jirsa





 > On Oct 8, 2016, at 10:09 AM, Edward Capriolo <edlinuxg...@gmail.com> wrote:
 >
 > I have contemplated using LocalStrategy as a "do it yourself client side
 > sharding system".
 >
 > On Sat, Oct 8, 2016 at 12:37 AM, Vladimir Yudovin <vla...@winguzone.com>
 > wrote:

 >
 >> Hi Prasenjit,
 >> I would like to get the replication factors of the key-spaces using the
 >> strategies in the same way we get the replication factors for Simple and
 >> NetworkTopology.
 >> Actually LocalStrategy has no replication factor:
 >>
 >> SELECT * FROM system_schema.keyspaces WHERE keyspace_name IN ('system',
 >> 'system_schema');
 >>
 >>  keyspace_name | durable_writes | replication
 >> ---------------+----------------+----------------------------------------------------------
 >>         system |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 >>  system_schema |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}

 >>
 >> It's used for internal tables and not accessible to users:
 >>
 >> CREATE KEYSPACE excel WITH replication = {'class': 'LocalStrategy'};
 >> ConfigurationException: Unable to use given strategy class: LocalStrategy
 >> is reserved for internal use.

 >>
 >> Best regards, Vladimir Yudovin,
 >> Winguzone - Hosted Cloud Cassandra on Azure and SoftLayer.
 >> Launch your cluster in minutes.
 >>
 >>  On Fri, 07 Oct 2016 17:06:09 -0400 Prasenjit
 >> Sarkar <prasenjit.sar...@datos.io> wrote 
 >>
 >> Thanks Vlad and Jeremiah.
 >>
 >> There were questions about support, so let me address that in more detail.
 >>
 >> If I look at the latest Cassandra python driver, the support for
 >> LocalStrategy is very limited (code snippet shown below) and the support
 >> for EverywhereStrategy is non-existent. By limited I mean that the
 >> Cassandra python driver only provides the name of the strategy for
 >> LocalStrategy and not much else.
 >>
 >> What I would like (and am happy to help with) is for the Cassandra python
 >> driver to provide support for Local and Everywhere to the same extent it
 >> is provided for Simple and NetworkTopology. I understand that token aware
 >> routing is not applicable to either strategy, but I would like to get the
 >> replication factors of the key-spaces using the strategies in the same way
 >> we get the replication factors for Simple and NetworkTopology.
 >>
 >> Hope this helps,
 >> Prasenjit
 >>

 >> class LocalStrategy(ReplicationStrategy):
 >>     def __init__(self, options_map):
 >>         pass
 >>
 >>     def make_token_replica_map(self, token_to_host_owner, ring):
 >>         return {}
 >>
 >>     def export_for_schema(self):
 >>         """
 >>         Returns a string version of these replication options which are
 >>         suitable for use in a CREATE KEYSPACE statement.
 >>         """
 >>         return "{'class': 'LocalStrategy'}"
 >>
 >>     def __eq__(self, other):
 >>         return isinstance(other, LocalStrategy)

 >>
 >> On Fri, Oct 7, 2016 at 11:56 AM, Jeremiah D Jordan <
 >> jeremiah.jor...@gmail.com> wrote:
 >>
 >>> What kind of support are you thinking of? All drivers should support
 >>> them already, drivers shouldn’t care about replication strategy except
 >>> when trying to do token aware routing.
 >>> But since anyone can make a custom replication strategy, drivers that
 >>> do token 
Re: Bootstrapping data from Cassandra 2.2.5 datacenter to 3.0.8 datacenter fails because of streaming errors

2016-10-10 Thread Jonathan Haddad
You can't stream between major versions. Don't tear down your first data
center, upgrade it instead.
On Mon, Oct 10, 2016 at 4:35 PM Abhishek Verma  wrote:

> Hi Cassandra users,
>
> We are trying to upgrade our Cassandra version from 2.2.5 to 3.0.8
> (running on Mesos, but that's beside the point). We have two datacenters,
> so in order to preserve our data, we are trying to upgrade one datacenter
> at a time.
>
> Initially both DCs (dc1 and dc2) are running 2.2.5. The idea is to tear
> down dc1 completely (delete all the data in it), bring it up with 3.0.8,
> let data replicate from dc2 to dc1, and then tear down dc2, bring it up
> with 3.0.8 and replicate data from dc1.
>
> I am able to reproduce the problem on bare metal clusters running on 3
> nodes. I am using Oracle's server-jre-8u74-linux-x64 JRE.
>
> *Node A*: Downloaded 2.2.5-bin.tar.gz, changed the seeds to include its
> own IP address, changed listen_address and rpc_address to its own IP and
> changed endpoint_snitch to GossipingPropertyFileSnitch. I
> changed conf/cassandra-rackdc.properties to
> dc=dc2
> rack=rack2
> This node started up fine and is UN in nodetool status in dc2.
>
> I used CQL shell to create a table and insert 3 rows:
> verma@x:~/apache-cassandra-2.2.5$ bin/cqlsh $HOSTNAME
> Connected to Test Cluster at x:9042.
> [cqlsh 5.0.1 | Cassandra 2.2.5 | CQL spec 3.3.1 | Native protocol v4]
> Use HELP for help.
> cqlsh> desc tmp
>
> CREATE KEYSPACE tmp WITH replication = {'class':
> 'NetworkTopologyStrategy', 'dc1': '1', 'dc2': '1'}  AND durable_writes =
> true;
>
> CREATE TABLE tmp.map (
> key text PRIMARY KEY,
> value text
> )...;
> cqlsh> select * from tmp.map;
>
>  key | value
> -+---
>   k1 |v1
>   k3 |v3
>   k2 |v2
>
>
> *Node B:* Downloaded 3.0.8-bin.tar.gz, changed the seeds to include
> itself and node A, changed listen_address and rpc_address to its own IP,
> changed endpoint_snitch to GossipingPropertyFileSnitch. I did not change
> conf/cassandra-rackdc.properties and its contents are
> dc=dc1
> rack=rack1
>
> In the logs, I see:
> INFO  [main] 2016-10-10 22:42:42,850 MessagingService.java:557 - Starting
> Messaging Service on /10.164.32.29:7000 (eth0)
> INFO  [main] 2016-10-10 22:42:42,864 StorageService.java:784 - This node
> will not auto bootstrap because it is configured to be a seed node.
>
> So I start a third node:
> *Node C:* Downloaded 3.0.8-bin.tar.gz, changed the seeds to include node
> A and node B, changed listen_address and rpc_address to its own IP, changed
> endpoint_snitch to GossipingPropertyFileSnitch. I did not change
> conf/cassandra-rackdc.properties.
> Now, nodetool status shows:
>
> verma@xxx:~/apache-cassandra-3.0.8$ bin/nodetool status
> Datacenter: dc1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Tokens   Owns (effective)  Host ID
> Rack
> UJ 87.81 KB   256  ?
> 9064832d-ed5c-4c42-ad5a-f754b52b670c  rack1
> UN107.72 KB  256  100.0%
>  28b1043f-115b-46a5-b6b6-8609829cde76  rack1
> Datacenter: dc2
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Tokens   Owns (effective)  Host ID
> Rack
> UN  73.2 KB256  100.0%
>  09cc542c-2299-45a5-a4d1-159c239ded37  rack2
>
> Nodetool describe cluster shows:
> verma@xxx:~/apache-cassandra-3.0.8$ bin/nodetool describecluster
> Cluster Information:
> Name: Test Cluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> c2a2bb4f-7d31-3fb8-a216-00b41a643650: [, ]
>
> 9770e3c5-3135-32e2-b761-65a0f6d8824e: []
>
> Note that there are two schema versions and they don't match.
>
> I see the following in the system.log:
>
> INFO  [InternalResponseStage:1] 2016-10-10 22:48:36,055
> ColumnFamilyStore.java:390 - Initializing system_auth.roles
> INFO  [main] 2016-10-10 22:48:36,316 StorageService.java:1149 - JOINING:
> waiting for schema information to complete
> INFO  [main] 2016-10-10 22:48:36,316 StorageService.java:1149 - JOINING:
> schema complete, ready to bootstrap
> INFO  [main] 2016-10-10 22:48:36,316 StorageService.java:1149 - JOINING:
> waiting for pending range calculation
> INFO  [main] 2016-10-10 22:48:36,317 StorageService.java:1149 - JOINING:
> calculation complete, ready to bootstrap
> INFO  [main] 2016-10-10 22:48:36,319 StorageService.java:1149 - JOINING:
> getting bootstrap token
> INFO  [main] 2016-10-10 22:48:36,357 StorageService.java:1149 - JOINING:
> sleeping 3 ms for pending range setup
> INFO  [main] 2016-10-10 22:49:06,358 StorageService.java:1149 - JOINING:
> Starting to bootstrap...
> INFO  [main] 2016-10-10 22:49:06,494 StreamResultFuture.java:87 - [Stream
> #bfb5e470-8f3b-11e6-b69a-1b451159408e] Executing streaming plan for
> Bootstrap
> INFO  [StreamConnectionEstablisher:1] 

Bootstrapping data from Cassandra 2.2.5 datacenter to 3.0.8 datacenter fails because of streaming errors

2016-10-10 Thread Abhishek Verma
Hi Cassandra users,

We are trying to upgrade our Cassandra version from 2.2.5 to 3.0.8 (running
on Mesos, but that's beside the point). We have two datacenters, so in
order to preserve our data, we are trying to upgrade one datacenter at a
time.

Initially both DCs (dc1 and dc2) are running 2.2.5. The idea is to tear
down dc1 completely (delete all the data in it), bring it up with 3.0.8,
let data replicate from dc2 to dc1, and then tear down dc2, bring it up
with 3.0.8 and replicate data from dc1.

I am able to reproduce the problem on bare metal clusters running on 3
nodes. I am using Oracle's server-jre-8u74-linux-x64 JRE.

*Node A*: Downloaded 2.2.5-bin.tar.gz, changed the seeds to include its own
IP address, changed listen_address and rpc_address to its own IP and
changed endpoint_snitch to GossipingPropertyFileSnitch. I
changed conf/cassandra-rackdc.properties to
dc=dc2
rack=rack2
This node started up fine and is UN in nodetool status in dc2.

I used CQL shell to create a table and insert 3 rows:
verma@x:~/apache-cassandra-2.2.5$ bin/cqlsh $HOSTNAME
Connected to Test Cluster at x:9042.
[cqlsh 5.0.1 | Cassandra 2.2.5 | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh> desc tmp

CREATE KEYSPACE tmp WITH replication = {'class': 'NetworkTopologyStrategy',
'dc1': '1', 'dc2': '1'}  AND durable_writes = true;

CREATE TABLE tmp.map (
key text PRIMARY KEY,
value text
)...;
cqlsh> select * from tmp.map;

 key | value
-+---
  k1 |v1
  k3 |v3
  k2 |v2


*Node B:* Downloaded 3.0.8-bin.tar.gz, changed the seeds to include itself
and node A, changed listen_address and rpc_address to its own IP, changed
endpoint_snitch to GossipingPropertyFileSnitch. I did not change
conf/cassandra-rackdc.properties and its contents are
dc=dc1
rack=rack1

In the logs, I see:
INFO  [main] 2016-10-10 22:42:42,850 MessagingService.java:557 - Starting
Messaging Service on /10.164.32.29:7000 (eth0)
INFO  [main] 2016-10-10 22:42:42,864 StorageService.java:784 - This node
will not auto bootstrap because it is configured to be a seed node.

So I start a third node:
*Node C:* Downloaded 3.0.8-bin.tar.gz, changed the seeds to include node A
and node B, changed listen_address and rpc_address to its own IP, changed
endpoint_snitch to GossipingPropertyFileSnitch. I did not change
conf/cassandra-rackdc.properties.
Now, nodetool status shows:

verma@xxx:~/apache-cassandra-3.0.8$ bin/nodetool status
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load   Tokens   Owns (effective)  Host ID
  Rack
UJ 87.81 KB   256  ?
9064832d-ed5c-4c42-ad5a-f754b52b670c  rack1
UN107.72 KB  256  100.0%
 28b1043f-115b-46a5-b6b6-8609829cde76  rack1
Datacenter: dc2
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load   Tokens   Owns (effective)  Host ID
  Rack
UN  73.2 KB256  100.0%
 09cc542c-2299-45a5-a4d1-159c239ded37  rack2

Nodetool describe cluster shows:
verma@xxx:~/apache-cassandra-3.0.8$ bin/nodetool describecluster
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
c2a2bb4f-7d31-3fb8-a216-00b41a643650: [, ]

9770e3c5-3135-32e2-b761-65a0f6d8824e: []

Note that there are two schema versions and they don't match.
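[Editor's note] A schema disagreement like this can also be detected from a client: the `schema_version` column of the `system.local` and `system.peers` tables carries exactly the versions that `nodetool describecluster` prints. The helper below is an illustrative sketch, not part of any driver.

```python
# Hedged sketch: decide schema agreement from the schema_version values a
# client can read out of the system tables. Against a live cluster the
# versions would come from
#   SELECT schema_version FROM system.local
#   SELECT schema_version FROM system.peers
# (one row per peer); here we feed the two versions seen above directly.

def schema_versions_agree(versions):
    """True when every node that reported a version reports the same one."""
    distinct = {v for v in versions if v is not None}
    return len(distinct) <= 1

# The two versions from the describecluster output above:
mismatched = ['c2a2bb4f-7d31-3fb8-a216-00b41a643650',
              '9770e3c5-3135-32e2-b761-65a0f6d8824e']
print(schema_versions_agree(mismatched))  # False: the cluster disagrees
```

Bootstrap will not proceed past "waiting for schema information" until this check would return True for all live nodes.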

I see the following in the system.log:

INFO  [InternalResponseStage:1] 2016-10-10 22:48:36,055
ColumnFamilyStore.java:390 - Initializing system_auth.roles
INFO  [main] 2016-10-10 22:48:36,316 StorageService.java:1149 - JOINING:
waiting for schema information to complete
INFO  [main] 2016-10-10 22:48:36,316 StorageService.java:1149 - JOINING:
schema complete, ready to bootstrap
INFO  [main] 2016-10-10 22:48:36,316 StorageService.java:1149 - JOINING:
waiting for pending range calculation
INFO  [main] 2016-10-10 22:48:36,317 StorageService.java:1149 - JOINING:
calculation complete, ready to bootstrap
INFO  [main] 2016-10-10 22:48:36,319 StorageService.java:1149 - JOINING:
getting bootstrap token
INFO  [main] 2016-10-10 22:48:36,357 StorageService.java:1149 - JOINING:
sleeping 3 ms for pending range setup
INFO  [main] 2016-10-10 22:49:06,358 StorageService.java:1149 - JOINING:
Starting to bootstrap...
INFO  [main] 2016-10-10 22:49:06,494 StreamResultFuture.java:87 - [Stream
#bfb5e470-8f3b-11e6-b69a-1b451159408e] Executing streaming plan for
Bootstrap
INFO  [StreamConnectionEstablisher:1] 2016-10-10 22:49:06,495
StreamSession.java:242 - [Stream #bfb5e470-8f3b-11e6-b69a-1b451159408e]
Starting streaming to /
INFO  [StreamConnectionEstablisher:2] 2016-10-10 22:49:06,495
StreamSession.java:242 - [Stream #bfb5e470-8f3b-11e6-b69a-1b451159408e]
Starting streaming to /
INFO  [StreamConnectionEstablisher:2] 2016-10-10 22:49:06,500
StreamCoordinator.java:213 - [Stream 

[RELEASE] Apache Cassandra 2.1.16 released

2016-10-10 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache
Cassandra version 2.1.16.

Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.

 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download
section:

 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 2.1 series. As always,
please pay attention to the release notes[2] and let us know[3] if you
encounter any problems.

Enjoy!

[1]: (CHANGES.txt) https://goo.gl/Unwb9s
[2]: (NEWS.txt) https://goo.gl/LuZHa5
[3]: https://issues.apache.org/jira/browse/CASSANDRA


CASSANDRA-12758 in 2.1? (was: Re: [VOTE] Release Apache Cassandra 2.1.16)

2016-10-10 Thread Michael Shuler
I also agree this is minor and did not intend to re-roll.

My question is whether CASSANDRA-12758 should go to the 'cassandra-2.1'
branch and be tagged with a fixver of '2.1.x' in JIRA. Does this minor
improvement fit the critical-fixes-only nature of the 2.1 branch and
belong in the next 2.1 release, or should it be left for 2.2+?

-- 
Kind regards,
Michael

On 10/10/2016 03:09 PM, Nate McCall wrote:
>> It's too minor for a re-roll, and safe enough to just apply yourself if you
>> want it.
> 
> Agreed.
> 
>>
>> On Mon, Oct 10, 2016 at 2:44 PM, Michael Shuler 
>> wrote:
>>
>>> Nate, do you think CASSANDRA-12758 should go to 2.1.x?
>>>
>>> --
>>> Michael
>>>
>>> On 10/10/2016 02:26 PM, Nate McCall wrote:
 Hi Romain,
 I appreciate you speaking up about this, but I stuck with my +1 in
 order to get 2.1.16 with the NTR fix out since I have seen
 CASSANDRA-11363 with every recent client installation. Also, running
 the patch in production produced results satisfactory enough to me to
 preclude the need for explicit monitoring added by your patch (though
 I do think it's a good idea to have a metric).

 Thanks for both the patch and bringing it up regardless.

 -Nate

 On Fri, Oct 7, 2016 at 11:45 AM, Romain Hardouin
  wrote:
> Hi,
> I use the "current 2.1.16" (commit cdd535fcac4ba79bb371e8373c6504d9e3978853)
> in production in 5 DCs (82 nodes) out of 7, and it works well! I've just had
> to add an MBean to track changes of the NTR queue length on top of cdd535f.
> This allows correlating it with other metrics and seeing the impact of a change.
> I've filed a ticket with patches for 2.1 and trunk:
> https://issues.apache.org/jira/browse/CASSANDRA-12758
> Do you think this MBean could land in the final 2.1.16, since it goes
> hand-in-hand with CASSANDRA-11363?
>
> Thanks,
> Romain
>>>
>>>



Re: [VOTE] Release Apache Cassandra 2.1.16

2016-10-10 Thread Nate McCall
> It's too minor for a re-roll, and safe enough to just apply yourself if you
> want it.

Agreed.

>
> On Mon, Oct 10, 2016 at 2:44 PM, Michael Shuler 
> wrote:
>
>> Nate, do you think CASSANDRA-12758 should go to 2.1.x?
>>
>> --
>> Michael
>>
>> On 10/10/2016 02:26 PM, Nate McCall wrote:
>> > Hi Romain,
>> > I appreciate you speaking up about this, but I stuck with my +1 in
>> > order to get 2.1.16 with the NTR fix out since I have seen
>> > CASSANDRA-11363 with every recent client installation. Also, running
>> > the patch in production produced results satisfactory enough to me to
>> > preclude the need for explicit monitoring added by your patch (though
>> > I do think it's a good idea to have a metric).
>> >
>> > Thanks for both the patch and bringing it up regardless.
>> >
>> > -Nate
>> >
>> > On Fri, Oct 7, 2016 at 11:45 AM, Romain Hardouin
>> >  wrote:
>> >> Hi,
>> >> I use the "current 2.1.16" (commit cdd535fcac4ba79bb371e8373c6504d9e3978853)
>> >> in production in 5 DCs (82 nodes) out of 7, and it works well! I've just had
>> >> to add an MBean to track changes of the NTR queue length on top of cdd535f.
>> >> This allows correlating it with other metrics and seeing the impact of a change.
>> >> I've filed a ticket with patches for 2.1 and trunk:
>> >> https://issues.apache.org/jira/browse/CASSANDRA-12758
>> >> Do you think this MBean could land in the final 2.1.16, since it goes
>> >> hand-in-hand with CASSANDRA-11363?
>> >>
>> >> Thanks,
>> >> Romain
>>
>>


Re: [VOTE] Release Apache Cassandra 2.1.16

2016-10-10 Thread Brandon Williams
It's too minor for a re-roll, and safe enough to just apply yourself if you
want it.

On Mon, Oct 10, 2016 at 2:44 PM, Michael Shuler 
wrote:

> Nate, do you think CASSANDRA-12758 should go to 2.1.x?
>
> --
> Michael
>
> On 10/10/2016 02:26 PM, Nate McCall wrote:
> > Hi Romain,
> > I appreciate you speaking up about this, but I stuck with my +1 in
> > order to get 2.1.16 with the NTR fix out since I have seen
> > CASSANDRA-11363 with every recent client installation. Also, running
> > the patch in production produced results satisfactory enough to me to
> > preclude the need for explicit monitoring added by your patch (though
> > I do think it's a good idea to have a metric).
> >
> > Thanks for both the patch and bringing it up regardless.
> >
> > -Nate
> >
> > On Fri, Oct 7, 2016 at 11:45 AM, Romain Hardouin
> >  wrote:
> >> Hi,
> >> I use the "current 2.1.16" (commit cdd535fcac4ba79bb371e8373c6504d9e3978853)
> >> in production in 5 DCs (82 nodes) out of 7, and it works well! I've just had
> >> to add an MBean to track changes of the NTR queue length on top of cdd535f.
> >> This allows correlating it with other metrics and seeing the impact of a change.
> >> I've filed a ticket with patches for 2.1 and trunk:
> >> https://issues.apache.org/jira/browse/CASSANDRA-12758
> >> Do you think this MBean could land in the final 2.1.16, since it goes
> >> hand-in-hand with CASSANDRA-11363?
> >>
> >> Thanks,
> >> Romain
>
>


Re: [VOTE] Release Apache Cassandra 2.1.16

2016-10-10 Thread Michael Shuler
Nate, do you think CASSANDRA-12758 should go to 2.1.x?

-- 
Michael

On 10/10/2016 02:26 PM, Nate McCall wrote:
> Hi Romain,
> I appreciate you speaking up about this, but I stuck with my +1 in
> order to get 2.1.16 with the NTR fix out since I have seen
> CASSANDRA-11363 with every recent client installation. Also, running
> the patch in production produced results satisfactory enough to me to
> preclude the need for explicit monitoring added by your patch (though
> I do think it's a good idea to have a metric).
> 
> Thanks for both the patch and bringing it up regardless.
> 
> -Nate
> 
> On Fri, Oct 7, 2016 at 11:45 AM, Romain Hardouin
>  wrote:
>> Hi,
>> I use the "current 2.1.16" (commit cdd535fcac4ba79bb371e8373c6504d9e3978853)
>> in production in 5 DCs (82 nodes) out of 7, and it works well! I've just had
>> to add an MBean to track changes of the NTR queue length on top of cdd535f.
>> This allows correlating it with other metrics and seeing the impact of a change.
>> I've filed a ticket with patches for 2.1 and trunk:
>> https://issues.apache.org/jira/browse/CASSANDRA-12758
>> Do you think this MBean could land in the final 2.1.16, since it goes
>> hand-in-hand with CASSANDRA-11363?
>>
>> Thanks,
>> Romain



Re: [VOTE] Release Apache Cassandra 2.1.16

2016-10-10 Thread Nate McCall
Hi Romain,
I appreciate you speaking up about this, but I stuck with my +1 in
order to get 2.1.16 with the NTR fix out since I have seen
CASSANDRA-11363 with every recent client installation. Also, running
the patch in production produced results satisfactory enough to me to
preclude the need for explicit monitoring added by your patch (though
I do think it's a good idea to have a metric).

Thanks for both the patch and bringing it up regardless.

-Nate

On Fri, Oct 7, 2016 at 11:45 AM, Romain Hardouin
 wrote:
> Hi,
> I use the "current 2.1.16" (commit cdd535fcac4ba79bb371e8373c6504d9e3978853)
> in production in 5 DCs (82 nodes) out of 7, and it works well! I've just had
> to add an MBean to track changes of the NTR queue length on top of cdd535f.
> This allows correlating it with other metrics and seeing the impact of a change.
> I've filed a ticket with patches for 2.1 and trunk:
> https://issues.apache.org/jira/browse/CASSANDRA-12758
> Do you think this MBean could land in the final 2.1.16, since it goes
> hand-in-hand with CASSANDRA-11363?
>
> Thanks,
> Romain


Failing tests 2016-10-10

2016-10-10 Thread Philip Thompson
trunk:

===
testall: 2 failures

org.apache.cassandra.service.RemoveTest
 .testLocalHostId
CASSANDRA-9541.

org.apache.cassandra.db.KeyspaceTest
 .testLimitSSTables
New failure. Needs a jira ticket.

===
dtest: All passed!

===
upgrade: Currently failing to complete a run. Looking into this.
===
novnode: 8 failures

Still the 8 paging failures from CASSANDRA-12666. Under review.


[VOTE RESULT] Release Apache Cassandra 2.1.16

2016-10-10 Thread Michael Shuler
Including myself, I count 8 +1 votes and no -1 votes for this release.
I'll get the release published!

-- 
Kind regards,
Michael

On 10/05/2016 06:09 PM, Michael Shuler wrote:
> I propose the following artifacts for release as 2.1.16.
> 
> sha1: 87034cd05964e64c6c925597279865a40a8c152f
> Git:
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.16-tentative
> Artifacts:
> https://repository.apache.org/content/repositories/orgapachecassandra-1129/org/apache/cassandra/apache-cassandra/2.1.16/
> Staging repository:
> https://repository.apache.org/content/repositories/orgapachecassandra-1129/
> 
> The Debian packages are available here: http://people.apache.org/~mshuler
> 
> The vote will be open for 72 hours (longer if needed).
> 
> [1]: (CHANGES.txt) https://goo.gl/xc7jn6
> [2]: (NEWS.txt) https://goo.gl/O0C3Gb
>