Re: Ye old singleton debate

2017-03-15 Thread Ali Akhtar
+1. Would be awesome if this could be mocked/tested. On Thu, Mar 16, 2017 at 3:47 AM, Edward Capriolo wrote: > This question came up today: OK, say you mock, how do you construct a working multi-process representation of how C* actually works from within a unit

Ye old singleton debate

2017-03-15 Thread Edward Capriolo
This question came up today: OK, say you mock, how do you construct a working multi-process representation of how C* actually works from within a unit test without running the code that actually constructs the cluster? 1) Don't do that (construct a multinode cluster in a test); just mock the crap
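One way to make that testable without standing up a cluster (a minimal sketch, not necessarily what Edward has in mind; the RowFetcher seam and the Mockito usage are illustrative assumptions) is to hide the driver behind a small interface so tests can substitute a mock:

    // Hypothetical seam: hide the session behind an interface so unit
    // tests can mock it instead of constructing a live cluster.
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;

    interface RowFetcher {
        ResultSet fetch(String cql);
    }

    final class DriverRowFetcher implements RowFetcher {
        private final Session session;

        DriverRowFetcher(Session session) {
            this.session = session;
        }

        @Override
        public ResultSet fetch(String cql) {
            return session.execute(cql);
        }
    }

    // In a test (e.g. with Mockito), no cluster is ever constructed:
    //   RowFetcher fetcher = Mockito.mock(RowFetcher.class);
    //   Mockito.when(fetcher.fetch("SELECT ...")).thenReturn(fakeResultSet);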

Re: Does "nodetool repair" need to be run on each node for a given table?

2017-03-15 Thread Thakrar, Jayesh
Thank you, Eric, for helping out. The reason I sent the question a second time is that I did not see my question or the first reply from the user group. After I sent the question a second time, I got a personal flame from somebody else too, and so examined my "spam" folders and that's where I

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Ryan Svihla
Give it a try and see how it behaves. On Mar 15, 2017 10:09 AM, "Frank Hughes" wrote: > Thanks Ryan, appreciated again. getPolicy just had this: Policy policy = new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()); so I guess I need Policy policy =

Re: Does "nodetool repair" need to be run on each node for a given table?

2017-03-15 Thread Eric Evans
On Tue, Mar 14, 2017 at 12:04 PM, daemeon reiydelle wrote: > Am I unreasonable in expecting a poster to have looked at the documentation > before posting? And that reposting the same query WITHOUT reading the > documents (when pointed out to them) when asked to do so is not

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Frank Hughes
Thanks Ryan, appreciated again. getPolicy just had this: Policy policy = new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()); so I guess I need Policy policy = new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build(), false); Frank On 2017-03-15 13:45 (-), Ryan Svihla
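For reference, a minimal sketch of the shuffle-disabled policy Frank lands on, using the java-driver 3.x API this thread is discussing (the contact point is a placeholder):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    // Two-arg ctor: 'false' disables replica shuffling, so requests
    // prefer the primary replica for each token.
    Cluster cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")
            .withLoadBalancingPolicy(
                    new TokenAwarePolicy(
                            DCAwareRoundRobinPolicy.builder().build(),
                            false /* shuffleReplicas */))
            .build();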

Re: Change the IP of a live node

2017-03-15 Thread Ryan Svihla
I've actually changed the IP address quite a bit (gossip complains on startup and happily picks up the new address). I think this may be easier to answer with a simpler question: can those IP addresses route to one another? As in, can the first node with 192.168.xx.xx hit the node with 10.179.xx.xx on that interface? On

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Ryan Svihla
I don't see what getPolicy is retrieving, but you want to use TokenAware with the shuffle-false option in the ctor; it defaults to shuffle true so that load is spread when people have horribly fat partitions. On Wed, Mar 15, 2017 at 9:41 AM, Frank Hughes wrote: > Thanks

Re: Change the IP of a live node

2017-03-15 Thread kurt greaves
Cassandra uses the IP address for more or less everything. It's possible to change it through some hackery; however, it's probably not a great idea. The node's system tables will still reference the old IP, which is likely your problem here. On 14 March 2017 at 18:58, George Sigletos
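To see whether stale addresses are still being advertised, a sketch (the contact point is a placeholder) that reads the peers table through the driver:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    // List the peer addresses the cluster still knows about.
    try (Cluster cluster = Cluster.builder().addContactPoint("10.179.0.1").build();
         Session session = cluster.connect()) {
        for (Row row : session.execute("SELECT peer, host_id FROM system.peers")) {
            System.out.println(row.getInet("peer") + " -> " + row.getUUID("host_id"));
        }
    }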

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Frank Hughes
Thanks for the reply. Much appreciated. I should have included more detail. So I am using replication factor 2, and the code is using a token-aware method of distributing the work so that only data that is primarily owned by the node is read on that local machine. So I guess this points to the
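A hedged sketch of the token-range style of work splitting Frank describes (keyspace ks, table docs, and partition key id are hypothetical names):

    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;

    // Read only the rows whose partition token falls in (start, end],
    // so each worker scans ranges its local node primarily owns.
    ResultSet slice(Session session, long start, long end) {
        return session.execute(
                "SELECT * FROM ks.docs WHERE token(id) > ? AND token(id) <= ?",
                start, end);
    }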

Re: changing compaction strategy

2017-03-15 Thread kurt greaves
The rogue pending task is likely a non-issue. If your JMX command went through without errors and you got the log message, you can assume it worked. It won't show in the schema unless you run the ALTER statement, which affects the whole cluster. If you were switching from STCS then you wouldn't
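The cluster-wide ALTER kurt mentions, issued through the driver for illustration (the keyspace/table names and the target strategy are placeholders):

    // Schema-level change: applies to every node in the cluster,
    // unlike the per-node JMX override discussed above.
    session.execute("ALTER TABLE ks.events "
            + "WITH compaction = {'class': 'LeveledCompactionStrategy'}");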

Re: Internal Security - Authentication & Authorization

2017-03-15 Thread Sam Tunnicliffe
> Here is what I have pieced together. Please let me know if I am on the right track. You're more or less right regarding the built-in authenticator/authorizer/role manager (which are usually referred to as "internal" as they store their data in Cassandra tables). One important thing to note
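Since the internal implementations keep their state in Cassandra tables, one way to peek at it (a sketch; assumes a logged-in superuser and the Cassandra 2.2+/3.x system_auth schema):

    import com.datastax.driver.core.Row;

    // Roles and their flags live in the system_auth keyspace.
    for (Row row : session.execute(
            "SELECT role, is_superuser, can_login FROM system_auth.roles")) {
        System.out.println(row.getString("role")
                + " super=" + row.getBool("is_superuser"));
    }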

Re: Internal Security - Authentication & Authorization

2017-03-15 Thread kurt greaves
Jacob, it seems you are on the right track; however, my understanding is that only the user that was auth'd has their permissions/roles/creds cached. Also, Cassandra will query at QUORUM for the "cassandra" user, and at LOCAL_ONE for *all* other users. This is the same for creating users/roles.
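For context, the role creation kurt mentions happens through CQL like any other write; a sketch through the driver (the role name and password are placeholders, not a recommendation):

    // Creating a login role; executed by a user with CREATE permission.
    session.execute("CREATE ROLE IF NOT EXISTS app_user "
            + "WITH PASSWORD = 'changeme' AND LOGIN = true");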

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Ryan Svihla
LOCAL_ONE just means local to the datacenter. By default the token-aware policy will go to a replica that owns that data (primary or any replica, depending on the driver), and that may or may not be the node the driver process is running on. So to put this more concretely: if you have RF 2 with that 4
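To make the consistency side concrete, a sketch of pinning LOCAL_ONE on a single statement with the 3.x driver (the query and id are placeholders):

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.SimpleStatement;

    // Per-statement consistency: only one replica in the local DC
    // needs to answer the read.
    SimpleStatement stmt =
            new SimpleStatement("SELECT * FROM ks.docs WHERE id = ?", id);
    stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
    session.execute(stmt);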

TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Frank Hughes
Hi there, I'm running a Java process on a 4 node Cassandra 3.9 cluster on EC2 (instance type t2.2xlarge), the process running separately on each of the nodes (i.e. 4 running JVMs). The process is just doing reads from Cassandra and building a SOLR index, using the Java driver with

Re: Slow repair

2017-03-15 Thread Ben Slater
When you say you’re running repair to “rebalance”, do you mean to populate the new DC? If so, the normal/correct procedure is to use nodetool rebuild rather than repair. See https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html for the full details. Cheers
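The rebuild Ben refers to is run on each node in the new DC, pointing at a DC that already holds the data (the DC name below is a placeholder):

    # Stream data into this node from the named existing datacenter.
    nodetool rebuild -- existing_dc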

Slow repair

2017-03-15 Thread Gábor Auth
Hi, We are working with a two-DC Cassandra cluster (EU and US), where the latency between them is over 160 ms. I've added a new DC to this cluster, modified the keyspace's replication factor, and am trying to rebalance it with repair, but the repair is very slow (over 10-15 minutes per node per