Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Ryan Svihla
Give it a try and see how it behaves. On Mar 15, 2017 10:09 AM, "Frank Hughes" wrote: > Thanks Ryan, appreciated again. getPolicy just had this: > Policy policy = new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()); > so I guess I need > Policy policy =

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Frank Hughes
Thanks Ryan, appreciated again. getPolicy just had this: Policy policy = new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()); so I guess I need Policy policy = new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build(), false); Frank On 2017-03-15 13:45 (-), Ryan Svihla
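
[Editor's note] A minimal sketch of how this shuffle-false policy might be wired into the driver's Cluster builder, assuming the DataStax Java driver 3.x; the class name and contact point are placeholders, not taken from the thread:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.LoadBalancingPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    public class ClusterFactory {
        public static Cluster build(String contactPoint) {
            // Token-aware routing with replica shuffling disabled, so the driver
            // prefers the first replica for each token instead of spreading
            // requests across all replicas of the partition.
            LoadBalancingPolicy policy =
                    new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build(), false);

            return Cluster.builder()
                    .addContactPoint(contactPoint)
                    .withLoadBalancingPolicy(policy)
                    .build();
        }
    }

In later driver releases the boolean constructor was superseded by a ReplicaOrdering variant, but the two-argument form above is the one available in the driver versions current at the time of this thread.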

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Ryan Svihla
I don't see what getPolicy is retrieving, but you want to use TokenAware with the shuffle-false option in the constructor; it defaults to shuffle true so that load is spread when people have horribly fat partitions. On Wed, Mar 15, 2017 at 9:41 AM, Frank Hughes wrote: > Thanks

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Frank Hughes
Thanks for the reply. Much appreciated. I should have included more detail. I am using replication factor 2, and the code is using a token-aware method of distributing the work so that only data that is primarily owned by the node is read on that local machine. So I guess this points to the
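
[Editor's note] A sketch of one way such a per-node work split could be built on the driver's token metadata, assuming Java driver 3.x. The keyspace, table, and partition-key names are invented for illustration, and note that Metadata.getTokenRanges(keyspace, host) returns every range the host replicates, so with RF 2 the per-node sets overlap unless primary ranges are filtered separately:

    import java.net.InetAddress;
    import java.util.Set;

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Host;
    import com.datastax.driver.core.Metadata;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.TokenRange;

    public class LocalRangeReader {

        // Reads the token ranges held by the Cassandra node running on this
        // machine. Matching the local node by InetAddress.getLocalHost() may
        // need adjusting on EC2 (public vs. private addresses).
        public static void readLocalRanges(Cluster cluster, Session session) throws Exception {
            Metadata metadata = cluster.getMetadata();
            InetAddress localAddress = InetAddress.getLocalHost();

            Host localHost = null;
            for (Host h : metadata.getAllHosts()) {
                if (h.getAddress().equals(localAddress)) {
                    localHost = h;
                    break;
                }
            }
            if (localHost == null) {
                throw new IllegalStateException("No Cassandra node found on this machine");
            }

            Set<TokenRange> localRanges = metadata.getTokenRanges("my_keyspace", localHost);

            PreparedStatement ps = session.prepare(
                    "SELECT * FROM my_keyspace.my_table WHERE token(id) > ? AND token(id) <= ?");

            for (TokenRange range : localRanges) {
                // unwrap() splits ranges that wrap around the end of the ring
                for (TokenRange r : range.unwrap()) {
                    for (Row row : session.execute(ps.bind()
                            .setToken(0, r.getStart())
                            .setToken(1, r.getEnd()))) {
                        // feed row into the SOLR indexing pipeline here
                    }
                }
            }
        }
    }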

Re: TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Ryan Svihla
LOCAL_ONE just means local to the datacenter. By default the token-aware policy will go to a replica that owns that data (primary or any replica, depending on the driver), and that may or may not be the node the driver process is running on. So to put this more concretely, if you have RF 2 with that 4
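
[Editor's note] For completeness, the consistency level is set independently of the load-balancing policy; a minimal example of requesting LOCAL_ONE on a single statement, with placeholder keyspace and table names:

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    // LOCAL_ONE only requires one replica in the local datacenter to respond;
    // it does not force the read onto the node the driver process runs on.
    Statement stmt = new SimpleStatement("SELECT * FROM my_keyspace.my_table")
            .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);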

TransportException - Consistency LOCAL_ONE - EC2

2017-03-15 Thread Frank Hughes
Hi there, I'm running a java process on a 4 node Cassandra 3.9 cluster on EC2 (instance type t2.2xlarge), the process running separately on each of the nodes (i.e. 4 running JVMs). The process is just doing reads from Cassandra and building a SOLR index, using the java driver with