RE: Interesting use case

2016-06-08 Thread Peer, Oded
Why do you think the number of partitions is different in these tables? The 
partition key is the same (system_name and event_name). The number of rows per 
partition is different.



From: kurt Greaves [mailto:k...@instaclustr.com]
Sent: Thursday, June 09, 2016 7:52 AM
To: user@cassandra.apache.org
Subject: Re: Interesting use case

I would say it's probably due to a significantly larger number of partitions 
when using the overwrite method - but really you should be seeing similar 
performance unless one of the schemas ends up generating a lot more disk IO.
If you're planning to read the last N values for an event at the same time, the 
widerow schema would be better; otherwise, reading the latest value of N events 
with the overwrite schema will result in you hitting N partitions. You really 
need to take into account how you're going to read the data when you design a 
schema, not only how many writes you can push through.
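To make the read-path point concrete, here is a minimal Python-driver sketch 
against the two schemas quoted below (contact point, keyspace and event names 
are illustrative assumptions, not something from the thread):

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('events_ks')  # hypothetical keyspace

# Wide-row schema: the last N values for one event live in a single partition,
# newest first thanks to CLUSTERING ORDER BY (event_time DESC).
last_ten = session.execute(
    "SELECT event_time, event_value FROM eventvalue_widerow "
    "WHERE system_name = %s AND event_name = %s LIMIT %s",
    ('sys1', 'temperature', 10))

# Overwrite schema: one row per (system_name, event_name), so fetching the
# latest value of N different events means touching N partitions.
event_names = ['temperature', 'pressure', 'humidity']
latest = [
    session.execute(
        "SELECT event_time, event_value FROM eventvalue_overwrite "
        "WHERE system_name = %s AND event_name = %s",
        ('sys1', name)).one()
    for name in event_names
]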

On 8 June 2016 at 19:02, John Thomas wrote:
We have a use case where we are storing event data for a given system and only 
want to retain the last N values.  Storing extra values for some time, as long 
as it isn’t too long, is fine but never less than N.  We can't use TTLs to 
delete the data because we can't be sure how frequently events will arrive and 
could end up losing everything.  Is there any built in mechanism to accomplish 
this or a known pattern that we can follow?  The events will be read and 
written at a pretty high frequency so the solution would have to be performant 
and not fragile under stress.

We’ve played with a schema that just has N distinct columns with one value in 
each, but we have found that overwrites seem to perform much worse than wide 
rows.  The use case we tested only required that we store the most recent value:

CREATE TABLE eventvalue_overwrite (
system_name text,
event_name text,
event_time timestamp,
event_value blob,
PRIMARY KEY (system_name,event_name))

CREATE TABLE eventvalue_widerow (
system_name text,
event_name text,
event_time timestamp,
event_value blob,
PRIMARY KEY ((system_name, event_name), event_time))
WITH CLUSTERING ORDER BY (event_time DESC)

We tested it against the DataStax AMI on EC2 with 6 nodes, replication factor 3, 
write consistency 2, and default settings, using a write-only workload, and got 
190K writes/s for the wide-row schema and 150K writes/s for the overwrite 
schema.  Thinking through the write path, it seems the performance should be 
pretty similar, with probably smaller sstables for the overwrite schema. Can 
anyone explain the big difference?

The wide row solution is more complex in that it requires a separate clean up 
thread that will handle deleting the extra values.  If that’s the path we have 
to follow we’re thinking we’d add a bucket of some sort so that we can delete 
an entire partition at a time after copying some values forward, on the 
assumption that deleting the whole partition is much better than deleting some 
slice of the partition.  Is that true?  Also, is there any difference between 
setting a really short ttl and doing a delete?
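For concreteness, here is a rough sketch of the bucketed clean-up idea just 
described (the bucket scheme, table and column names are illustrative 
assumptions, not tested code):

# Hypothetical bucketed wide-row layout: the cleanup job copies the most recent
# N values into the new bucket, then drops the previous bucket in one go.
#
# CREATE TABLE eventvalue_bucketed (
#     system_name text,
#     event_name  text,
#     bucket      int,        -- e.g. an hour number or a rolling counter
#     event_time  timestamp,
#     event_value blob,
#     PRIMARY KEY ((system_name, event_name, bucket), event_time)
# ) WITH CLUSTERING ORDER BY (event_time DESC);

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('events_ks')  # assumptions

def roll_bucket(system_name, event_name, old_bucket, new_bucket, n=10):
    # Copy the last N values forward first, so they are never lost...
    rows = session.execute(
        "SELECT event_time, event_value FROM eventvalue_bucketed "
        "WHERE system_name = %s AND event_name = %s AND bucket = %s LIMIT %s",
        (system_name, event_name, old_bucket, n))
    for row in rows:
        session.execute(
            "INSERT INTO eventvalue_bucketed "
            "(system_name, event_name, bucket, event_time, event_value) "
            "VALUES (%s, %s, %s, %s, %s)",
            (system_name, event_name, new_bucket, row.event_time,
             row.event_value))
    # ...then drop the whole old partition: one partition-level tombstone
    # instead of many row tombstones.
    session.execute(
        "DELETE FROM eventvalue_bucketed "
        "WHERE system_name = %s AND event_name = %s AND bucket = %s",
        (system_name, event_name, old_bucket))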

I know there are a lot of questions in there but we’ve been going back and 
forth on this for a while and I’d really appreciate any help you could give.

Thanks,
John



--
Kurt Greaves
k...@instaclustr.com
www.instaclustr.com


Re: Change authorization from AllowAllAuthorizer to CassandraAuthorizer

2016-06-08 Thread Jai Bheemsen Rao Dhanwada
The C* version I am using is 2.1.13.

Cluster 1 - Single DC
Cluster 2 - Multi DC

On Wed, Jun 8, 2016 at 7:01 AM, Felipe Esteves <
felipe.este...@b2wdigital.com> wrote:

> Hi,
>
> Just a feedback from my scenario, it all went well, no downtime. In my
> case, I had authentication enabled from the beginning, just needed to
> change the authorizer.
>
> Felipe Esteves
>
> Tecnologia
>
> felipe.este...@b2wdigital.com 
>
> Tel.: (21) 3504-7162 ramal 57162
>
> Skype: felipe2esteves
>
> 2016-06-08 9:05 GMT-03:00 :
>
>> Which Cassandra version? Most of my
>> authentication-from-non-authentication experience is from Cassandra 1.1 –
>> 2.0. After that, I just enable from the beginning.
>>
>>
>>
>> Sean Durity – Lead Cassandra Admin
>>
>> Big DATA Team
>>
>> MTC 2250
>>
>> For support, create a JIRA
>> 
>>
>>
>>
>> *From:* Jai Bheemsen Rao Dhanwada [mailto:jaibheem...@gmail.com]
>> *Sent:* Tuesday, June 07, 2016 8:31 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: Change authorization from AllowAllAuthorizer to
>> CassandraAuthorizer
>>
>>
>>
>> Hello Sean,
>>
>>
>> Recently I tried to enable Authentication on an existing cluster, and I have
>> seen the behaviour below. (Clients already have the credentials and it is a
>> 3-node C* cluster.)
>>
>>
>>
>> cluster 1 - Enabled Authentication on node1 by adding iptable rules (so
>> that client will not communicate to this node) and I was able to connect to
>> cql with default user and create the required users.
>>
>>
>>
>> cluster 2- Enabled Authentication on node1 by adding iptable rules but
>> the default user was not created and below are the logs.
>>
>>
>>
>> WARN  [NonPeriodicTasks:1] 2016-06-07 20:59:17,898
>> PasswordAuthenticator.java:230 - PasswordAuthenticator skipped default user
>> setup: some nodes were not ready
>>
>> WARN  [NonPeriodicTasks:1] 2016-06-07 20:59:28,007 Auth.java:241 -
>> Skipped default superuser setup: some nodes were not ready
>>
>>
>>
>> Any idea why the behaviour is not consistent across the two clusters?
>>
>>
>>
>> P.S: In both the cases the *system_auth *keyspace was created when the
>> first node was updated.
>>
>>
>>
>> On Tue, Jun 7, 2016 at 11:19 AM, Felipe Esteves <
>> felipe.este...@b2wdigital.com> wrote:
>>
>> Thank you, Sean!
>>
>>
>> *Felipe Esteves*
>>
>> Tecnologia
>>
>> felipe.este...@b2wdigital.com 
>>
>> Tel.: (21) 3504-7162 ramal 57162
>>
>> Skype: felipe2esteves
>>
>>
>>
>> 2016-06-07 14:20 GMT-03:00 :
>>
>> I answered a similar question here:
>>
>> https://groups.google.com/forum/#!topic/nosql-databases/lLBebUCjD8Y
>>
>>
>>
>>
>>
>> Sean Durity – Lead Cassandra Admin
>>
>>
>>
>> *From:* Felipe Esteves [mailto:felipe.este...@b2wdigital.com]
>> *Sent:* Tuesday, June 07, 2016 12:07 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* Change authorization from AllowAllAuthorizer to
>> CassandraAuthorizer
>>
>>
>>
>> Hi guys,
>>
>>
>>
>> I have a Cassandra 2.1.8 Community cluster running with
>> AllowAllAuthorizer and have to change it, so I can implement permissions in
>> different users.
>>
>> As I've checked in the docs, seems like a simple change,
>> from AllowAllAuthorizer to CassandraAuthorizer in cassandra.yaml.
>>
>> However, I'm a little concerned about the performance of the cluster while
>> I'm restarting all the nodes. Is it possible to have any downtime (access
>> errors, maybe), as all the data was created with AllowAllAuthorizer?
>>
>> --
>>
>> *Felipe Esteves*
>>
>> Tecnologia
>>
>> felipe.este...@b2wdigital.com 
>>
>> Tel.: (21) 3504-7162 ramal 57162
>>
>>
>>
>>
>> --
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> --
>>

Re: Unable to use native library in C* trigger

2016-06-08 Thread Brian Kelly
Thank you very much! I am able to use the library now after adding it to the 
whitelist.

Brian

From: Ben Slater
Reply-To: "user@cassandra.apache.org"
Date: Tuesday, June 7, 2016 at 4:30 PM
To: "user@cassandra.apache.org"
Subject: Re: Unable to use native library in C* trigger

My guess would be it’s due to the UDF sandbox whitelist/blacklist 
(https://github.com/apache/cassandra/blob/5288d434b3b559c7006fa001a2dc56f4f4b2e2c3/src/java/org/apache/cassandra/cql3/functions/UDFunction.java).

As far as I’m aware there is currently no way of avoiding this (other than 
recompiling C* with a new whitelist, I guess). There is a JIRA for non-sandboxed 
UDFs: https://issues.apache.org/jira/browse/CASSANDRA-9892

Cheers
Ben

On Wed, 8 Jun 2016 at 01:07 Brian Kelly wrote:
Hi, all,

I am attempting to write a trigger that depends on a native library. The library 
is successfully loaded by the JVM (so the java.library.path should not be the 
issue - I just have the jnilib in lib/sigar-bin for now), but any call to its 
methods results in a java.lang.UnsatisfiedLinkError. This only happens in the 
Cassandra trigger; a standalone Java program has no issues using the library. 
Does anybody have an idea what may be causing this? Here is the output when the 
trigger fires:

cqlsh>  INSERT INTO test.test (key, value) values ('1', '1');
ServerError: 

And the debug.log:

ERROR [SharedPool-Worker-1] 2016-06-07 10:03:42,870 Message.java:611 - 
Unexpected exception during request; channel = [id: 0x3a5b79c3, 
L:/127.0.0.1:9042 - 
R:/127.0.0.1:53097]
java.lang.UnsatisfiedLinkError: 
org.apache.cassandra.triggers.AuditTrigger.callSomeNativeMethod()Ljava/lang/String;
at 
org.apache.cassandra.triggers.AuditTrigger.callSomeNativeMethod(Native Method) 
~[na:na]
at 
org.apache.cassandra.triggers.AuditTrigger.augment(AuditTrigger.java:44) 
~[na:na]
at 
org.apache.cassandra.triggers.TriggerExecutor.executeInternal(TriggerExecutor.java:229)
 ~[main/:na]
at 
org.apache.cassandra.triggers.TriggerExecutor.execute(TriggerExecutor.java:119) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:819)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:431)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:417)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:219) 
~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:204) 
~[main/:na]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
 ~[main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
 [main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
 [main/:na]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.36.Final.jar:4.0.36.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
 [netty-all-4.0.36.Final.jar:4.0.36.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.36.Final.jar:4.0.36.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
 [netty-all-4.0.36.Final.jar:4.0.36.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_25]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:106) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]

I am just running one, local node.

Thanks,
Brian
--

Ben Slater
Chief Product Officer, Instaclustr
+61 437 929 798


Re: Nodetool repair inconsistencies

2016-06-08 Thread Jason Kania
Hi Paul,
I have tried running 'nodetool compact' and the situation remains the same 
after I deleted the files that caused 'nodetool compact' to generate an 
exception in the first place.
My concern is that if I delete some sstable sets from a directory or even if I 
completely eliminate the sstables in a directory on one machine, run 'nodetool 
repair' followed by 'nodetool compact', that directory remains empty. My 
understanding has been that these equivalently named directories should contain 
roughly the same amount of content.
Thanks,
Jason

  From: Paul Fife 
 To: user@cassandra.apache.org; Jason Kania  
 Sent: Wednesday, June 8, 2016 12:55 PM
 Subject: Re: Nodetool repair inconsistencies
   
Hi Jason -
Did you run a major compaction after the repair completed? Do you have other 
reasons besides the number/size of sstables to believe all nodes don't have a 
copy of the current data at the end of the repair operation?
Thanks,
Paul
On Wed, Jun 8, 2016 at 8:12 AM, Jason Kania  wrote:

Hi Romain,
The problem is that there is no error to share. I am focusing on the 
inconsistency that when I run nodetool repair, I get no errors and yet the 
content in the same directory on the different nodes is vastly different. This 
lack of an error is the nature of my question, not the nodetool compact error.
Thanks,
Jason
  From: Romain Hardouin 
 To: "user@cassandra.apache.org" ; Jason Kania 
 
 Sent: Wednesday, June 8, 2016 8:30 AM
 Subject: Re: Nodetool repair inconsistencies
  
Hi Jason,
It's difficult for the community to help you if you don't share the error ;-)
What did the logs say when you ran a major compaction (i.e. the first error 
you encountered)?
Best,
Romain

On Wednesday, June 8, 2016 at 3:34 AM, Jason Kania wrote:
 

 I am running a 3 node cluster of 3.0.6 instances and encountered an error when 
running nodetool compact. I then ran nodetool repair. No errors were returned.
I then attempted to run nodetool compact again, but received the same error so 
the repair made no correction and reported no errors.
After that, I moved the problematic files out of the directory, restarted 
cassandra and attempted the repair again. The repair again completed without 
errors, however, no files were added to the directory that had contained the 
corrupt files. So nodetool repair does not seem to be making actual repairs.
I started looking around and numerous directories have vastly different amounts 
of content across the 3 nodes. There are 3 replicas so I would expect to find 
similar amounts of content in the same data directory on the different nodes.

Is there any way to dig deeper into this? I don't want to be caught because 
replication/repair is silently failing. I noticed that there is always a "some 
repair failed" message amongst the repair output, but that is so completely 
unhelpful and has always been present.

Thanks,
Jason


   

   



  

Re: Consistency level ONE and using withLocalDC

2016-06-08 Thread Alain RODRIGUEZ
Hi George,

Would that be correct?


I think it is actually quite the opposite :-).

It is very well explained here:
https://docs.datastax.com/en/drivers/java/2.0/com/datastax/driver/core/policies/DCAwareRoundRobinPolicy.Builder.html#withUsedHostsPerRemoteDc-int-

Connections are opened to the X nodes in each remote DC, but they will only be
used as a fallback, and only if the operation is not using a LOCAL_* consistency
level.
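
For reference, here is roughly what those two knobs look like in the Python
driver (a sketch only: contact point, DC name, keyspace and query are
assumptions, and the Java builder in the link above exposes the same options):

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy
from cassandra.query import SimpleStatement

# Keep connections to at most one host per remote DC as a last-resort fallback;
# see the explanation above for how LOCAL_* consistency levels interact with it.
policy = DCAwareRoundRobinPolicy(local_dc='myLocalDC',
                                 used_hosts_per_remote_dc=1)
cluster = Cluster(['10.0.0.1'], load_balancing_policy=policy)
session = cluster.connect('my_keyspace')

stmt = SimpleStatement(
    "SELECT * FROM my_table WHERE id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM)
row = session.execute(stmt, (42,)).one()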

Sorry I took so long to answer you.

---
Alain Rodriguez - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2016-05-20 17:54 GMT+02:00 George Sigletos :

> Hello,
>
> Using withLocalDC="myLocalDC" and withUsedHostsPerRemoteDc>0 will
> guarantee that you will connect to one of the nodes in "myLocalDC",
>
> but DOES NOT guarantee that your read/write request will be acknowledged
> by a "myLocalDC" node. It may well be acknowledged by a remote DC node as
> well, even if "myLocalDC" is up and running.
>
> Would that be correct? Thank you
>
> Kind regards,
> George
>


Re: Nodetool repair inconsistencies

2016-06-08 Thread Paul Fife
Hi Jason -

Did you run a major compaction after the repair completed? Do you have
other reasons besides the number/size of sstables to believe all nodes
don't have a copy of the current data at the end of the repair operation?

Thanks,
Paul

On Wed, Jun 8, 2016 at 8:12 AM, Jason Kania  wrote:

> Hi Romain,
>
> The problem is that there is no error to share. I am focusing on the
> inconsistency that when I run nodetool repair, I get no errors and yet the
> content in the same directory on the different nodes is vastly different.
> This lack of an error is the nature of my question, not the nodetool compact
> error.
>
> Thanks,
>
> Jason
>
> --
> *From:* Romain Hardouin 
> *To:* "user@cassandra.apache.org" ; Jason
> Kania 
> *Sent:* Wednesday, June 8, 2016 8:30 AM
> *Subject:* Re: Nodetool repair inconsistencies
>
> Hi Jason,
>
> It's difficult for the community to help you if you don't share the error
> ;-)
> What did the logs say when you ran a major compaction (i.e. the first error
> you encountered)?
>
> Best,
>
> Romain
>
> On Wednesday, June 8, 2016 at 3:34 AM, Jason Kania wrote:
>
>
> I am running a 3 node cluster of 3.0.6 instances and encountered an error
> when running nodetool compact. I then ran nodetool repair. No errors were
> returned.
>
> I then attempted to run nodetool compact again, but received the same
> error so the repair made no correction and reported no errors.
>
> After that, I moved the problematic files out of the directory, restarted
> cassandra and attempted the repair again. The repair again completed
> without errors, however, no files were added to the directory that had
> contained the corrupt files. So nodetool repair does not seem to be making
> actual repairs.
>
> I started looking around and numerous directories have vastly different
> amounts of content across the 3 nodes. There are 3 replicas so I would
> expect to find similar amounts of content in the same data directory on the
> different nodes.
>
> Is there any way to dig deeper into this? I don't want to be caught
> because replication/repair is silently failing. I noticed that there is
> always an "some repair failed" amongst the repair output but that is so
> completely unhelpful and has always been present.
>
> Thanks,
>
> Jason
>
>
>
>
>


Re: Lightweight Transactions during datacenter outage

2016-06-08 Thread Romain Hardouin
> Would you know why the driver doesn't automatically change to LOCAL_SERIAL 
> during a DC outage?
I would say because *you* decide, not the driver ;-) This kind of fallback 
could be achieved with a custom downgrading policy 
(DowngradingConsistencyRetryPolicy [*] doesn't handle ConsistencyLevel.SERIAL / 
LOCAL_SERIAL).
* 
https://github.com/datastax/python-driver/blob/2.7.2-cassandra-2.1/cassandra/policies.py#L747
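
A bare-bones sketch of that kind of application-level fallback with the Python
driver (this is not DowngradingConsistencyRetryPolicy; the table comes from the
thread below and the exception choices are assumptions):

from cassandra import (ConsistencyLevel, OperationTimedOut, Unavailable,
                       WriteTimeout)
from cassandra.query import SimpleStatement

def lwt_insert(session, k1, k2):
    stmt = SimpleStatement(
        "INSERT INTO test (k1, k2) VALUES (%s, %s) IF NOT EXISTS",
        consistency_level=ConsistencyLevel.LOCAL_QUORUM,
        serial_consistency_level=ConsistencyLevel.SERIAL)
    try:
        return session.execute(stmt, (k1, k2))
    except (Unavailable, OperationTimedOut, WriteTimeout):
        # The Paxos phase could not reach a cross-DC quorum, so *we* decide to
        # settle for agreement within the local DC only and retry.
        stmt.serial_consistency_level = ConsistencyLevel.LOCAL_SERIAL
        return session.execute(stmt, (k1, k2))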
Best,
Romain
 

On Wednesday, June 8, 2016 at 3:41 PM, Jeronimo de A. Barros wrote:
 

Tyler,

Thank you, it's working now:

self.query['online'] = SimpleStatement("UPDATE table USING ttl %s SET l = True 
WHERE k2 = %s IF l = False;",
consistency_level=ConsistencyLevel.LOCAL_QUORUM,
serial_consistency_level=ConsistencyLevel.LOCAL_SERIAL)

Would you know why the driver doesn't automatically change to LOCAL_SERIAL 
during a DC outage? Or does the driver already have an option to make this 
change from SERIAL to LOCAL_SERIAL?

Again, thank you very much, the bill for the beers is on me in September during 
the Cassandra Summit. ;-)

Best regards, Jero

On Tue, Jun 7, 2016 at 6:39 PM, Tyler Hobbs  wrote:

You can set the serial_consistency_level to LOCAL_SERIAL to tolerate a DC 
failure: 
http://datastax.github.io/python-driver/api/cassandra/query.html#cassandra.query.Statement.serial_consistency_level.
  It defaults to SERIAL, which ignores DCs.

On Tue, Jun 7, 2016 at 12:26 PM, Jeronimo de A. Barros wrote:

Hi,

I have a cluster spread among 2 datacenters (DC1 and DC2), two servers in each 
DC, and I have a keyspace with NetworkTopologyStrategy (DC1:2 and DC2:2) with 
the following table:

CREATE TABLE test (
  k1 int,
  k2 timeuuid,
  PRIMARY KEY ((k1), k2)
) WITH CLUSTERING ORDER BY (k2 DESC)

During a datacenter outage, as soon as a datacenter goes offline, I get this 
error during a lightweight transaction:

cqlsh:devtest> insert into test (k1,k2) values(1,now()) if not exists;
Request did not complete within rpc_timeout.

And a short time after, once the online DC verifies the second DC is offline:

cqlsh:devtest> insert into test (k1,k2) values(1,now()) if not exists;
Unable to complete request: one or more nodes were unavailable.

So, my question is: Is there any way to keep lightweight transactions working 
during a datacenter outage using the C* Python driver 2.7.2?

I was thinking about catching the exception and doing a simple insert (without 
"IF") when the error occurs, but having lightweight transactions working even 
during a DC outage/split would be nice.

Thanks in advance for any help/hints.

Best regards, Jero



-- 
Tyler Hobbs
DataStax




  

Re: Nodetool repair inconsistencies

2016-06-08 Thread Jason Kania
Hi Romain,
The problem is that there is no error to share. I am focusing on the 
inconsistency that when I run nodetool repair, I get no errors and yet the 
content in the same directory on the different nodes is vastly different. This 
lack of an error is the nature of my question, not the nodetool compact error.
Thanks,
Jason
  From: Romain Hardouin 
 To: "user@cassandra.apache.org" ; Jason Kania 
 
 Sent: Wednesday, June 8, 2016 8:30 AM
 Subject: Re: Nodetool repair inconsistencies
   
Hi Jason,
It's difficult for the community to help you if you don't share the error ;-)
What did the logs say when you ran a major compaction (i.e. the first error 
you encountered)?
Best,
Romain

On Wednesday, June 8, 2016 at 3:34 AM, Jason Kania wrote:
 

 I am running a 3 node cluster of 3.0.6 instances and encountered an error when 
running nodetool compact. I then ran nodetool repair. No errors were returned.
I then attempted to run nodetool compact again, but received the same error so 
the repair made no correction and reported no errors.
After that, I moved the problematic files out of the directory, restarted 
cassandra and attempted the repair again. The repair again completed without 
errors, however, no files were added to the directory that had contained the 
corrupt files. So nodetool repair does not seem to be making actual repairs.
I started looking around and numerous directories have vastly different amounts 
of content across the 3 nodes. There are 3 replicas so I would expect to find 
similar amounts of content in the same data directory on the different nodes.

Is there any way to dig deeper into this? I don't want to be caught because 
replication/repair is silently failing. I noticed that there is always a "some 
repair failed" message amongst the repair output, but that is so completely unhelpful 
and has always been present.

Thanks,
Jason


   

  

Re: Change authorization from AllowAllAuthorizer to CassandraAuthorizer

2016-06-08 Thread Felipe Esteves
Hi,

Just some feedback from my scenario: it all went well, no downtime. In my
case, I had authentication enabled from the beginning and just needed to
change the authorizer.
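
Once CassandraAuthorizer is active on all nodes, the per-user permissions
themselves are plain CQL; a small Python-driver sketch (the user, keyspace,
table and credentials below are placeholders):

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Connect as an existing superuser (PasswordAuthenticator must already be on).
cluster = Cluster(['127.0.0.1'],
                  auth_provider=PlainTextAuthProvider(username='cassandra',
                                                      password='cassandra'))
session = cluster.connect()

# Cassandra 2.1-style users and per-keyspace/table grants.
session.execute("CREATE USER app_reader WITH PASSWORD 's3cret' NOSUPERUSER")
session.execute("GRANT SELECT ON KEYSPACE my_keyspace TO app_reader")
session.execute("GRANT MODIFY ON TABLE my_keyspace.my_table TO app_reader")
print(list(session.execute("LIST ALL PERMISSIONS OF app_reader")))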

Felipe Esteves

Tecnologia

felipe.este...@b2wdigital.com 

Tel.: (21) 3504-7162 ramal 57162

Skype: felipe2esteves

2016-06-08 9:05 GMT-03:00 :

> Which Cassandra version? Most of my authentication-from-non-authentication
> experience is from Cassandra 1.1 – 2.0. After that, I just enable from the
> beginning.
>
>
>
> Sean Durity – Lead Cassandra Admin
>
> Big DATA Team
>
> MTC 2250
>
> For support, create a JIRA
> 
>
>
>
> *From:* Jai Bheemsen Rao Dhanwada [mailto:jaibheem...@gmail.com]
> *Sent:* Tuesday, June 07, 2016 8:31 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Change authorization from AllowAllAuthorizer to
> CassandraAuthorizer
>
>
>
> Hello Sean,
>
>
> Recently I tried to enable Authentication on an existing cluster, and I have
> seen the behaviour below. (Clients already have the credentials and it is a
> 3-node C* cluster.)
>
>
>
> cluster 1 - Enabled Authentication on node1 by adding iptable rules (so
> that client will not communicate to this node) and I was able to connect to
> cql with default user and create the required users.
>
>
>
> cluster 2- Enabled Authentication on node1 by adding iptable rules but the
> default user was not created and below are the logs.
>
>
>
> WARN  [NonPeriodicTasks:1] 2016-06-07 20:59:17,898
> PasswordAuthenticator.java:230 - PasswordAuthenticator skipped default user
> setup: some nodes were not ready
>
> WARN  [NonPeriodicTasks:1] 2016-06-07 20:59:28,007 Auth.java:241 - Skipped
> default superuser setup: some nodes were not ready
>
>
>
> Any idea why the behaviour is not consistent across the two clusters?
>
>
>
> P.S: In both the cases the *system_auth *keyspace was created when the
> first node was updated.
>
>
>
> On Tue, Jun 7, 2016 at 11:19 AM, Felipe Esteves <
> felipe.este...@b2wdigital.com> wrote:
>
> Thank you, Sean!
>
>
> *Felipe Esteves*
>
> Tecnologia
>
> felipe.este...@b2wdigital.com 
>
> Tel.: (21) 3504-7162 ramal 57162
>
> Skype: felipe2esteves
>
>
>
> 2016-06-07 14:20 GMT-03:00 :
>
> I answered a similar question here:
>
> https://groups.google.com/forum/#!topic/nosql-databases/lLBebUCjD8Y
>
>
>
>
>
> Sean Durity – Lead Cassandra Admin
>
>
>
> *From:* Felipe Esteves [mailto:felipe.este...@b2wdigital.com]
> *Sent:* Tuesday, June 07, 2016 12:07 PM
> *To:* user@cassandra.apache.org
> *Subject:* Change authorization from AllowAllAuthorizer to
> CassandraAuthorizer
>
>
>
> Hi guys,
>
>
>
> I have a Cassandra 2.1.8 Community cluster running with AllowAllAuthorizer
> and have to change it, so I can implement permissions in different users.
>
> As I've checked in the docs, seems like a simple change,
> from AllowAllAuthorizer to CassandraAuthorizer in cassandra.yaml.
>
> However, I'm a little concerned about the performance of the cluster while
> I'm restarting all the nodes. Is it possible to have any downtime (access
> errors, maybe), as all the data was created with AllowAllAuthorizer?
>
> --
>
> *Felipe Esteves*
>
> Tecnologia
>
> felipe.este...@b2wdigital.com 
>
> Tel.: (21) 3504-7162 ramal 57162
>
>
>
>
> --
>
>
>
>
>
>
>
>
>
> --
>

Re: Lightweight Transactions during datacenter outage

2016-06-08 Thread Jeronimo de A. Barros
Tyler,

Thank you, it's working now:

self.query['online'] = SimpleStatement("UPDATE table USING ttl %s SET l =
True WHERE k2 = %s IF l = False;",
consistency_level=ConsistencyLevel.LOCAL_QUORUM,
serial_consistency_level=ConsistencyLevel.LOCAL_SERIAL)

Would you know why the driver doesn't automatically change to LOCAL_SERIAL
during a DC outage? Or does the driver already have an option to make this
change from SERIAL to LOCAL_SERIAL?

Again, thank you very much, the bill for the beers is on me in September
during the Cassandra Summit. ;-)

Best regards, Jero


On Tue, Jun 7, 2016 at 6:39 PM, Tyler Hobbs  wrote:

> You can set the serial_consistency_level to LOCAL_SERIAL to tolerate a DC
> failure:
> http://datastax.github.io/python-driver/api/cassandra/query.html#cassandra.query.Statement.serial_consistency_level.
> It defaults to SERIAL, which ignores DCs.
>
> On Tue, Jun 7, 2016 at 12:26 PM, Jeronimo de A. Barros <
> jeronimo.bar...@gmail.com> wrote:
>
>> Hi,
>>
>> I have a cluster spread among 2 datacenters (DC1 and DC2), two servers
>> in each DC, and I have a keyspace with NetworkTopologyStrategy (DC1:2 and
>> DC2:2) with the following table:
>>
>> CREATE TABLE test (
>>   k1 int,
>>   k2 timeuuid,
>>   PRIMARY KEY ((k1), k2)
>> ) WITH CLUSTERING ORDER BY (k2 DESC)
>>
>> During a datacenter outage, as soon as a datacenter goes offline, I get
>> this error during a lightweight transaction:
>>
>> cqlsh:devtest> insert into test (k1,k2) values(1,now()) if not exists;
>> Request did not complete within rpc_timeout.
>>
>>
>> And a short time after, once the online DC verifies the second DC is offline:
>>
>> cqlsh:devtest> insert into test (k1,k2) values(1,now()) if not exists;
>> Unable to complete request: one or more nodes were unavailable.
>>
>>
>> So, my question is: Is there any way to keep lightweight transactions
>> working during a datacenter outage using the C* Python driver 2.7.2 ?
>>
>> I was thinking about catching the exception and doing a simple insert (without
>> "IF") when the error occurs, but having lightweight transactions working
>> even during a DC outage/split would be nice.
>>
>> Thanks in advance for any help/hints.
>>
>> Best regards, Jero
>>
>
>
>
> --
> Tyler Hobbs
> DataStax 
>


Re: Nodetool repair inconsistencies

2016-06-08 Thread Romain Hardouin
Hi Jason,
It's difficult for the community to help you if you don't share the error ;-)
What did the logs say when you ran a major compaction (i.e. the first error 
you encountered)?
Best,
Romain

On Wednesday, June 8, 2016 at 3:34 AM, Jason Kania wrote:
 

 I am running a 3 node cluster of 3.0.6 instances and encountered an error when 
running nodetool compact. I then ran nodetool repair. No errors were returned.
I then attempted to run nodetool compact again, but received the same error so 
the repair made no correction and reported no errors.
After that, I moved the problematic files out of the directory, restarted 
cassandra and attempted the repair again. The repair again completed without 
errors, however, no files were added to the directory that had contained the 
corrupt files. So nodetool repair does not seem to be making actual repairs.
I started looking around and numerous directories have vastly different amounts 
of content across the 3 nodes. There are 3 replicas so I would expect to find 
similar amounts of content in the same data directory on the different nodes.

Is there any way to dig deeper into this? I don't want to be caught because 
replication/repair is silently failing. I noticed that there is always a "some 
repair failed" message amongst the repair output, but that is so completely unhelpful 
and has always been present.

Thanks,
Jason


  

Re: How to remove 'compact storage' attribute?

2016-06-08 Thread Romain Hardouin
 
Hi,
You can't yet, see https://issues.apache.org/jira/browse/CASSANDRA-10857
Note that secondary indexes don't scale. Be aware of their limitations.
If you want to change the data model of a CF, a Spark job can do the trick.
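
For modest data volumes, a plain copy job with the Python driver can play the
same role as a Spark job; a rough sketch (keyspace, table and column names are
made up):

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(['127.0.0.1']).connect('my_keyspace')

# Target table created beforehand WITHOUT "COMPACT STORAGE", e.g.:
#   CREATE TABLE new_cf (key text, col1 text, col2 int, PRIMARY KEY (key));
rows = session.execute(
    SimpleStatement("SELECT key, col1, col2 FROM old_compact_cf",
                    fetch_size=1000))
for row in rows:
    session.execute(
        "INSERT INTO new_cf (key, col1, col2) VALUES (%s, %s, %s)",
        (row.key, row.col1, row.col2))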
Best,
Romain   

On Tuesday, June 7, 2016 at 10:51 AM, "Lu, Boying" wrote:
 

Hi All,

Since the Astyanax client has been EOL'd, we are considering migrating to the 
DataStax Java client in our product.

One thing I notice is that the CFs created by Astyanax have the 'compact 
storage' attribute, which prevents us from using some new features provided by 
CQL, such as secondary indexes.

Does anyone know how to remove this attribute? "ALTER TABLE" doesn't seem to 
work according to the CQL documentation.

Thanks

Boying

  

RE: Change authorization from AllowAllAuthorizer to CassandraAuthorizer

2016-06-08 Thread SEAN_R_DURITY
Which Cassandra version? Most of my authentication-from-non-authentication 
experience is from Cassandra 1.1 – 2.0. After that, I just enable from the 
beginning.

Sean Durity – Lead Cassandra Admin
Big DATA Team
MTC 2250
For support, create a 
JIRA

From: Jai Bheemsen Rao Dhanwada [mailto:jaibheem...@gmail.com]
Sent: Tuesday, June 07, 2016 8:31 PM
To: user@cassandra.apache.org
Subject: Re: Change authorization from AllowAllAuthorizer to CassandraAuthorizer

Hello Sean,

Recently I tried to enable Authentication on an existing cluster, and I have seen 
the behaviour below. (Clients already have the credentials and it is a 3-node C* cluster.)

cluster 1 - Enabled Authentication on node1 by adding iptable rules (so that 
client will not communicate to this node) and I was able to connect to cql with 
default user and create the required users.

cluster 2- Enabled Authentication on node1 by adding iptable rules but the 
default user was not created and below are the logs.

WARN  [NonPeriodicTasks:1] 2016-06-07 20:59:17,898 
PasswordAuthenticator.java:230 - PasswordAuthenticator skipped default user 
setup: some nodes were not ready
WARN  [NonPeriodicTasks:1] 2016-06-07 20:59:28,007 Auth.java:241 - Skipped 
default superuser setup: some nodes were not ready

Any idea why the behaviour is not consistent across the two clusters?

P.S: In both the cases the system_auth keyspace was created when the first node 
was updated.

On Tue, Jun 7, 2016 at 11:19 AM, Felipe Esteves wrote:
Thank you, Sean!


Felipe Esteves

Tecnologia

felipe.este...@b2wdigital.com

Tel.: (21) 3504-7162 ramal 57162

Skype: felipe2esteves

2016-06-07 14:20 GMT-03:00 :
I answered a similar question here:
https://groups.google.com/forum/#!topic/nosql-databases/lLBebUCjD8Y


Sean Durity – Lead Cassandra Admin

From: Felipe Esteves 
[mailto:felipe.este...@b2wdigital.com]
Sent: Tuesday, June 07, 2016 12:07 PM
To: user@cassandra.apache.org
Subject: Change authorization from AllowAllAuthorizer to CassandraAuthorizer

Hi guys,

I have a Cassandra 2.1.8 Community cluster running with AllowAllAuthorizer and 
have to change it, so I can implement permissions in different users.
As I've checked in the docs, seems like a simple change, from 
AllowAllAuthorizer to CassandraAuthorizer in cassandra.yaml.
However, I'm a little concerned about the performance of the cluster while I'm 
restarting all the nodes. Is it possible to have any downtime (access errors, 
maybe), as all the data was created with AllowAllAuthorizer?
--

Felipe Esteves

Tecnologia

felipe.este...@b2wdigital.com

Tel.: (21) 3504-7162 ramal 57162









