Re: Can someone invite me to the apache cassandra slack channel?

2023-05-06 Thread Sam Tunnicliffe
I've sent the invite. I think it may need an admin to approve it, so if you 
haven't received it yet, it should be on its way.

> On 6 May 2023, at 06:52, Const Eust  wrote:
> 
> https://infra.apache.org/slack.html
> 
> The directions say someone with apache.org  powers needs 
> to do it. 
> 
> I lost my job recently and I was in the slack with my work account.



[RELEASE] Apache Cassandra 4.0.1 released

2021-09-07 Thread Sam Tunnicliffe


The Cassandra team is pleased to announce the release of Apache Cassandra 
version 4.0.1.

Apache Cassandra is a fully distributed database. It is the right choice when 
you need scalability and high availability without compromising performance.

 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download section:

 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 4.0 series. As always, please pay 
attention to the release notes[2] and let us know[3] if you encounter any 
problems.

Enjoy!

[1]: CHANGES.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-4.0.1
[2]: NEWS.txt 
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-4.0.1
[3]: https://issues.apache.org/jira/browse/CASSANDRA



Re: Issue with native protocol

2021-07-29 Thread Sam Tunnicliffe
Assuming that the one node doesn't have 
native_transport_max_negotiable_protocol_version=3 in cassandra.yaml, you could 
check its log for 
"Detected peers which do not fully support protocol V4. Capping max negotiable 
version to V3". 

The details are in CASSANDRA-15193, but the tl;dr is that a serialisation bug 
affecting paging in mixed-version clusters means it is not ideal to support V4 
in a cluster containing both 2.x and 3.x nodes. Each 3.x node determines the 
max protocol version it should support based on the advertised versions of its 
peers. It's possible that the affected node missed an update regarding one of 
its peers and so is incorrectly enforcing the cap. If that is the case, 
restarting that node should prompt it to re-evaluate the cap.
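
The capping behaviour described above can be sketched roughly as follows. This 
is an illustration only, not the actual CASSANDRA-15193 code: the function 
name, the version-string parsing, and the inputs are all simplifications.

```python
# Illustrative sketch: each 3.x node inspects the release versions its peers
# advertise and caps the max negotiable native protocol version at V3 when
# any pre-3.0 peer is present (see CASSANDRA-15193 for the real logic).
def max_negotiable_version(peer_release_versions, ceiling=4):
    """Return the highest native protocol version this node should offer."""
    for v in peer_release_versions:
        major = int(v.split(".")[0])
        if major < 3:                # a 2.x peer is in the cluster
            return min(ceiling, 3)   # cap at V3
    return ceiling

# Mixed 2.1/3.11 cluster: the cap applies
print(max_negotiable_version(["2.1.20", "3.11.4"]))  # 3
# Homogeneous 3.x cluster: full V4 is offered
print(max_negotiable_version(["3.11.4", "3.0.18"]))  # 4
```

If a node's view of a peer is stale, it may keep returning the capped value, 
which is why a restart (forcing a fresh look at peer state) can clear it.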



> On 29 Jul 2021, at 07:54, Erick Ramirez  wrote:
> 
> Thanks, Pekka. But we know from an earlier post from Srinivas that the driver 
> is trying to negotiate with v4 but the node wouldn't:
> 
> [2021-07-09 23:26:52.382 -0700]  
> com.datastax.driver.core.Connection - DEBUG: Got unsupported protocol version 
> error from /: for version V4 server supports version V3
> [2021-07-09 23:26:52.382 -0700]  
> com.datastax.driver.core.Connection - DEBUG: Connection[//: -1, 
> inFlight=0, closed=true] closing connection
> [2021-07-09 23:26:52.382 -0700]  
> com.datastax.driver.core.Host.STATES - DEBUG: [//:] 
> Connection[/10.39.38.166:9042-1, inFlight=0, closed=true] closed, remaining = > 0
> [2021-07-09 23:26:52.383 -0700]  com.datastax.driver.core.Cluster - 
> DEBUG: Cannot connect with protocol V4, trying V3
> 
> So we know it's just the one problematic node in the cluster which won't 
> negotiate. The SHOW VERSION in cqlsh also indicates v3 but I can't figure out 
> what could be triggering it. Cheers!



Re: 4.0 best feature/fix?

2021-05-07 Thread Sam Tunnicliffe
That's a driver error using protocol V5, which is the default from 4.0-rc1 but 
only recently added to the drivers. Can you try specifying protocol V4 with all 
the same parameters? Also, if it's at all possible (which it may not be, given 
the divergence between driver versions 3 & 4), could you try with protocol V5 
and driver version 3.11.0?

Thanks,
Sam


> On 7 May 2021, at 16:12, Joe Obernberger  wrote:
> 
> I can retry Java 11.
> 
> I am seeing this error a lot - still debugging, but I'll throw it out there - 
> using 4.11.1 driver and a 4 node RC1 cluster.  I'm seeing warning in the 
> cassandra logs about slow queries, but no errors.  This error is client side.
> 
> Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: All 4 
> node(s) tried for the query failed (showing first 3 nodes, use getAllErrors() 
> for more): Node(endPoint=/172.16.100.39:9042, 
> hostId=93f9cb0f-ea71-4e3d-b62a-f0ea0e888c47, hashCode=345a8431): 
> [java.lang.NullPointerException], Node(endPoint=/172.16.100.36:9042,   
> hostId=d9702f96-256e-45ae-8e12-69a42712be50, hashCode=4c7ac5bb): 
> [java.lang.NullPointerException], Node(endPoint=chaos/172.16.100.37:9042, 
> hostId=08a19658-40be-4e55-8709-812b3d4ac750, hashCode=7ba07f0e): 
> [java.lang.NullPointerException]
> at 
> com.datastax.oss.driver.api.core.AllNodesFailedException.copy(AllNodesFailedException.java:141)
> at 
> com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
> at 
> com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
> at 
> com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
> at 
> com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:230)
> at 
> com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:54)
> at 
> com.ngc.helios.heliosingestservice.IngestService.splitOrigData(IngestService.java:55)
> at 
> com.ngc.helios.heliosingestservice.IngestService_ClientProxy.splitOrigData(IngestService_ClientProxy.zig:157)
> at 
> com.ngc.helios.heliosingestservice.IngestResource.splitOrigData(IngestResource.java:27)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at 
> org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:170)
> at 
> org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:130)
> at 
> org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:643)
> at 
> org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:507)
> at 
> org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$2(ResourceMethodInvoker.java:457)
> at 
> org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364)
> at 
> org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:459)
> at 
> org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:419)
> at 
> org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:393)
> at 
> org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:68)
> at 
> org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:492)
> ... 15 more
> Suppressed: java.lang.NullPointerException
> at 
> com.datastax.oss.protocol.internal.PrimitiveSizes.sizeOfShortBytes(PrimitiveSizes.java:59)
> at 
> com.datastax.oss.protocol.internal.request.Execute$Codec.encodedSize(Execute.java:78)
> at 
> com.datastax.oss.protocol.internal.FrameCodec.encodedBodySize(FrameCodec.java:272)
> at 
> com.datastax.oss.protocol.internal.SegmentBuilder.addFrame(SegmentBuilder.java:75)
> at 
> com.datastax.oss.driver.internal.core.protocol.FrameToSegmentEncoder.write(FrameToSegmentEncoder.java:56)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:709)
> at 
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:792)
> at 
> 

Re: CVE-2020-13946 Apache Cassandra RMI Rebind Vulnerability

2020-09-02 Thread Sam Tunnicliffe
Hi Manish,

Unfortunately, as far as I'm aware, there is not.

Thanks,
Sam

> On 2 Sep 2020, at 04:14, manish khandelwal  
> wrote:
> 
> Hi Sam
> 
> Is there any alternative to avoid this vulnerability? Like upgrade to 
> specific JVM version.
> 
> Regards
> Manish
> 
> On Tue, Sep 1, 2020 at 8:03 PM Sam Tunnicliffe  <mailto:s...@beobal.com>> wrote:
> CVE-2020-13946 Apache Cassandra RMI Rebind Vulnerability
> 
> Versions Affected:
> All versions prior to: 2.1.22, 2.2.18, 3.0.22, 3.11.8 and 4.0-beta2
> 
> Description:
> It is possible for a local attacker without access to the Apache Cassandra 
> process or configuration files to manipulate the RMI registry to perform a 
> man-in-the-middle attack and capture user names and passwords used to access 
> the JMX interface. The attacker can then use these credentials to access the 
> JMX interface and perform unauthorised operations.
> Users should also be aware of CVE-2019-2684, a JRE vulnerability that enables 
> this issue to be exploited remotely.
> 
> Mitigation:
> 2.1.x users should upgrade to 2.1.22
> 2.2.x users should upgrade to 2.2.18
> 3.0.x users should upgrade to 3.0.22
> 3.11.x users should upgrade to 3.11.8
> 4.0-beta1 users should upgrade to 4.0-beta2
> 
> 



CVE-2020-13946 Apache Cassandra RMI Rebind Vulnerability

2020-09-01 Thread Sam Tunnicliffe
CVE-2020-13946 Apache Cassandra RMI Rebind Vulnerability

Versions Affected:
All versions prior to: 2.1.22, 2.2.18, 3.0.22, 3.11.8 and 4.0-beta2

Description:
It is possible for a local attacker without access to the Apache Cassandra 
process or configuration files to manipulate the RMI registry to perform a 
man-in-the-middle attack and capture user names and passwords used to access 
the JMX interface. The attacker can then use these credentials to access the 
JMX interface and perform unauthorised operations.
Users should also be aware of CVE-2019-2684, a JRE vulnerability that enables 
this issue to be exploited remotely.

Mitigation:
2.1.x users should upgrade to 2.1.22
2.2.x users should upgrade to 2.2.18
3.0.x users should upgrade to 3.0.22
3.11.x users should upgrade to 3.11.8
4.0-beta1 users should upgrade to 4.0-beta2




Re: Impact of enabling authentication on performance

2020-06-04 Thread Sam Tunnicliffe
Passwords are hashed using bcrypt, which performs a configurable number of 
hashing rounds on the input. The more rounds, the more computationally 
expensive the hashing, and so the more effort required to defeat it by brute 
force. By default, Cassandra hashes with 2^10 rounds, but this can be set 
anywhere between 2^4 and 2^31; the trade-off is that a lower number of rounds 
is technically less secure but puts less strain on the servers, particularly if 
you have a lot of short-lived client connections and/or thundering-herd issues. 

To override the default, set a system property, which can be added to 
jvm-server.options, e.g.:

-Dcassandra.auth_bcrypt_gensalt_log2_rounds=4 

Bcrypt encodes the number of rounds used to generate a hash in the hash itself, 
so existing passwords will continue to work; they just won't benefit from the 
reduced cost. See https://issues.apache.org/jira/browse/CASSANDRA-8085 for 
(slightly) more info.
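
The self-describing format is easy to see by splitting a hash from 
system_auth.roles. The field layout below follows the standard bcrypt 
modular-crypt format; the parser itself is just an illustration, not anything 
Cassandra ships.

```python
# A bcrypt hash string carries its own version, log2 cost (rounds) and salt,
# which is why hashes created under the old cost keep verifying after the
# rounds setting is lowered.
def parse_bcrypt(hash_string):
    _, version, cost, rest = hash_string.split("$")
    return {
        "version": version,        # e.g. "2a"
        "log2_rounds": int(cost),  # 2^cost hashing rounds
        "salt_b64": rest[:22],     # 128-bit salt, bcrypt base64
        "hash_b64": rest[22:],     # the digest itself
    }

parsed = parse_bcrypt(
    "$2a$10$7sXeNr3okw61oisR9pCyHeWEO3wPzx3w8r/LKwtDSW2Tt68f4KFmi")
print(parsed["log2_rounds"])  # 10, i.e. the default 2^10 rounds
```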


> On 4 Jun 2020, at 07:39, Gil Ganz  wrote:
> 
> Great advice guys, will check it out.
> Jeff, what do you mean exactly by dropping bcrypt rounds?
> 
> 
> On Wed, Jun 3, 2020 at 10:22 AM Alex Ott  > wrote:
> You can decrease this time for picking up the change by using lower number
> for credentials_update_interval_in_ms, roles_update_interval_in_ms &
> permissions_update_interval_in_ms 
> 
> Durity, Sean R  at "Tue, 2 Jun 2020 14:48:28 +" wrote:
>  DSR> To flesh this out a bit, I set roles_validity_in_ms and 
> permissions_validity_in_ms to
>  DSR> 360 (10 minutes). The default of 2000 is far too often for my use 
> cases. Usually I set
>  DSR> the RF for system_auth to 3 per DC. On a larger, busier cluster I have 
> set it to 6 per
>  DSR> DC. NOTE: if you set the validity higher, it may take that amount of 
> time before a change
>  DSR> in password or table permissions is picked up (usually less).
> 
> 
>  DSR> Sean Durity
> 
>  DSR> -Original Message-
>  DSR> From: Jeff Jirsa mailto:jji...@gmail.com>>
>  DSR> Sent: Tuesday, June 2, 2020 2:39 AM
>  DSR> To: user@cassandra.apache.org 
>  DSR> Subject: [EXTERNAL] Re: Impact of enabling authentication on performance
> 
>  DSR> Set the Auth cache to a long validity
> 
>  DSR> Don’t go crazy with RF of system auth
> 
>  DSR> Drop bcrypt rounds if you see massive cpu spikes on reconnect storms
> 
> 
>  >> On Jun 1, 2020, at 11:26 PM, Gil Ganz  > wrote:
>  >>
>  >> 
>  >> Hi
>  >> I have a production 3.11.6 cluster which I'm might want to enable 
> authentication in, I'm trying to understand what will be the performance 
> impact, if any.
>  >> I understand each use case might be different, trying to understand if 
> there is a common % people usually see their performance hit, or if someone 
> has looked into this.
>  >> Gil
> 
>  DSR> -
>  DSR> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org 
> 
>  DSR> For additional commands, e-mail: user-h...@cassandra.apache.org 
> 
> 
> 
> 
> 
> 
> -- 
> With best wishes,Alex Ott
> Principal Architect, DataStax
> http://datastax.com/ 
> 

Re: Cannot replace_address /10.xx.xx.xx because it doesn't exist ingossip

2019-03-15 Thread Sam Tunnicliffe
Do you have a cassandra-topology.properties file in place? If so, GPFS will 
instantiate a PropertyFileSnitch using that for compatibility mode. Then, when 
gossip state doesn’t contain any endpoint info about the down node (because you 
bounced the whole cluster), instead of reading the rack & dc from system.peers, 
it will fall back to the PFS. DC1:r1 is the default in the 
cassandra-topology.properties file in the distro.
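
The lookup order described above can be sketched roughly like this. This is an 
illustration only, not the actual GossipingPropertyFileSnitch code: live gossip 
state wins; for an endpoint with no gossip info, system.peers would normally 
supply the stored rack/DC, but when a cassandra-topology.properties file is 
present, compatibility mode means the PropertyFileSnitch default (DC1:r1) is 
returned instead.

```python
# Rough sketch of the rack/DC resolution order (illustrative only).
def locate(endpoint, gossip, peers, pfs_present=True,
           pfs_default=("DC1", "r1")):
    if endpoint in gossip:
        return gossip[endpoint]    # endpoint state known via gossip
    if pfs_present:
        return pfs_default         # compatibility mode shadows system.peers
    return peers.get(endpoint, pfs_default)

gossip = {"172.19.0.5": ("dc1", "rack1")}   # live nodes after the restart
peers = {"172.19.0.8": ("dc1", "rack2")}    # stored info, but shadowed by PFS
print(locate("172.19.0.8", gossip, peers))  # ('DC1', 'r1')
```

This matches the symptom in the thread: after a full-cluster bounce, the down 
node reappears under the default "DC1"/"r1" instead of its real datacenter.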

> On 15 Mar 2019, at 12:04, Jeff Jirsa  wrote:
> 
> Is this using GPFS?  If so, can you open a JIRA? It feels like potentially 
> GPFS is not persisting the rack/DC info into system.peers and loses the DC on 
> restart. This is somewhat understandable, but definitely deserves a JIRA. 
> 
> On Thu, Mar 14, 2019 at 11:44 PM Stefan Miklosovic 
>  > wrote:
> Hi Fd,
> 
> I tried this on 3 nodes cluster. I killed node 2, both node1 and node3 
> reported node2 to be DN, then I killed node1 and node3 and I restarted them 
> and node2 was reported like this:
> 
> [root@spark-master-1 /]# nodetool status
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens   Owns (effective)  Host ID 
>   Rack
> DN  172.19.0.8  ?  256  64.0% 
> bd75a5e2-2890-44c5-8f7a-fca1b4ce94ab  r1
> Datacenter: dc1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens   Owns (effective)  Host ID 
>   Rack
> UN  172.19.0.5  382.75 KiB  256  64.4% 
> 2a062140-2428-4092-b48b-7495d083d7f9  rack1
> UN  172.19.0.9  171.41 KiB  256  71.6% 
> 9590b791-ad53-4b5a-b4c7-b00408ed02dd  rack3
> 
> Prior to killing of node1 and node3, node2 was indeed marked as DN but it was 
> part of the "Datacenter: dc1" output where both node1 and node3 were.
> 
> But after killing both node1 and node3 (so cluster was totally down), after 
> restarting them, node2 was reported like that.
> 
> I do not know what is the difference here. Are gossiping data somewhere 
> stored on the disk? I would say so, otherwise there is no way how could node1 
> / node3 report 
> that node2 is down but at the same time I dont get why it is "out of the 
> list" where node1 and node3 are.
> 
> 
> On Fri, 15 Mar 2019 at 02:42, Fd Habash  > wrote:
> I can conclusively say, none of these commands were run. However, I think 
> this is  the likely scenario …
> 
>  
> 
> If you have a cluster of three nodes 1,2,3 …
> 
> If 3 shows as DN
> Restart C* on 1 & 2
> Nodetool status should NOT show node 3 IP at all.
>  
> 
> Restarting the cluster while a node is down resets gossip state.
> 
>  
> 
> There is a good chance this is what happened.
> 
>  
> 
> Plausible?
> 
>  
> 
> 
> Thank you
> 
>  
> 
> From: Jeff Jirsa 
> Sent: Thursday, March 14, 2019 11:06 AM
> To: cassandra 
> Subject: Re: Cannot replace_address /10.xx.xx.xx because it doesn't exist 
> ingossip
> 
>  
> 
> Two things that wouldn't be a bug:
> 
>  
> 
> You could have run removenode
> 
> You could have run assassinate
> 
>  
> 
> Also could be some new bug, but that's much less likely. 
> 
>  
> 
>  
> 
> On Thu, Mar 14, 2019 at 2:50 PM Fd Habash  > wrote:
> 
> I have a node which I know for certain was a cluster member last week. It 
> showed in nodetool status as DN. When I attempted to replace it today, I got 
> this message
> 
>  
> ERROR [main] 2019-03-14 14:40:49,208 CassandraDaemon.java:654 - Exception 
> encountered during startup
> 
> java.lang.RuntimeException: Cannot replace_address /10.xx.xx.xxx.xx because 
> it doesn't exist in gossip
> 
> at 
> org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:449)
>  ~[apache-cassandra-2.2.8.jar:2.2.8]
> 
>  
>  
> DN  10.xx.xx.xx  388.43 KB  256  6.9%  
> bdbd632a-bf5d-44d4-b220-f17f258c4701  1e
> 
>  
> Under what conditions does this happen?
> 
>  
>  
> 
> Thank you
> 
>  
>  
> 
> 
> Stefan Miklosovic
> 



Re: Query failure

2019-03-14 Thread Sam Tunnicliffe
Hi Leo

My guess would be that your configuration is not consistent across all nodes in 
the cluster. The responses you’re seeing are totally indicative of being 
connected to a node where PasswordAuthenticator is not enabled in 
cassandra.yaml. 

Thanks,
Sam

> On 14 Mar 2019, at 10:56, Léo FERLIN SUTTON  
> wrote:
> 
> Hello !
> 
> Recently I have noticed some clients are having errors almost every time they 
> try to contact my Cassandra cluster.
> 
> The error messages vary but there is one constant : It's not constant ! Let 
> me show you : 
> 
> From the client host : 
> 
> `cqlsh  --cqlversion "3.4.0" -u cassandra_superuser -p my_password 
> cassandra_address 9042`
> 
> The CL commands will fail half of the time :
> 
> ```
> cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD = 'leo4' 
> AND LOGIN=TRUE;
> InvalidRequest: Error from server: code=2200 [Invalid query] 
> message="org.apache.cassandra.auth.CassandraRoleManager doesn't support 
> PASSWORD"
> cassandra_vault_superuser@cqlsh> CREATE ROLE leo333 WITH PASSWORD = 'leo4' 
> AND LOGIN=TRUE;
> ```
> 
> Same with grants : 
> ```
> cassandra_vault_superuser@cqlsh> GRANT read_write_role TO leo333;
> Unauthorized: Error from server: code=2100 [Unauthorized] message="You have 
> to be logged in and not anonymous to perform this request"
> cassandra_vault_superuser@cqlsh> GRANT read_write_role TO leo333;
> ```
> 
> Same with `list roles` : 
> ```
> cassandra_vault_superuser@cqlsh> list roles;
> 
>  role | super | login | 
> options
> --+---+---+-
> cassandra |  True |  True |   
>  {}
> [...]
> 
> cassandra_vault_superuser@cqlsh> list roles;
> Unauthorized: Error from server: code=2100 [Unauthorized] message="You have 
> to be logged in and not anonymous to perform this request"
> ```
> 
> My Cassandra  (3.0.18) configuration seems correct : 
> ```
> authenticator: PasswordAuthenticator
> authorizer: CassandraAuthorizer
> role_manager: CassandraRoleManager
> ```
> 
> The system_auth schema seems correct as well : 
> `CREATE KEYSPACE system_auth WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'my_dc': '3'}  AND durable_writes = true;`
> 
> 
> I am only having those errors when : 
> 
>   * I am on a non local client. 
>   * Via `cqlsh`
>   * Or via the vaultproject client 
> (https://www.vaultproject.io/docs/secrets/databases/cassandra.html 
> ) (1 error 
> occurred: You have to be logged in and not anonymous to perform this request)
> 
> If I am using cqlsh (with authentification) but from a Cassandra node it 
> works 100% of the time.
> 
> Any ideas about what might be going wrong?
> 
> Regards,
> 
> Leo
> 



Re: Unexpected error during query + Operation timed out

2019-01-18 Thread Sam Tunnicliffe
There’s a timeout happening when querying the system tables in the auth 
subsystem. The timeout message "Operation timed out - received only 2 
responses” indicates that you’re logging in as the default “cassandra” 
superuser, as queries for all other users are performed at LOCAL_ONE 
consistency. Best practice is to use the default superuser only during initial 
cluster setup and disable it as soon as you’ve configured your own users and 
permissions. You should also check the replication options for the system_auth 
keyspace. In particular if you have a multi-dc setup, make sure your 
replication options reflect that (i.e. using NetworkTopologyStrategy).
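
As a hedged example of that advice, the statement below is what a multi-DC 
system_auth replication change could look like. The datacenter names ("dc1", 
"dc2") are placeholders; use the names shown by nodetool status, and run a 
repair of the keyspace afterwards so the new replicas actually receive the 
auth data.

```python
# Build the suggested ALTER KEYSPACE statement for system_auth with
# NetworkTopologyStrategy (sketch; DC names/RFs are illustrative).
def system_auth_replication(rf_per_dc):
    opts = ", ".join(f"'{dc}': {rf}" for dc, rf in sorted(rf_per_dc.items()))
    return ("ALTER KEYSPACE system_auth WITH replication = "
            f"{{'class': 'NetworkTopologyStrategy', {opts}}};")

print(system_auth_replication({"dc1": 3, "dc2": 3}))
# ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};
```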

I would suggest doing that as soon as possible, but in the meantime you can 
also: 

* Check for down or unresponsive nodes which are causing the queries for the 
auth data to time out. Reads for the default superuser are done at QUORUM, so 
you may be crossing WAN boundaries if you have a multi-dc setup.
* Increase the validity period of the various auth caches. These can be set 
permanently in cassandra.yaml, or temporarily via JMX - see 
http://cassandra.apache.org/doc/latest/operating/security.html 


The actual timeout is happening when the superuser status is being queried. In 
2.1, this status isn’t cached but once permissions for a user have been cached, 
it won’t be queried again until the cached permissions expire.

Thanks,
Sam


> On 18 Jan 2019, at 14:20, rabii lamriq  wrote:
> 
> Hi,
> 
> We have some Timeout frequently in our node cassandra, can any one help me to 
> analyse what can be the cause.
> 
> 
> INFO  [SlabPoolCleaner] 2018-12-27 05:07:01,836 ColumnFamilyStore.java:1197 - 
> Flushing largest CFS(Keyspace='db_cass_agregats', 
> ColumnFamily='customers_metrics') to free up room. Used total: 0.45/0.00, 
> live: 0.33/0.00, flushing: 0.11/0.00, this: 0.11/0.11
> INFO  [SlabPoolCleaner] 2018-12-27 05:07:01,836 ColumnFamilyStore.java:905 - 
> Enqueuing flush of customers_metrics: 239025519 (11%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:55844] 2018-12-27 05:07:01,853 Memtable.java:347 - 
> Writing Memtable-customers_metrics@2086879045(189.254MiB serialized bytes, 
> 163048 ops, 11%/0% of on/off-heap limit)
> INFO  [MemtableFlushWriter:55845] 2018-12-27 05:07:02,398 Memtable.java:382 - 
> Completed flushing 
> /var/opt/data/flat/bdfcas/files/data/db_cass_agregats/customers_metrics-af4be81092db11e78b314125c9bc7cec/db_cass_agregats-customers_metrics-tmp-ka-140634-Data.db
>  (21.863MiB) for commitlog position ReplayPosition(segmentId=1525681566111, 
> position=14599365)
> INFO  [MemtableFlushWriter:55844] 2018-12-27 05:07:02,912 Memtable.java:382 - 
> Completed flushing 
> /var/opt/data/flat/bdfcas/files/data/db_cass_agregats/customers_metrics-af4be81092db11e78b314125c9bc7cec/db_cass_agregats-customers_metrics-tmp-ka-140635-Data.db
>  (21.556MiB) for commitlog position ReplayPosition(segmentId=1525681566117, 
> position=19478541)
> INFO  [SlabPoolCleaner] 2018-12-27 05:07:03,034 ColumnFamilyStore.java:1197 - 
> Flushing largest CFS(Keyspace='db_cass_agregats', 
> ColumnFamily='customers_metrics') to free up room. Used total: 0.45/0.00, 
> live: 0.33/0.00, flushing: 0.11/0.00, this: 0.11/0.11
> INFO  [SlabPoolCleaner] 2018-12-27 05:07:03,035 ColumnFamilyStore.java:905 - 
> Enqueuing flush of customers_metrics: 239127661 (11%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:55845] 2018-12-27 05:07:03,047 Memtable.java:347 - 
> Writing Memtable-customers_metrics@962624348(189.403MiB serialized bytes, 
> 163122 ops, 11%/0% of on/off-heap limit)
> INFO  [SlabPoolCleaner] 2018-12-27 05:07:04,113 ColumnFamilyStore.java:1197 - 
> Flushing largest CFS(Keyspace='db_cass_agregats', 
> ColumnFamily='customers_metrics') to free up room. Used total: 0.45/0.00, 
> live: 0.33/0.00, flushing: 0.11/0.00, this: 0.11/0.11
> INFO  [SlabPoolCleaner] 2018-12-27 05:07:04,113 ColumnFamilyStore.java:905 - 
> Enqueuing flush of customers_metrics: 239049872 (11%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:55844] 2018-12-27 05:07:04,123 Memtable.java:347 - 
> Writing Memtable-customers_metrics@219594445(189.231MiB serialized bytes, 
> 163174 ops, 11%/0% of on/off-heap limit)
> INFO  [MemtableFlushWriter:55845] 2018-12-27 05:07:04,210 Memtable.java:382 - 
> Completed flushing 
> /var/opt/data/flat/bdfcas/files/data/db_cass_agregats/customers_metrics-af4be81092db11e78b314125c9bc7cec/db_cass_agregats-customers_metrics-tmp-ka-140636-Data.db
>  (21.873MiB) for commitlog position ReplayPosition(segmentId=1525681566123, 
> position=24516704)
> INFO  [MemtableFlushWriter:55844] 2018-12-27 05:07:05,168 Memtable.java:382 - 
> Completed flushing 
> /var/opt/data/flat/bdfcas/files/data/db_cass_agregats/customers_metrics-af4be81092db11e78b314125c9bc7cec/db_cass_agregats-customers_metrics-tmp-ka-140637-Data.db
>  (21.613MiB) for commitlog position ReplayPosition(segmentId=1525681566129, 
> 

Re: system_auth keyspace replication factor

2018-11-26 Thread Sam Tunnicliffe
> I suspect some of the intermediate queries (determining role, etc) happen at 
> quorum in 2.2+, but I don’t have time to go read the code and prove it. 

This isn’t true. Aside from when using the default superuser, only 
CRM::getAllRoles reads at QUORUM (because the resultset would include the 
default superuser if present). This is only called during execution of a LIST 
ROLES statement and isn’t on the login path.

From the driver log you can see that the actual authentication exchange happens 
quickly, so I’d say that the problem described in CSHARP-436 is a more likely 
candidate. 

> Sadly, this recommendation is out of date / incorrect.  For `system_auth` we 
> are mostly using a formula like `RF=min(num_dc_nodes, 5)` and see no issues.

+1 to that, RF=N is way over the top. 
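
The rule of thumb quoted above, expressed as code (a trivial sketch, just to 
make the formula concrete):

```python
# system_auth replication factor per datacenter: number of nodes in the DC,
# capped at 5, rather than RF=N.
def system_auth_rf(num_dc_nodes, cap=5):
    return min(num_dc_nodes, cap)

print(system_auth_rf(60))  # 5, not the RF=60 that caused trouble in the thread
print(system_auth_rf(3))   # 3
```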

Thanks,
Sam


> On 26 Nov 2018, at 09:44, Oleksandr Shulgin  
> wrote:
> 
> On Fri, Nov 23, 2018 at 5:38 PM Vitali Dyachuk  > wrote:
> 
> We have recently met a problem when we added 60 nodes in 1 region to the 
> cluster
> and set an RF=60 for the system_auth ks, following this documentation 
> https://docs.datastax.com/en/cql/3.3/cql/cql_using/useUpdateKeyspaceRF.html 
> 
> 
> Sadly, this recommendation is out of date / incorrect.  For `system_auth` we 
> are mostly using a formula like `RF=min(num_dc_nodes, 5)` and see no issues.
> 
> Is there a chance to correct the documentation @datastax?
> 
> Regards,
> --
> Alex
> 



Re: System auth empty, how to populate it

2018-07-18 Thread Sam Tunnicliffe
The salted hash being different is fine, the bcrypt library generates a
random 128 bit salt when encrypting a new password. The salt is then
encoded in the hashed string so you'd expect a different salted_hash each
time a given plaintext string is encoded.

I inserted exactly that data into a clean system, then switched it to use
PasswordAuthenticator and I can login using the default credentials without
any issue. Did you also drop the legacy credentials table
(system_auth.credentials) as per the upgrade docs that I linked yesterday
(in NEWS.txt)? If you didn't, the authenticator will continue to read from
the old table (you don't need a restart after dropping, the switch will
happen immediately).



On 18 July 2018 at 12:12, Thomas Lété  wrote:

> It’s my mail client that changed the quote mark, I didn’t see it, it’s
> just an export of the data I get from DevCenter, the salted hash is not the
> same as I saw in this guide : https://support.datastax.
> com/hc/en-us/articles/207932926-FAQ-How-to-recover-
> from-a-lost-superuser-password
> But it should be correct as it was generated by Cassandra itself yesterday.
>
> The export :
> cassandra@cqlsh> SELECT * from system_auth.roles;
>
>  role  | can_login | is_superuser | member_of | salted_hash
> ---+---+--+---+-
> -
>  cassandra |  True | True |  null | $2a$10$
> 7sXeNr3okw61oisR9pCyHeWEO3wPzx3w8r/LKwtDSW2Tt68f4KFmi
>
> On 18 Jul 2018, at 12:26, Sam Tunnicliffe  wrote:
>
> It may be an artifact of the email client, but that's not a valid INSERT
> statement - the closing quote on the password hash is U2019 (right side
> quotation mark) but the opening quote is U0027 (apostrophe) - which is what
> cqlsh expects. Can you just SELECT * from system_auth.roles and check that
> the salted_hash is correct?
>
> On 18 July 2018 at 11:06, Thomas Lété  wrote:
>
>> Yes it’s the config I’m using and I’m trying to add the Password Auth to
>> :-)
>>
>> Here is the content of the roles table :
>>
>> INSERT INTO roles (role,can_login,is_superuser,member_of,salted_hash)
>> VALUES ('cassandra',true,true,null,'$2a$10$7sXeNr3okw61oisR9pCyHeWE
>> O3wPzx3w8r/LKwtDSW2Tt68f4KFmi’);
>>
>> It seems correct but I’m not able to authenticate (using cqlsh v5.0.1 or
>> DevCenter 1.6.0)
>>
>> I’m starting to consider going from scratch and use the default config
>> and check if it works...
>>
>> On 18 Jul 2018, at 12:03, Sam Tunnicliffe  wrote:
>>
>> With that config you'll be using the default AllowAllAuthenticator, so I
>> assume you are able to connect cqlsh without any credentials? If so, can
>> you verify the contents of the system_auth.roles table? It should contain
>> only the cassandra user.
>>
>> On 18 July 2018 at 08:02, Thomas Lété  wrote:
>>
>>> I’m using the default ones, the commented parts are the one I use when I
>>> try the PasswordAuthenticator :) (line 19 to 24)
>>>
>>> > On 18 Jul 2018, at 08:51, Horia Mocioi  wrote:
>>> >
>>> > If this is the file that you are currently using... the first things that
>>> > I see are that you do not have any authenticator and role_manager:
>>> >
>>> > https://github.com/apache/cassandra/blob/1d506f9d09c880ff2b2693e3e27fa58c02ecf398/conf/cassandra.yaml#L103
>>> >
>>> > https://github.com/apache/cassandra/blob/1d506f9d09c880ff2b2693e3e27fa58c02ecf398/conf/cassandra.yaml#L123
>>> >
>>> > On Wed, 2018-07-18 at 08:33 +0200, Thomas Lété wrote:
>>> >> Unfortunately, I’m not a java dev so I’m not able to create an
>>> >> authenticator…
>>> >>
>>> >> I don’t like to do that usually but I share with you a gist of the
>>> >> config, it was generated by OpsCenter when it was free, I just
>>> >> updated it for Cassandra >= 3… Maybe you will see something :
>>> >>
>>> >> https://gist.github.com/bistory/ececc0bef7627f39a21e4e8f0c8d841c
>>> >>
>>> >>> On 18 Jul 2018, at 00:28, Horia Mocioi  wrote:
>>> >>>
>>> >>> Cassandra allows to use custom authenticators so I would create a
>>> >>> CustomPasswordAuthenticator. This would be a copy of the existing
>>> >>> PasswordAuthenticator. I would add several debugging info like:
>>> >>> provided username and password, the output of the checkpw function,
>>> >

Re: System auth empty, how to populate it

2018-07-18 Thread Sam Tunnicliffe
It may be an artifact of the email client, but that's not a valid INSERT
statement - the closing quote on the password hash is U+2019 (right single
quotation mark) but the opening quote is U+0027 (apostrophe) - which is what
cqlsh expects. Can you just SELECT * from system_auth.roles and check that
the salted_hash is correct?
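One quick way to catch this class of problem before pasting a statement into cqlsh is to scan it for non-ASCII quotation marks. Below is a stdlib-only Python sketch; the statement text is illustrative, not the actual hash from this thread:

```python
# Detect "smart" quotes that silently break CQL string literals.
SMART_QUOTES = {
    "\u2018": "LEFT SINGLE QUOTATION MARK",
    "\u2019": "RIGHT SINGLE QUOTATION MARK",
    "\u201c": "LEFT DOUBLE QUOTATION MARK",
    "\u201d": "RIGHT DOUBLE QUOTATION MARK",
}

def find_smart_quotes(statement):
    """Return (offset, unicode name) for each smart quote in the text."""
    return [(i, SMART_QUOTES[ch])
            for i, ch in enumerate(statement) if ch in SMART_QUOTES]

# Illustrative statement: the opening quote is a plain apostrophe (U+0027),
# but the closing quote was mangled into U+2019 by an email client.
stmt = ("INSERT INTO roles (role, salted_hash) "
        "VALUES ('cassandra', '$2a$10$...\u2019);")
print(find_smart_quotes(stmt))
```

Running this flags the single offending U+2019 and its offset, which is exactly the kind of character cqlsh will choke on.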

On 18 July 2018 at 11:06, Thomas Lété  wrote:

> Yes it’s the config I’m using and I’m trying to add the Password Auth to
> :-)
>
> Here is the content of the roles table :
>
> INSERT INTO roles (role,can_login,is_superuser,member_of,salted_hash)
> VALUES ('cassandra',true,true,null,'$2a$10$7sXeNr3okw61oisR9pCyHeWEO3wPzx
> 3w8r/LKwtDSW2Tt68f4KFmi’);
>
> It seems correct but I’m not able to authenticate (using cqlsh v5.0.1 or
> DevCenter 1.6.0)
>
> I’m starting to consider going from scratch and use the default config and
> check if it works...
>
> On 18 Jul 2018, at 12:03, Sam Tunnicliffe  wrote:
>
> With that config you'll be using the default AllowAllAuthenticator, so I
> assume you are able to connect cqlsh without any credentials? If so, can
> you verify the contents of the system_auth.roles table? It should contain
> only the cassandra user.
>
> On 18 July 2018 at 08:02, Thomas Lété  wrote:
>
>> I’m using the default ones, the commented parts are the one I use when I
>> try the PasswordAuthenticator :) (line 19 to 24)
>>
>> > On 18 Jul 2018, at 08:51, Horia Mocioi  wrote:
>> >
>> > If this is the file that you are currently using... the first thing
>> > that I see is that you do not have any authenticator or role_manager:
>> >
>> > https://github.com/apache/cassandra/blob/1d506f9d09c880ff2b2693e3e27fa58c02ecf398/conf/cassandra.yaml#L103
>> >
>> > https://github.com/apache/cassandra/blob/1d506f9d09c880ff2b2693e3e27fa58c02ecf398/conf/cassandra.yaml#L123
>> >
>> > On ons, 2018-07-18 at 08:33 +0200, Thomas Lété wrote:
>> >> Unfortunately, I’m not a java dev so I’m not able to create an
>> >> authenticator…
>> >>
>> >> I don’t like to do that usually but I share with you a gist of the
>> >> config, it was generated by OpsCenter when it was free, I just
>> >> updated it for Cassandra >= 3… Maybe you will see something :
>> >>
>> >> https://gist.github.com/bistory/ececc0bef7627f39a21e4e8f0c8d841c
>> >>
>> >>> On 18 Jul 2018, at 00:28, Horia Mocioi  wrote:
>> >>>
>> >>> Cassandra allows to use custom authenticators so I would create a
>> >>> CustomPasswordAuthenticator. This would be a copy of the existing
>> >>> PasswordAuthenticator. I would add several debugging info like:
>> >>> provided username and password, the output of the checkpw function,
>> >>> what cql statement is executed etc (any other info that would help
>> >>> me to understand what is being executed in the authenticator).
>> >>> From: Thomas Lété 
>> >>> Sent: Tuesday, July 17, 2018 5:24:39 PM
>> >>> To: user@cassandra.apache.org
>> >>> Subject: Re: System auth empty, how to populate it
>> >>>
>> >>> Thanks for your reply,
>> >>>
>> >>> - I have not defined role_manager in the config
>> >>> - I dropped the users table, it was present in the keyspace
>> >>> - Cassandra then created a record in the roles table, yay !
>> >>>
>> >>> But when I do cqlsh -u cassandra -p cassandra
>> >>>
>> >>> => Invalid credentials supplied.
>> >>> Authentication error on host xx: Provided username cassandra
>> >>> and/or password are incorrect
>> >>>
>> >>> I already repaired system_auth a few times, nothing helps...
>> >>>
>> >>>> On 17 Jul 2018, at 16:47, Sam Tunnicliffe  wrote:
>> >>>>
>> >>>> The default superuser is only created at startup if 3 conditions
>> >>>> are met:
>> >>>>
>> >>>> i) The default role manager is configured. In cassandra.yaml, you
>> >>>> should see "role_manager: CassandraRoleManager". This is also the
>> >>>> default value, so unless you're explicitly using a custom role
>> >>>> manager it should be good.
>> >>>> ii) The system_auth.users table (legacy, pre-2.2) should not be
>> >>>> present. Present means present in the schema, not on disk.

Re: System auth empty, how to populate it

2018-07-18 Thread Sam Tunnicliffe
With that config you'll be using the default AllowAllAuthenticator, so I
assume you are able to connect cqlsh without any credentials? If so, can
you verify the contents of the system_auth.roles table? It should contain
only the cassandra user.

On 18 July 2018 at 08:02, Thomas Lété  wrote:

> I’m using the default ones, the commented parts are the one I use when I
> try the PasswordAuthenticator :) (line 19 to 24)
>
> > On 18 Jul 2018, at 08:51, Horia Mocioi  wrote:
> >
> > If this is the file that you are currently using... the first thing
> > that I see is that you do not have any authenticator or role_manager:
> >
> > https://github.com/apache/cassandra/blob/1d506f9d09c880ff2b2693e3e27fa58c02ecf398/conf/cassandra.yaml#L103
> >
> > https://github.com/apache/cassandra/blob/1d506f9d09c880ff2b2693e3e27fa58c02ecf398/conf/cassandra.yaml#L123
> >
> > On ons, 2018-07-18 at 08:33 +0200, Thomas Lété wrote:
> >> Unfortunately, I’m not a java dev so I’m not able to create an
> >> authenticator…
> >>
> >> I don’t like to do that usually but I share with you a gist of the
> >> config, it was generated by OpsCenter when it was free, I just
> >> updated it for Cassandra >= 3… Maybe you will see something :
> >>
> >> https://gist.github.com/bistory/ececc0bef7627f39a21e4e8f0c8d841c
> >>
> >>> On 18 Jul 2018, at 00:28, Horia Mocioi  wrote:
> >>>
> >>> Cassandra allows to use custom authenticators so I would create a
> >>> CustomPasswordAuthenticator. This would be a copy of the existing
> >>> PasswordAuthenticator. I would add several debugging info like:
> >>> provided username and password, the output of the checkpw function,
> >>> what cql statement is executed etc (any other info that would help
> >>> me to understand what is being executed in the authenticator).
> >>> From: Thomas Lété 
> >>> Sent: Tuesday, July 17, 2018 5:24:39 PM
> >>> To: user@cassandra.apache.org
> >>> Subject: Re: System auth empty, how to populate it
> >>>
> >>> Thanks for your reply,
> >>>
> >>> - I have not defined role_manager in the config
> >>> - I dropped the users table, it was present in the keyspace
> >>> - Cassandra then created a record in the roles table, yay !
> >>>
> >>> But when I do cqlsh -u cassandra -p cassandra
> >>>
> >>> => Invalid credentials supplied.
> >>> Authentication error on host xx: Provided username cassandra
> >>> and/or password are incorrect
> >>>
> >>> I already repaired system_auth a few times, nothing helps...
> >>>
> >>>> On 17 Jul 2018, at 16:47, Sam Tunnicliffe  wrote:
> >>>>
> >>>> The default superuser is only created at startup if 3 conditions
> >>>> are met:
> >>>>
> >>>> i) The default role manager is configured. In cassandra.yaml, you
> >>>> should see "role_manager: CassandraRoleManager". This is also the
> >>>> default value, so unless you're explicitly using a custom role
> >>>> manager it should be good.
> >>>> ii) The system_auth.users table (legacy, pre-2.2) should not be
> >>>> present. Present means present in the schema, not on disk. Unlike
> >>>> most system tables, this table is droppable (in fact this is a
> >>>> necessary step in upgrading from earlier versions).
> >>>> iii) There should be no preexisting roles present in the
> >>>> system_auth.roles table. This is verified with a regular query,
> >>>> so you must either use CQL to delete existing roles, or remove
> >>>> the data directories and commit logs on *all* nodes.
> >>>>
> >>>> Even if these three conditions are met, but the default user
> >>>> isn't being created the manual insert that Horia suggested should
> >>>> work. If system_auth.roles table exists and you are able to
> >>>> perform the insert, I'm very surprised when you say it's empty
> >>>> after you issue the insert. If you check again and it turns out
> >>>> the manual insert is working as expected, you need to make sure
> >>>> that the legacy tables have been dropped from schema (assuming
> >>>> you upgraded from a pre-3.0 version at some point). If the legacy
> >>>> tables are still present, the authenticator will continue to read
> >>>> from them and so would be ignoring the new entry in the roles table.

Re: System auth empty, how to populate it

2018-07-17 Thread Sam Tunnicliffe
The default superuser is only created at startup if 3 conditions are met:

i) The default role manager is configured. In cassandra.yaml, you should
see "role_manager: CassandraRoleManager". This is also the default value,
so unless you're explicitly using a custom role manager it should be good.
ii) The system_auth.users table (legacy, pre-2.2) should not be present.
Present means present in the schema, not on disk. Unlike most system
tables, this table is droppable (in fact this is a necessary step in
upgrading from earlier versions).
iii) There should be no preexisting roles present in the system_auth.roles
table. This is verified with a regular query, so you must either use CQL to
delete existing roles, or remove the data directories and commit logs on
*all* nodes.
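Condensed into code, the three conditions read roughly like this. This is an illustrative sketch with invented names, not Cassandra's actual source:

```python
def should_create_default_superuser(role_manager: str,
                                    legacy_users_table_present: bool,
                                    existing_roles: list) -> bool:
    """Sketch of the three startup conditions listed above.

    Hypothetical helper: the function name, signature and inputs are
    invented for illustration only.
    """
    return (role_manager == "CassandraRoleManager"   # condition i
            and not legacy_users_table_present       # condition ii
            and len(existing_roles) == 0)            # condition iii

# Fresh cluster with defaults: the 'cassandra' superuser is created.
print(should_create_default_superuser("CassandraRoleManager", False, []))
# Any pre-existing role suppresses creation.
print(should_create_default_superuser("CassandraRoleManager", False, ["admin"]))
```

All three checks must pass; flipping any one of them is enough to suppress creation of the default superuser.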

Even if these three conditions are met, but the default user isn't being
created the manual insert that Horia suggested should work. If
system_auth.roles table exists and you are able to perform the insert, I'm
very surprised when you say it's empty after you issue the insert. If you
check again and it turns out the manual insert is working as expected, you
need to make sure that the legacy tables have been dropped from schema
(assuming you upgraded from a pre-3.0 version at some point). If the legacy
tables are still present, the authenticator will continue to read from them
and so would be ignoring the new entry in the roles table. (see:
https://github.com/apache/cassandra/blob/cassandra-3.11.2/NEWS.txt#L619-L640
)


On 17 July 2018 at 15:18, Thomas Lété  wrote:

> Yes I did that multiple times, always following the same procedure: stop
> Cassandra, on all nodes, remove data, update config then restart nodes one
> by one…
>
> I really don’t understand what I could have done wrong...
>
> > On 17 Jul 2018, at 16:15, Simon Fontana Oscarsson <simon.fontana.oscars...@ericsson.com> wrote:
> >
> > This is very strange behavior if Cassandra won't recreate the cassandra
> user when you delete the folder.
> > So just to make sure, you are stopping Cassandra on all nodes and
> deleting the data directory?
> >
> > --
> > SIMON FONTANA OSCARSSON
> > Software Developer
> >
> > Ericsson
> > Ölandsgatan 1
> > 37133 Karlskrona, Sweden
> > simon.fontana.oscars...@ericsson.com
> > www.ericsson.com
> >
> > On tis, 2018-07-17 at 16:01 +0200, Thomas Lété wrote:
> >> It’s empty...
> >>
> >>>
> >>> On 17 Jul 2018, at 15:59, Horia Mocioi  wrote:
> >>>
> >>> Could you also send the output of "select * from system_auth.roles"?
> >>> (you will need to change authenticator to AllowAllAuthenticator and
> >>> authorizer to AllowAllAuthorizer)
> >>>
> >>> On tis, 2018-07-17 at 15:43 +0200, Thomas Lété wrote:
> 
>  Ok I tried that, nothing better (I already tried dropping the entire
>  system_auth folder that way, same result)
> 
>  When I open the log, I found nothing about « Password » and when I
>  search for « roles », I only find that :
> 
 DEBUG [main] 2018-07-17 15:37:39,420 CompactionStrategyManager.java:380 - Recreating compaction strategy - disk boundaries are out of date for system_auth.roles.
 DEBUG [main] 2018-07-17 15:37:39,420 DiskBoundaryManager.java:53 - Refreshing disk boundary cache for system_auth.roles
 DEBUG [main] 2018-07-17 15:37:39,422 DiskBoundaryManager.java:56 - Updating boundaries from DiskBoundaries{directories=[DataDirectory{location=/home/cassandra/data}], positions=[max(9223372036854775807)], ringVersion=3, directoriesVersion=0} to DiskBoundaries{directories=[DataDirectory{location=/home/cassandra/data}], positions=[max(9223372036854775807)], ringVersion=16, directoriesVersion=0} for system_auth.roles
> 
>  The configuration I use for Auth is the following :
> 
>  authorizer: CassandraAuthorizer
>  permissions_validity_in_ms: 2000
>  permissions_update_interval_in_ms: 2000
>  authenticator: PasswordAuthenticator
>  credentials_validity_in_ms: 2000
>  credentials_update_interval_in_ms: 2000
> 
> >
> > On 17 Jul 2018, at 15:26, Simon Fontana Oscarsson <simon.fontana.oscars...@ericsson.com> wrote:
> >
> > Could you try the following steps?
> >
> > Stop Cassandra.
> > Change authenticator in yaml to PasswordAuthenticator if not
> > already done.
> > Remove data directory with `rm -rf data/system_auth/roles-*`
> > Start Cassandra.
> > Login with `cqlsh -u cassandra -p cassandra`
> >
> > Works for me.
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: How to restrict users to specific DC.

2018-04-19 Thread Sam Tunnicliffe
https://issues.apache.org/jira/browse/CASSANDRA-13985 is probably what
you're looking for here

Thanks,
Sam

On 10 April 2018 at 11:55, Rahul Singh  wrote:

> That seems to be more of a network segmentation issue. Protect the other
> nodes behind a firewall / security group. Each node in the different DCs
> would be able to talk to each other but the user client machine can only
> access the traffic only DC
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
> On Apr 6, 2018, 4:47 PM -0400, Pranay akula ,
> wrote:
>
> Thanks Jim for your reply,
>
> > ... just in case even if he changes his contact points he shouldn't be
> > able to execute queries on DC2.
>
>  What I meant was: we have 2 DCs, only 1 serving traffic. Let's say an
> individual user wants to run a query from cqlsh/DevCenter on the DC serving
> requests; I want to prevent it.
>
> I kind of think it's not possible but wanted to know if there were ways to
> implement it like killing the session or similar
>
> Thanks
> Pranay
>
> On Fri, Apr 6, 2018, 12:22 PM Jim Witschey 
> wrote:
>
>> Pranay,
>>
>> > Is it possible to restrict users to specific DC in cassandra,  let's
>> say an user A is connecting to DC1 and executing queries, how to can I
>> restrict that user to that particular DC...
>>
>> This part sounds like a job for a DC-aware load-balancing policy in the
>> driver.
>>
>> > ... just in case even if he changes his contact points he shouldn't be
>> > able to execute queries on DC2.
>>
>> This part confuses me. What problem are you trying to solve? Are you
>> concerned about a DC getting hit with more requests than you'd like
>> because of a misconfigured driver?
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>


Re: unable to start cassandra 3.11.1

2018-02-02 Thread Sam Tunnicliffe
I've actually just committed the fix for this to the 3.11 and trunk
branches, so if you desperately need a compatible build you can make a build
from those branches.
As I mentioned on the JIRA, I expect we'll move to a release vote very
soon, so hopefully should have a 3.11.2 release with this fix shortly.


On 2 February 2018 at 12:19, Marcus Haarmann 
wrote:

> you can try to checkout https://github.com/beobal/cassandra/tree/14173-3.11
> and compile yourself a compatible version (unreleased), in case you are
> bound to
> the latest java runtime for any reason.
>
> Marcus Haarmann
>
> --
> *From: *"Kant Kodali" 
> *To: *"user" 
> *Sent: *Thursday, 1 February 2018 23:45:06
> *Subject: *Re: unable to start cassandra 3.11.1
>
> Ok, I saw the ticket; looks like this java version "1.8.0_162" won't work!
>
> On Thu, Feb 1, 2018 at 2:43 PM, Kant Kodali  wrote:
>
>> Hi Justin,
>> I am using
>>
>> java version "1.8.0_162"
>>
>> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
>>
>>
>> Thanks!
>>
>> On Thu, Feb 1, 2018 at 2:40 PM, Justin Cameron 
>> wrote:
>>
>>> Unfortunately C* 3.11.1 is incompatible with the latest version of Java.
>>> You'll need to either downgrade to Java 1.8.0.151-5 or wait for C* 3.11.2
>>> (see https://issues.apache.org/jira/browse/CASSANDRA-14173 for details)
>>>
>>> On Fri, 2 Feb 2018 at 09:35 Kant Kodali  wrote:
>>>
 Hi All,
 I am unable to start cassandra 3.11.1. Below is the stack trace.

 Exception (java.lang.AbstractMethodError) encountered during startup: 
 org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
 java.lang.AbstractMethodError: 
 org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
 at 
 javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
 at 
 javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
 at 
 javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
 at 
 org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
 at 
 org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188)
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689)
 ERROR 22:33:49 Exception encountered during startup
 java.lang.AbstractMethodError: 
 org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
 at 
 javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
  ~[na:1.8.0_162]
 at 
 javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
  ~[na:1.8.0_162]
 at 
 javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
  ~[na:1.8.0_162]
 at 
 org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
  ~[apache-cassandra-3.11.1.jar:3.11.1]
 at 
 org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
  [apache-cassandra-3.11.1.jar:3.11.1]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188)
  [apache-cassandra-3.11.1.jar:3.11.1]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
  [apache-cassandra-3.11.1.jar:3.11.1]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689)
  [apache-cassandra-3.11.1.jar:3.11.1]

 --
>>>
>>>
>>> *Justin Cameron*
>>> Senior Software Engineer
>>>
>>>
>>> 
>>>
>>>
>>> This email has been sent on behalf of Instaclustr Pty. Limited
>>> (Australia) and Instaclustr Inc (USA).
>>>
>>> This email and any attachments may contain confidential and legally
>>> privileged information.  If you are not the intended recipient, do not copy
>>> or disclose its content, but please reply to this email immediately and
>>> highlight the error to the sender and then immediately delete the message.
>>>
>>
>>
>


Re: Cassandra 3.11 fails to start with JDK8u162

2018-01-18 Thread Sam Tunnicliffe
This isn't (wasn't) a known issue, but the way that CASSANDRA-10091 was
implemented using internal JDK classes means it was always possible that a
minor JVM version change could introduce incompatibilities (CASSANDRA-2967
is also relevant).
We did already know that we need to revisit the way this works in 4.0 for
JDK9 support (CASSANDRA-9608), so we should identify a more stable solution
& apply that to both 3.11 and 4.0.
In the meantime, downgrading to 152 is the only real option.

I've opened https://issues.apache.org/jira/browse/CASSANDRA-14173 for this.

Thanks,
Sam


On 18 January 2018 at 08:43, Nicolas Guyomar 
wrote:

> Thank you Thomas for starting this thread, I'm having exactly the same
> issue on AWS EC2 RHEL-7.4_HVM-20180103-x86_64-2-Hourly2-GP2
> (ami-dc13a4a1)  I was starting to bang my head on my desk !
>
> So I'll try to downgrade back to 152 then !
>
>
>
> On 18 January 2018 at 08:34, Steinmaurer, Thomas <
> thomas.steinmau...@dynatrace.com> wrote:
>
>> Hello,
>>
>>
>>
>> after switching from JDK8u152 to JDK8u162, Cassandra fails with the
>> following stack trace upon startup.
>>
>>
>>
>> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception
>> encountered during startup
>>
>> java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>>
>> at 
>> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
>> ~[na:1.8.0_162]
>>
>> at 
>> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
>> ~[na:1.8.0_162]
>>
>> at 
>> javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
>> ~[na:1.8.0_162]
>>
>> at 
>> org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
>> ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>>
>> at 
>> org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
>> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>>
>> at 
>> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188)
>> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>>
>> at 
>> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>>
>> at 
>> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689)
>> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>>
>>
>>
>> Is this a known issue?
>>
>>
>>
>>
>>
>> Thanks,
>>
>> Thomas
>>
>>
>> The contents of this e-mail are intended for the named addressee only. It
>> contains information that may be confidential. Unless you are the named
>> addressee or an authorized designee, you may not copy or use it, or
>> disclose it to anyone else. If you received it in error please notify us
>> immediately and then destroy it. Dynatrace Austria GmbH (registration
>> number FN 91482h) is a company registered in Linz whose registered office
>> is at 4040 Linz, Austria, Freistädterstraße 313.
>>
>
>


Re: cassandra non-super user login fails but super user works

2017-10-24 Thread Sam Tunnicliffe
Which version of Cassandra are you running?

My guess is that you're on a version >= 2.2 and that you've created the
non-superuser since upgrading, but haven't yet removed the legacy tables
from the system_auth keyspace. If that's the case, then the new user will
be present in the new tables, but authentication at login time is still
using the old ones.

The schema of the system_auth keyspace was changed in 2.2 with the
introduction of role based access control and requires a little operator
involvement to switch over to using the new tables, see the section on
upgrading to 2.2 in NEWS.txt for the full details.

Thanks,
Sam


On 23 October 2017 at 16:08, Meg Mara  wrote:

> You should probably verify if the ‘can_login’ field of the non-superuser
> role is set to true. You can query the column family system_auth.roles to
> find out.
>
>
>
> Thanks,
>
> Meg Mara
>
>
>
> *From:* Justin Cameron [mailto:jus...@instaclustr.com]
> *Sent:* Sunday, October 22, 2017 6:21 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: cassandra non-super user login fails but super user works
>
>
>
> Try setting the replication factor of the system_auth keyspace to the
> number of nodes in your cluster.
>
> ALTER KEYSPACE system_auth WITH replication = {'class':
> 'NetworkTopologyStrategy', '<dc_name>': '<number_of_nodes>'};
>
>
>
> On Sun, 22 Oct 2017 at 20:06 Who Dadddy  wrote:
>
> Anyone seen this before? Pretty basic setup, super user can login fine but
> non-super user can’t?
>
> Any pointers appreciated.
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
> --
>
> *Justin Cameron*
> Senior Software Engineer
>
>
>
> 
>
>
> This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
> and Instaclustr Inc (USA).
>
> This email and any attachments may contain confidential and legally
> privileged information.  If you are not the intended recipient, do not copy
> or disclose its content, but please reply to this email immediately and
> highlight the error to the sender and then immediately delete the message.
>


Re: system_auth replication factor in Cassandra 2.1

2017-08-30 Thread Sam Tunnicliffe
It's a better rule of thumb to use an RF of 3 to 5 per DC and this is what
the docs now suggest:
http://cassandra.apache.org/doc/latest/operating/security.html#authentication

Out of the box, the system_auth keyspace is setup with SimpleStrategy and
RF=1 so that it works on any new system including dev & test clusters, but
obviously that's no use for a production system.

Regarding the increased rate of authentication errors: did you run repair
after changing the RF? Auth queries are done at CL.LOCAL_ONE, so if you
haven't repaired, the data for the user logging in will probably not be
where it should be. The exception to this is the default "cassandra" user,
queries for that user are done at CL.QUORUM, which will indeed lead to
timeouts and authentication errors with a very high RF. It's recommended to
only use that default user to bootstrap the setup of your own users &
superusers, the link above also has info on this.

Thanks,
Sam


On 30 August 2017 at 16:50, Chuck Reynolds  wrote:

> So I’ve read that if you’re using authentication in Cassandra 2.1, your
> replication factor should match the number of nodes in your datacenter.
>
>
>
> *Is that true?*
>
>
>
> I have two datacenter cluster, 135 nodes in datacenter 1 & 227 nodes in an
> AWS datacenter.
>
>
>
> *Why do I want to replicate the system_auth table that many times?*
>
>
>
> *What are the benefits and disadvantages of matching the number of nodes
> as opposed to the standard replication factor of 3? *
>
>
>
>
>
> The reason I’m asking the question is because it seems like I’m getting a
> lot of authentication errors now and they seem to happen more under load.
>
>
>
> Also, querying the system_auth table from cqlsh to get the users seems to
> now timeout.
>
>
>
>
>
> Any help would be greatly appreciated.
>
>
>
> Thanks
>


Re: Secondary Index

2017-06-26 Thread Sam Tunnicliffe
The second query will be much more efficient as the partition key
restriction is applied before the one using the indexed column. Only the
replicas for that partition will be involved in query 2, whereas query 1
will be distributed to enough replicas to cover the whole token ring. Also
(& this doesn't make so much difference to the above queries), the indexes
are sorted by primary key of the rows they refer to. So the more of the
primary key you specify in the query, the more targeted the index lookup
becomes.
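The effect of keeping index entries sorted by primary key can be sketched with a toy model. This is purely illustrative and not Cassandra's index format; the keys here are plain integers, and the `partition_key + 1` trick relies on that:

```python
from bisect import bisect_left

# Toy secondary-index entry list for one indexed value (e.g. status = 1),
# kept sorted by the (partition key, clustering key) of the indexed rows.
entries = [(101, 1), (101, 7), (999, 3), (123456, 2), (123456, 9)]

def lookup(entries, partition_key=None):
    """Without a partition key: scan the whole entry list (whole token range).
    With a partition key: a binary-searched slice covering one partition."""
    if partition_key is None:
        return list(entries)
    # (pk,) sorts before (pk, ck) for any ck, so these bounds bracket
    # exactly the entries for this partition (integer keys assumed).
    lo = bisect_left(entries, (partition_key,))
    hi = bisect_left(entries, (partition_key + 1,))
    return entries[lo:hi]

print(lookup(entries))          # every partition's entries are touched
print(lookup(entries, 123456))  # only the slice for partition 123456
```

Specifying the partition key turns the index read from a full scan into a narrow slice, which mirrors why query 2 above is so much cheaper than query 1.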

On 25 June 2017 at 16:18, techpyaasa .  wrote:

> Thanks for the reply.
>
> I just have one more doubt, please do clarify this.
>
> Will there be any performance difference between these 2 queries for the
> above table?
>
> 1. select * from ks1.cf1 where status=1;
> 2. select * from ks1.cf1 where id1=123456 and status=1;
>
> where id1 is partition key and status is indexed column as I said above.
>
> Could you please tell me the performance difference btwn above 2 queries.
>
> Thanks in advance,
>
> Techpyaasaa
>
> On Tue, Jun 20, 2017 at 9:03 PM, ZAIDI, ASAD A  wrote:
>
>> Hey there –
>>
>>
>>
>> Like others suggested, before adding more indexes look for an opportunity
>> to de-normalize your data model OR create composite keys for your primary
>> index – if that works for you.
>>
>> Secondary indexes are there so you can leverage them, but they come with a
>> cost. They’re difficult to manage: as you repair data, your secondary
>> indexes will NOT be automatically repaired, so you’ll need to maintain them
>> on each cluster node. Depending on the size of your cluster that could be a
>> significant effort. Be prepared to rebuild your new index (nodetool
>> rebuild_index) as often as you change the data. Performance will eventually
>> take a hit because index rebuilding is an expensive operation on CPU.
>>
>>
>>
>> See please http://docs.datastax.com/en/cql/3.1/cql/ddl/ddl_when_use_index_c.html
>>
>>
>>
>>
>>
>>
>>
>> *From:* techpyaasa . [mailto:techpya...@gmail.com]
>> *Sent:* Tuesday, June 20, 2017 2:30 AM
>> *To:* ZAIDI, ASAD A 
>> *Cc:* user@cassandra.apache.org
>> *Subject:* Re: Secondary Index
>>
>>
>>
>> Hi ZAIDI,
>>
>> Thanks for reply.
>> Sorry I didn't get your line
>> "You can get away the potential situation by leveraging composite key, if
>> that is possible for you?"
>>
>> How can I get through it??
>>
>> Like I have a table as below
>>
>> CREATE TABLE ks1.cf1 (id1 bigint, id2 bigint, resp text, status int,
>> PRIMARY KEY (id1, id2)
>>
>> ) WITH CLUSTERING ORDER BY (id2 ASC)
>>
>>
>> 'status' will have values of 0/1/2/3/4 (5 possible values); insertions
>> to table(partition) will happen based on id2 i.e values(id1,id2,resp,status)
>>
>> I want to have a filtering/criteria applied on 'status' column too like
>> select * from ks1.cf1 where id1=123 and status=0;
>>
>> How can I achieve this w/o secondary index (on 'status' column )??
>>
>>
>>
>> On Tue, Jun 20, 2017 at 12:09 AM, ZAIDI, ASAD A  wrote:
>>
>> If you’re only creating an index so that your query works, think again!
>> You’ll be storing the secondary index on each node; queries involving the
>> index could create issues (slowness!!) down the road when indexes on
>> multiple nodes are involved and not maintained! Tables involving a lot of
>> inserts/deletes could easily ruin index performance.
>>
>>
>>
>> You can get away the potential situation by leveraging composite key, if
>> that is possible for you?
>>
>>
>>
>>
>>
>> *From:* techpyaasa . [mailto:techpya...@gmail.com]
>> *Sent:* Monday, June 19, 2017 1:01 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* Secondary Index
>>
>>
>>
>> Hi,
>>
>> I want to create Index on already existing table which has more than 3
>> GB/node.
>> We are using c*-2.1.17 with 2 DCs , each DC with 3 groups and each group
>> has 7 nodes.(Total 42 nodes in cluster)
>>
>> So is it ok to create Index on this table now or will it have any problem?
>> If its ok , how much time it would take for this process?
>>
>>
>> Thanks in advance,
>> TechPyaasa
>>
>>
>>
>
>


Re: system_auth replication strategy

2017-04-02 Thread Sam Tunnicliffe
>
> auth logins for super users is 101 replicas serving the read


This only applies to the default superuser (i.e. 'cassandra'), which is one
of the reasons for recommending it is only used during initial setup[1].
Reads for all other users, including superusers, are done at LOCAL_ONE

[1]
http://cassandra.apache.org/doc/latest/operating/security.html#authentication
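The 101-replica figure quoted above is just simple-majority arithmetic. As a one-function sketch:

```python
def quorum(replication_factor: int) -> int:
    """Replicas that must respond for a QUORUM read or write:
    a simple majority of the replication factor."""
    return replication_factor // 2 + 1

print(quorum(3))    # the usual RF of 3 needs 2 responses
print(quorum(200))  # one replica per node in a 200-node DC needs 101
```

This is why "replicate system_auth everywhere" breaks down at scale: QUORUM operations against the default superuser have to gather responses from more than half of all those replicas.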

On Sun, Apr 2, 2017 at 7:07 AM, Jeff Jirsa  wrote:

> > You should use a network topology strategy with high RF in each DC
>
>
> There's some debate here - some blogs/speakers will say to put a replica
> on each instance, but that falls down above a few dozen instances. Imagine
> if you have (for example) 200 instances per DC, auth logins for super users
> is 101 replicas serving the read - that's a really slow login that's likely
> to fail (think about thread pools on the coordinator doing the read
> response handling, it's an ugly ugly mess).
>
> Normal logins do use LOCAL_ONE though so if there are lots of replicas,
> auth will be faster - so use 5-10 replicas per DC, and crank up the caching
> timeouts as well
>
>


Re: Internal Security - Authentication & Authorization

2017-03-15 Thread Sam Tunnicliffe
>
> Here is what I have pieced together. Please let me know if I am on the
> right track.


You're more or less right regarding the built in
authenticator/authorizer/role manager (which are usually referred to as
"internal" as they store their data in Cassandra tables). One important
thing to note is that using the default superuser credentials (i.e. logging
in as 'cassandra') will make these reads happen at QUORUM, not LOCAL_ONE,
which is one of the reasons you shouldn't use those credentials
after initial setup.

There are 2 settings which govern the lifetime of cached auth data. Once an
item has been cached, it becomes eligible for refresh after the update
interval has passed. A get from the cache will trigger this refresh, which
happens in the background and while it's running, the old (maybe stale)
entry is served from the cache. When the validity period expires for a
cache entry, it is removed from the cache and subsequent reads trigger a
blocking fetch from storage.
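For reference, the cache behaviour described above is controlled by a handful of cassandra.yaml options (names as in 2.2+; the values below are purely illustrative, not recommendations):

```yaml
# Validity period: once this elapses the entry is evicted and the next
# read must do a blocking fetch from the system_auth tables.
permissions_validity_in_ms: 2000
roles_validity_in_ms: 2000
# Update interval: after this elapses, a cache hit triggers a
# non-blocking background refresh; the old (possibly stale) entry is
# served until the refresh completes.
permissions_update_interval_in_ms: 1000
roles_update_interval_in_ms: 1000
```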

There is further detail in the docs here:
http://cassandra.apache.org/doc/latest/operating/security.html

 If NOT EXISTS will use SERIAL consistency


This isn't actually true. Because internal storage is just one
implementation of role/user management, it doesn't rely on LWT. Instead,
the configured role manager is consulted before executing the statement,
which is similar to how IF NOT EXISTS in schema updates work.


On Tue, Mar 14, 2017 at 11:44 PM, Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> I have similar question. when we create users or roles what is the
> consistency level used?
>
> I know, If NOT EXISTS will use SERIAL consistency. what consistency will
> be used if just use CREATE USER ?
>
> On Mon, Mar 13, 2017 at 7:09 PM, Jacob Shadix 
> wrote:
>
>> I'm looking for a deeper understanding of how Cassandra interacts with
>> the system_auth keyspace to authenticate/authorize users.
>>
>> Here is what I have pieced together. Please let me know if I am on the
>> right track.
>>
>> A user attempts to connect to Cassandra. Cassandra checks against
>> system_auth for that user @ LOCAL_ONE - - If the user exists, a connection
>> is established. When CQL is executed, C* again checks system_auth for that
>> user @ LOCAL_ONE to determine if it has the correct privileges to perform
>> the CQL. If so, it executes the CQL and the permissions are stored in a
>> cache. During the cache validity timeframe, future requests for ANY user
>> stored in the cache do not require a lookup against system_auth. After the
>> cache validity runs out, any new requests will require a lookup against
>> system_auth.
>>
>> -- Jacob Shadix
>>
>
>


Re: Question on configuring Apache Cassandra with LDAP

2017-03-03 Thread Sam Tunnicliffe
This is something that has been discussed before and there's an JIRA open
for it already, it looks like progress has stalled but you might get some
pointers from the linked WIP branch.

https://issues.apache.org/jira/browse/CASSANDRA-12294


On Fri, Mar 3, 2017 at 7:45 PM, Harika Vangapelli -T (hvangape - AKRAYA INC
at Cisco)  wrote:

> I am trying to configure Cassnadra with LDAP , and I am trying to write
> code and want to extend the functionality of org.apache.cassandra.auth.
> PasswordAuthenticator and override authenticate method but As
> PlainTextSaslAuthenticator innerclass has a private scope not able to use
> method overriding.
>
>
>
> Please let us know what is the best way to implement LDAP security with
> Apache Cassandra (or) a work around for this.
>
>
>
> Thanks,
>
> Harika
>
>
>
> *Harika Vangapelli*
>
> Engineer - IT
>
> hvang...@cisco.com
>
> Tel:
>
> *Cisco Systems, Inc.*
>
>
>
>
> United States
> cisco.com
>
>
>


Re: Does securing C*'s CQL native interface (running on port 9042) automatically secure its Thrift API interface (running on port 9160)?

2016-11-01 Thread Sam Tunnicliffe
It does, yes. Clients will be required to call the thrift login method with
a valid set of credentials before performing any other RPC calls.
btw, in versions of C* >= 2.2 the Thrift server is not enabled by default
(CASSANDRA-9319).

On Mon, Oct 31, 2016 at 4:50 PM, Li, Guangxing 
wrote:

> Hi,
>
> I secured my C* cluster by having "authenticator:
> org.apache.cassandra.auth.PasswordAuthenticator" in cassandra.yaml. I
> know it secures the CQL native interface running on port 9042 because my
> code uses such interface. Does this also secure the Thrift API interface
> running on port 9160? I searched around the web for answers but could not
> find any. I supposed I can write a sample application using Thrift API
> interface to confirm it, but wondering if I can get a quick answer from you
> experts.
>
> Thanks.
>
> George.
>


Re: During writing data into Cassandra 3.7.0 using Python driver 3.7 sometime loose Connection because of Server NullPointerException (Help please!)

2016-09-23 Thread Sam Tunnicliffe
The stacktrace suggests that when a connection is being established, either
the can_login or is_superuser attribute of the authenticated role is null,
which is definitely a bug as there should be no way to create a role in
that state.

Could you please open a ticket on
https://issues.apache.org/jira/browse/CASSANDRA (including as much detail
as possible)? If you could reply back to this with the ticket # that'd be
helpful for anyone coming across a similar issue in future.

Thanks,
Sam


On Fri, Sep 23, 2016 at 10:33 AM, Rajesh Radhakrishnan <
rajesh.radhakrish...@phe.gov.uk> wrote:

> Hi,
>
>
> In one of our C* cluster we are using the latest Cassandra 3.7.0
> (datastax-ddc.3.70) with Python driver 3.7. We are trying to insert 2
> million row or more data into the database, it works but sometimes we are
> getting "Null pointer Exception". I am quoting  the Exception here.
> Any help would be highly appreciated.
>
> We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in
> the client its Python 2.7.12.
>
> 
> ==
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 -
> Unexpected exception during request; channel = [id: 0xc208da86,
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
>
> java.lang.NullPointerException: null
>
> at org.apache.cassandra.serializers.BooleanSerializer.
> deserialize(BooleanSerializer.java:33) ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at org.apache.cassandra.serializers.BooleanSerializer.
> deserialize(BooleanSerializer.java:24) ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at org.apache.cassandra.cql3.UntypedResultSet$Row.
> getBoolean(UntypedResultSet.java:273) ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
> [apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
> [apache-cassandra-3.7.0.jar:3.7.0]
>
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(
> SimpleChannelInboundHandler.java:105) [netty-all-4.0.36.Final.jar:4.
> 0.36.Final]
>
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(
> AbstractChannelHandlerContext.java:292) [netty-all-4.0.36.Final.jar:4.
> 0.36.Final]
>
> at io.netty.channel.AbstractChannelHandlerContext.access$600(
> AbstractChannelHandlerContext.java:32) [netty-all-4.0.36.Final.jar:4.
> 0.36.Final]
>
> at io.netty.channel.AbstractChannelHandlerContext$7.run(
> AbstractChannelHandlerContext.java:283) [netty-all-4.0.36.Final.jar:4.
> 0.36.Final]
>
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [na:1.8.0_73]
>
> at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorServ
> ice$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
> [apache-cassandra-3.7.0.jar:3.7.0]
>
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
> [apache-cassandra-3.7.0.jar:3.7.0]
>
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
>
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 -
> Unexpected exception during request; channel = [id: 0x8e2eae00,
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
>
> java.lang.NullPointerException: null
>
> at org.apache.cassandra.serializers.BooleanSerializer.
> deserialize(BooleanSerializer.java:33) ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at org.apache.cassandra.serializers.BooleanSerializer.
> deserialize(BooleanSerializer.java:24) ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113)
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>
> at org.apache.cassandra.cql3.UntypedResultSet$Row.
> getBoolean(UntypedResultSet.java:273) 

Re: Read Repairs and CL

2016-08-30 Thread Sam Tunnicliffe
Just to clarify a little further, it's true that read repair queries are
performed at CL ALL, but this is slightly different to a regular,
user-initiated query at that CL.

Say you have RF=5 and you issue read at CL ALL, the coordinator will send
requests to all 5 replicas and block until it receives a response from each
(or a timeout occurs) before replying to the client. This is the
straightforward and intuitive case.

If instead you read at CL QUORUM, the # of replicas required for CL is 3,
so the coordinator only contacts 3 nodes. In the case where a speculative
retry is activated, an additional replica is added to the initial set. The
coordinator will still only wait for 3 out of the 4 responses before
proceeding, but if a digest mismatch occurs the read repair queries are
sent to all 4. It's this follow up query that the coordinator executes at
CL ALL, i.e. it requires all 4 replicas to respond to the read repair query
before merging their results to figure out the canonical, latest data.

You can see that the number of replicas queried/required for read repair is
different than if the client actually requests a read at CL ALL (i.e. here
it's 4, not 5), it's the behaviour of waiting for all *contacted* replicas
to respond which is significant here.

There are additional considerations when constructing that initial replica
set (which you can follow in
o.a.c.service.AbstractReadExecutor::getReadExecutor), involving the table's
read_repair_chance, dclocal_read_repair_chance and speculative_retry
options. The main gotcha is global read repair (via read_repair_chance)
which will trigger cross-dc repairs at CL ALL in the case of a digest
mismatch, even if the requested CL is DC-local.
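The replica arithmetic above can be sketched in a few lines (plain Python, not Cassandra code; this only models the counting, under the simplifying assumption of a single-DC quorum):

```python
def blocked_for(rf, consistency):
    """Number of replicas the coordinator must hear from for a given CL."""
    if consistency == "ALL":
        return rf
    if consistency == "QUORUM":
        return rf // 2 + 1
    raise ValueError(consistency)

rf = 5
required = blocked_for(rf, "QUORUM")   # 3 replicas satisfy the requested CL
contacted = required + 1               # one extra when speculative retry fires
# On a digest mismatch, the follow-up read-repair query must be answered
# by every *contacted* replica -- 4 here, not all 5 as a true CL ALL
# read from the client would require.
print(required, contacted)             # 3 4
```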


On Sun, Aug 28, 2016 at 11:55 AM, Ben Slater 
wrote:

> In case anyone else is interested - we figured this out. When C* decides
> it need to do a repair based on a digest mismatch from the initial reads
> for the consistency level it does actually try to do a read at CL=ALL in
> order to get the most up to date data to use to repair.
>
> This led to an interesting issue in our case where we had one node in an
> RF3 cluster down for maintenance (to correct data that became corrupted due
> to a severe write overload) and started getting occasional “timeout during
> read query at consistency LOCAL_QUORUM” failures. We believe this due to
> the case where data for a read was only available on one of the two up
> replicas which then triggered an attempt to repair and a failed read at
> CL=ALL. It seems that CASSANDRA-7947 (a while ago) change the behaviour so
> that C* reports a failure at the originally request level even when it was
> actually the attempted repair read at CL=ALL which could not read
> sufficient replicas - a bit confusing (although I can also see how getting
> CL=ALL errors when you thought you were reading at QUORUM or ONE would be
> confusing).
>
> Cheers
> Ben
>
> On Sun, 28 Aug 2016 at 10:52 kurt Greaves  wrote:
>
>> Looking at the wiki for the read path (http://wiki.apache.org/
>> cassandra/ReadPathForUsers), in the bottom diagram for reading with a
>> read repair, it states the following when "reading from all replica nodes"
>> after there is a hash mismatch:
>>
>> If hashes do not match, do conflict resolution. First step is to read all
>>> data from all replica nodes excluding the fastest replica (since CL=ALL)
>>>
>>
>>  In the bottom left of the diagram it also states:
>>
>>> In this example:
>>>
>> RF>=2
>>>
>> CL=ALL
>>>
>>
>> The (since CL=ALL) implies that the CL for the read during the read
>> repair is based off the CL of the query. However I don't think that makes
>> sense at other CLs. Anyway, I just want to clarify what CL the read for the
>> read repair occurs at for cases where the overall query CL is not ALL.
>>
>> Thanks,
>> Kurt.
>>
>> --
>> Kurt Greaves
>> k...@instaclustr.com
>> www.instaclustr.com
>>
> --
> 
> Ben Slater
> Chief Product Officer
> Instaclustr: Cassandra + Spark - Managed | Consulting | Support
> +61 437 929 798
>


Re: Set up authentication on a live production cluster

2016-08-02 Thread Sam Tunnicliffe
>
> However, the actual keyspace (system_auth) and tables are not created
> until the last node is restarted with the parameters changed


Actually, this is not strictly true. On 2.2+ the tables in system_auth are
created up front, regardless of the auth config. Practically you can't go
about setting up your roles until all nodes in the cluster are on 2.2 or
higher, but if they are, then you can.

With open source Cassandra you cannot implement authentication without at
> least a brief degradation of service (as nodes can’t authenticate) and an
> outage (while the keyspace and tables are created, users are created, and
> permissions are granted).


This is also not 100% accurate. Using a modern driver, your clients can be
configured with credentials before the cluster requires them. Drivers will
not send those credentials unless the server they're connecting to asks for
them. So provided you modify your clients to begin sending credentials
before turning authentication on you can enable it without downtime.

The ugly part is that you need to enable auth on at least one node in order
to set up the roles in the system. This is obviously racy as clients may
start connecting to that node as soon as you've enabled authentication but
before you've added all necessary roles. However, if you can stop clients
connecting to that node (maybe via iptables), then you won't hit that
problem. Once all the necessary roles and credentials are configured, you
can re-enable client connections to it and enable authentication on the
rest of the cluster in a rolling fashion (i.e. stop node, modify yaml,
restart node) and you shouldn't encounter any downtime.

This is now covered in the latest docs, which are targeted at 3.x, but in
this case they apply equally to 2.2.
http://cassandra.apache.org/doc/latest/operating/security.html#authentication
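The driver behaviour described above — the client holds credentials but only sends them when the server asks — is what makes a zero-downtime rollout possible. A toy sketch of that handshake logic (plain Python, not real driver code; the function and return values are invented for illustration):

```python
def connect(server_requires_auth, client_credentials=None):
    """Toy model of the CQL startup handshake: the server decides whether
    to challenge, and the client answers only if challenged."""
    if not server_requires_auth:
        return "connected"                 # credentials never sent
    if client_credentials is None:
        return "rejected: authentication required"
    return "connected"                     # client responds to the challenge

creds = ("app_user", "app_password")
# Roll credentials out to clients first: they still work everywhere...
assert connect(False, creds) == "connected"   # auth not yet enabled
assert connect(True, creds) == "connected"    # auth enabled later
# ...whereas a client without credentials breaks once auth is turned on.
assert connect(True, None).startswith("rejected")
```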


On Tue, Aug 2, 2016 at 5:21 PM, Jai  wrote:

> I have done it in production without downtime on apache cassandra by
> manipulating the user creation using iptables on first node.
>
> Sent from my iPhone
>
> On Aug 2, 2016, at 9:11 PM, DuyHai Doan  wrote:
>
> Thank you Sean for the excellent and detailed explanation. A lot of people
> out there start their Cassandra in production without security and wake up
> some day, too late
>
> On Wed, Apr 13, 2016 at 10:54 PM,  wrote:
>
>> Do the clients already send the credentials? That is the first thing to
>> address.
>>
>>
>>
>> Setting up a cluster for authentication (and authorization) requires a
>> restart with the properties turned on in cassandra.yaml. However, the
>> actual keyspace (system_auth) and tables are not created until the last
>> node is restarted with the parameters changed. So, as you are changing each
>> node, what you get is individual nodes that are requiring a password, but
>> have no system_auth keyspace to authenticate against. Thus, clients cannot
>> connect to these nodes.
>>
>>
>>
>> With open source Cassandra you cannot implement authentication without at
>> least a brief degradation of service (as nodes can’t authenticate) and an
>> outage (while the keyspace and tables are created, users are created, and
>> permissions are granted). The outage can be relatively brief, depending on
>> cluster size, CL, speed to restart, etc.
>>
>>
>>
>> With DataStax Enterprise, there is a TransitionalAuthenticator (and
>> Authorizer) that lets you implement security without a full outage. You
>> basically switch to the Transitional classes so that system_auth gets
>> created. You create all your security objects. Then you switch to
>> PasswordAuthenticator and CassandraAuthorizer. It takes two rolling bounces
>> to get it done, but no outage.
>>
>>
>>
>> I have done both of the above. The DataStax stuff is very helpful, when
>> downtime is a concern. Perhaps you could write your own implementation of
>> the various interfaces to do something like TransitionalAuthenticator, but
>> we have seen that the security interfaces change, so you will probably
>> break/rewrite in later versions. (For one-time use, maybe it is worth a
>> shot?)
>>
>>
>>
>> For anyone setting up new clusters, just start with security turned on so
>> that you don’t end up in the It’s-Production-Can’t-Stop quandary above.
>>
>>
>>
>>
>>
>> Sean Durity
>>
>>
>>
>> *From:* Vigneshwaran [mailto:vigneshwaran2...@gmail.com]
>> *Sent:* Wednesday, April 13, 2016 3:36 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* Set up authentication on a live production cluster
>>
>>
>>
>> Hi,
>>
>>
>>
>> I have setup a 16 node cluster (8 per DC; C* 2.2.4) up and running in our
>> production setup. We use Datastax Java driver 2.1.8.
>>
>>
>>
>> I would like to set up Authentication and Authorization in the cluster
>> without breaking the live clients.
>>
>>
>>
>> From the references I found by googling, I can setup credentials for a
>> new cluster. But it is not clear to me what steps I should take for 

Re: Get clustering column in Custom cassandra trigger

2016-05-26 Thread Sam Tunnicliffe
If you just want the string representations, you can use
Unfiltered::clustering to get the Clustering instance for each Unfiltered,
then call its toString(CFMetadata), passing update.metadata().



On Thu, May 26, 2016 at 12:01 PM, Siddharth Verma <
verma.siddha...@snapdeal.com> wrote:

> Tried the following as well. Still no result.
>
> update.metadata().clusteringColumns().toString()  -> get clustering column
> names
> update.columns().toString()   -> gets no primary key
> colulmns
> update.partitionKey().toString()  -> gets token range
>
> Any help would be appreciated.
>
> Thanks
> Siddharth Verma
>


Re: Why simple replication strategy for system_auth ?

2016-05-13 Thread Sam Tunnicliffe
LocalStrategy means that data is not replicated in the usual way and
remains local to each node. Where it is used, replication is either not
required (for example in the case of secondary indexes and system.local) or
happens out of band via some other method (as in the case of schema, or
system.peers which is populated largely from gossip).

There are several components in Cassandra which generate or persist
"system" data for which a normal distribution makes sense. Auth data is
one, tracing, repair history and materialized view status are others. The
keyspaces for this data generally use SimpleStategy by default as it is
guaranteed to work out of the box, regardless of topology.  The intent of
the advice to configure system_auth with RF=N was to increase the
likelihood that any read of auth data would be done locally, avoiding
remote requests where possible. This is somewhat outdated though and not
really necessary. In fact, the 3.x docs actually suggest "3 to 5 nodes per
Data Center"[1]

FTR, you can't specify LocalStrategy in a CREATE or ALTER KEYSPACE, for
these reasons.

[1]
http://docs.datastax.com/en/cassandra/3.x/cassandra/configuration/secureConfigNativeAuth.htm
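Following the 3-to-5-replicas-per-DC guidance, the usual adjustment is to switch system_auth to NetworkTopologyStrategy. A small helper that builds the CQL statement (plain Python; the DC names are placeholders for your own datacenter names, and a repair of system_auth is still needed after altering it):

```python
def alter_system_auth(dc_rf):
    """Build an ALTER KEYSPACE statement from a {dc_name: rf} mapping."""
    opts = ", ".join(f"'{dc}': {rf}" for dc, rf in sorted(dc_rf.items()))
    return ("ALTER KEYSPACE system_auth WITH replication = "
            f"{{'class': 'NetworkTopologyStrategy', {opts}}};")

# e.g. three auth replicas in each of two datacenters:
print(alter_system_auth({"DC1": 3, "DC2": 3}))
```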


On Fri, May 13, 2016 at 10:47 AM, Jérôme Mainaud  wrote:

> Hello,
>
> Is there any good reason why system_auth strategy is SimpleStrategy by
> default instead of LocalStrategy like system and system_schema ?
>
> Especially when documentation advice to set the replication factor to the
> number of nodes in the cluster, which is both weird and inconvenient to
> follow.
>
> Do you think that changing the strategy to LocalStrategy would work or
> have undesirable side effects ?
>
> Thank you.
>
> --
> Jérôme Mainaud
> jer...@mainaud.com
>


Re: Optional TLS CQL Encryption

2016-04-20 Thread Sam Tunnicliffe
From 3.0, separate ports can be configured for encrypted & non-encrypted
connections.
See https://issues.apache.org/jira/browse/CASSANDRA-9590

On Wed, Apr 20, 2016 at 8:51 AM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:

> Hi Ben,
>
> Thanks for confirming what I saw occur. The Datastax drivers don't play
> very nicely with Twisted Python so connection pooling is inconsistent and
> makes always-on TLS a no-go performance-wise. The encryption overhead isn't
> the problem, it's the build-up of the TLS session for every connection when
> connection pooling is not working as needed. That said it is still
> beneficial to be able to enforce TLS for remote access...MySQL allows you
> to enforce TLS on a per-user basis for example.
>
> If someone has been successful not wrapping the Datastax drivers in
> deferToThread calls when using Twisted I'd appreciate insight on how you
> got that working because its pretty much undocumented.
>
> -J
>
> On Tue, Apr 19, 2016 at 11:46 PM, Ben Bromhead 
> wrote:
>
>> Hi Jason
>>
>> If you enable encryption it will be always on. Optional encryption is
>> generally a bad idea (tm). Also, creating a new session for every query
>> is a bad idea (tm), even without the minimal overhead of encryption.
>>
>> If you are really hell bent on doing this you could have a node that is
>> part of the cluster but has -Dcassandra.join_ring=false set in jvm
>> options in cassandra-env.sh so it does not get any data and configure
>> that to have no encryption enabled. This is known as a fat client. Then
>> connect to that specific node whenever you want to do terrible non
>> encrypted things.
>>
>> Having said all that, please don't do this.
>>
>> Cheers
>>
>> On Tue, 19 Apr 2016 at 15:32 Jason J. W. Williams <
>> jasonjwwilli...@gmail.com> wrote:
>>
>>> Hey Guys,
>>>
>>> Is there a way to make TLS encryption optional for the CQL listener?
>>> We'd like to be able to use for remote management connections but not for
>>> same datacenter usage (since the build/up  tear down cost is too high for
>>> things that don't use pools).
>>>
>>> Right now it appears if we enable encryption it requires it for all
>>> connections, which definitely is not what we want.
>>>
>>> -J
>>>
>> --
>> Ben Bromhead
>> CTO | Instaclustr 
>> +1 650 284 9692
>> Managed Cassandra / Spark on AWS, Azure and Softlayer
>>
>
>


Re: secondary index queries with thrift in cassandra 3.x supported ?

2016-04-07 Thread Sam Tunnicliffe
That certainly looks like a bug, would you mind opening a ticket at
https://issues.apache.org/jira/browse/CASSANDRA please?

Thanks,
Sam

On Thu, Apr 7, 2016 at 2:19 PM, Ivan Georgiev  wrote:

> Hi, are secondary index queries with thrift supported in Cassandra 3.x ?
> Asking as I am not able to get them working.
>
> I am doing a get_range_slices call with row_filter set in the KeyRange
> property, but I am getting an exception in the server with the following
> trace:
>
>
>
> INFO   | jvm 1| 2016/04/07 14:56:35 | 14:56:35.403 [Thrift:16] DEBUG
> o.a.cassandra.service.ReadCallback - Failed; received 0 of 1 responses
>
> INFO   | jvm 1| 2016/04/07 14:56:35 | 14:56:35.404
> [SharedPool-Worker-1] WARN  o.a.c.c.AbstractLocalAwareExecutorService -
> Uncaught exception on thread Thread[SharedPool-Worker-1,5,main]: {}
>
> INFO   | jvm 1| 2016/04/07 14:56:35 | java.lang.RuntimeException:
> java.lang.NullPointerException
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2450)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> ~[na:1.8.0_72]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
> [apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 | Caused by:
> java.lang.NullPointerException: null
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.index.internal.keys.KeysSearcher.filterIfStale(KeysSearcher.java:155)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.index.internal.keys.KeysSearcher.access$300(KeysSearcher.java:36)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.index.internal.keys.KeysSearcher$1.prepareNext(KeysSearcher.java:104)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.index.internal.keys.KeysSearcher$1.hasNext(KeysSearcher.java:70)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:127)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1792)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   at
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2446)
> ~[apache-cassandra-3.0.4.jar:3.0.4]
>
> INFO   | jvm 1| 2016/04/07 14:56:35 |   ... 4 common
> frames omitted
>
>
>
> Are we still able to do thrift secondary index queries? Using Cassandra
> 3.0.4. Same call works fine with Cassandra 2.2.5.
>
>
>
> Regards:
>
> Ivan
>


Re: Logging

2016-01-25 Thread Sam Tunnicliffe
Paulo is correct in saying that C* doesn't have a direct equivalent of
SecurityContextHolder. Authenticated principal info is retrievable from the
QueryState during query execution but a) this isn't available to every
method in the call chain and b) its scope is limited to the coordinator for
the request. That is, it isn't serialized and included in the read/mutation
messages which the coordinator distributes to the replicas. So you could
produce a level of audit trail by providing a custom QueryHandler (See
CASSANDRA-6659) that logs each statement along with the principal. But if
the goal is indeed that "every log message in file should start with
username of the user, who initiated this action", it isn't really
feasible right now.

On Mon, Jan 25, 2016 at 3:52 PM, Paulo Motta 
wrote:

> That would work, but afaik Cassandra doesn't have an equivalent of
> RequestContextHolder/SecurityContextHolder that is able to retrieve the
> user/session of a given thread/request (maybe I'm wrong as I'm no auth
> expert), so if these don't exist we'd need to add equivalent to those or do
> it via MDC (set the context when request arrives, propagate to down stream
> threads, cleanup), which can become quite messy as shown in CASSANDRA-7276.
>
> For CQL statements perhaps the query tracing infrastructure could be
> reused to provide that info, but that would require further investigation.
> See CASSANDRA-1123 for more details on that.
>
> 2016-01-25 12:30 GMT-03:00 oleg yusim :
>
>> Paulo,
>>
>> Ideally - all the actions (security purposes, preserving completness of
>> the audit trail). How about this approach:
>> http://www.codelord.net/2010/08/27/logging-with-a-context-users-in-logback-and-spring-security/
>>  ?
>> Would that work? Or you would rather suggest to go MDC way?
>>
>> Thanks,
>>
>> Oleg
>>
>> On Mon, Jan 25, 2016 at 9:23 AM, Paulo Motta 
>> wrote:
>>
>>> What kind of actions? nodetool/system actions or cql statements?
>>>
>>> You could probably achieve identity-based logging with logback Mapped
>>> Diagnostic Context (MDC - logback.qos.ch/manual/mdc.html), but you'd
>>> need to patch your own Cassandra jars in many locations to provide that
>>> information to the logging context, so not exactly a trivial thing to do.
>>> We tried using that to print ks/cf names on log messages but it became a
>>> bit messy due to the SEDA architecture as you need to patch executors to
>>> inherit identifiers from parent threads and cleanup afterwards. See
>>> CASSANDRA-7276 for more background.
>>>
>>> 2016-01-25 12:09 GMT-03:00 oleg yusim :
>>>
 I want to try to re-phrase my question here... what I'm trying to
 achieve is identity-based logging. I.e. every log message in file should
 start with username of the user, who initiated this action. Would that be
 possible to achieve? If so, can you give me a brief example?

 Thanks,

 Oleg

 On Thu, Jan 21, 2016 at 2:57 PM, oleg yusim 
 wrote:

> Joel,
>
> Thanks for reference. What I'm trying to achieve, is to add the name
> of the user, who initiated logged action. I tried c{5}, but what I see is
> that;
>
> TRACE [GossipTasks:1] c{5} 2016-01-21 20:51:17,619 Gossiper.java:700 -
> Performing status check ...
>
> I think, I'm missing something here. Any suggestions?
>
> Thanks,
>
> Oleg
>
>
>
> On Thu, Jan 21, 2016 at 1:30 PM, Joel Knighton <
> joel.knigh...@datastax.com> wrote:
>
>> Cassandra uses logback as its backend for logging.
>>
>> You can find information about configuring logging in Cassandra by
>> searching for "Configuring logging" on docs.datastax.com and
>> selecting the documentation for your version.
>>
>> The documentation for PatternLayouts (the pattern string about which
>> you're asking) in logback is available in the logback manual under the
>> section for Conversion Words
>> http://logback.qos.ch/manual/layouts.html#conversionWord
>>
>>
>> On Thu, Jan 21, 2016 at 1:21 PM, oleg yusim 
>> wrote:
>>
>>> Greetings,
>>>
>>> Guys, can you, please, point me to documentation on how to configure
>>> format of logs? I want make it clear, I'm talking about formatting i.e.
>>> this:
>>>
>>> %-5level %date{HH:mm:ss,SSS} %msg%n
>>>
>>> What if I want to add another parameters into this string? Is there
>>> a list of available parameters here and syntax?
>>>
>>> Thanks,
>>>
>>> Oleg
>>>
>>>
>>
>>
>> --
>>
>> 
>>
>> Joel Knighton
>> Cassandra Developer | joel.knigh...@datastax.com
>>
>> 
>>  
>> 
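The pattern string and the MDC approach discussed above combine naturally. A hedged logback.xml sketch — the `user` MDC key is an assumption, i.e. whatever key the patched code would set via `MDC.put("user", userName)`:

```xml
<!-- conf/logback.xml fragment: %X{user} prints the value stored in the
     Mapped Diagnostic Context under the (hypothetical) "user" key -->
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>system.log</file>
  <encoder>
    <pattern>%-5level [%thread] %X{user} %date{HH:mm:ss,SSS} %msg%n</pattern>
  </encoder>
</appender>
```

As the thread notes, the hard part is not the pattern but propagating the MDC value across Cassandra's SEDA executors, which requires patching the server.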

Re: unable to create a user on version 2.2.4

2016-01-02 Thread Sam Tunnicliffe
If you've upgraded to 2.2.4, the full instructions necessary for
auth-enabled clusters were unfortunately missing from NEWS.txt. See
CASSANDRA-10904 for details.
On 2 Jan 2016 10:05, "david"  wrote:

> we are running cassandra version 2.2.4 on Debian  jessie (latest stable) .
> when i attempt to create user, it doesn't work when i type the following
> 'create user alice with password 'bob' superuser;'
> cqlsh returns fine without any error
>
> however 'list users' does not show the newly created user
> what could be the issue? pls. advise. thanks in advance.
>
>


Re: [Marketing Mail] Re: [Marketing Mail] can't make any permissions change in 2.2.4

2015-12-19 Thread Sam Tunnicliffe
Sorry about the confusing omission of the upgrade instructions from
NEWS.txt, the oversight there was mine in the course of CASSANDRA-7653.
Dropping those tables is absolutely what you need to do in order to trigger
C* to switch over to using the new role-based tables following an upgrade
to 2.2+.

I've updated the DataStax blog post Reynald referred to earlier in the
thread and I'll commit an update to NEWS.txt with these instructions on
Monday.

Thanks,
Sam
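The switchover described above amounts to dropping the pre-2.2 legacy auth tables. A hedged sketch — take a snapshot of system_auth first; the table names are the ones listed later in the thread:

```sql
-- once every node is running 2.2+, drop the legacy tables so C*
-- switches over to the new role-based system_auth tables:
DROP TABLE system_auth.credentials;
DROP TABLE system_auth.users;
DROP TABLE system_auth.permissions;
```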

On Sat, Dec 19, 2015 at 5:00 PM, Kai Wang  wrote:

> Some update. I went through this blog:
>
> https://www.instaclustr.com/5-things-you-need-to-know-about-cassandra-2-2/
>
> and deleted these three tables (with fingers crossed):
>
> system_auth.credentials
> system_auth.users
> system_auth.permissions
>
> Now I seem to be able to use RBAC to modify permissions.
>
> On Fri, Dec 18, 2015 at 9:23 AM, Kai Wang  wrote:
>
>> Sylvain,
>>
>> Thank you very much.
>>
>> On Fri, Dec 18, 2015 at 9:20 AM, Sylvain Lebresne 
>> wrote:
>>
>>> On Fri, Dec 18, 2015 at 3:04 PM, Kai Wang  wrote:
>>>
 Reynald,

 Thanks for link. That explains it.

 Sylvain,

 What exactly are the "legacy tables" I am supposed to drop? Before I
 drop them, is there any way I can confirm the old schema has been converted
 to the new one successfully?

>>>
>>> I didn't worked on those changes so I'm actually not sure of the exact
>>> answer. But I see you commented on the ticket so we'll make sure to include
>>> that information in the NEWS file (and maybe to get the blog post edited).
>>>
>>>

 Thanks.


 On Fri, Dec 18, 2015 at 5:05 AM, Reynald Bourtembourg <
 reynald.bourtembo...@esrf.fr> wrote:

> Done:
> https://issues.apache.org/jira/browse/CASSANDRA-10904
>
>
>
> On 18/12/2015 10:51, Sylvain Lebresne wrote:
>
> On Fri, Dec 18, 2015 at 8:55 AM, Reynald Bourtembourg <
> reynald.bourtembo...@esrf.fr> wrote:
>
>> This does not seem to be explained in the Cassandra 2.2 Upgrading
>> section of the NEWS.txt file:
>>
>>
>> https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-2.2.4
>>
>>
> This is indeed an oversight. Would you mind opening a JIRA ticket so
> we don't forget to add it now?
>
> --
> Sylvain
>
>
>

>>>
>>
>


Re: AssertionError on PasswordAuthenticator

2015-07-27 Thread Sam Tunnicliffe
I don't know Usergrid at all, but the error can be reproduced with cqlsh by
supplying an empty string for the username, e.g. bin/cqlsh -u '' -p cassandra.
So I guess something in Usergrid is not setting it correctly.

On Sat, Jul 25, 2015 at 5:22 PM, Andreas Schlüter 
dr.andreas.schlue...@gmx.de wrote:

 Hi,



 I am starting to setup Usergrid on Cassandra, but I run into an issue that
 I debugged into and which does not seem to be related to Usergrid or my
 setup, since I run into an AssertionError (which should  never happen
 according to the comment in the Cassandra Code, and which I don’t see how
 to avoid from the client side).

 When I do a Login from Usergrid via thrift, I get the following
 stacktrace: (Cassandra version is 2.0.8; I used the standard
 username/password cassandra/cassandra to exclude errors here)



 ERROR [Thrift:16] 2015-07-25 15:02:32,480 CassandraDaemon.java (line 199)
 Exception in thread Thread[Thrift:16,5,

 main]

 java.lang.AssertionError:
 org.apache.cassandra.exceptions.InvalidRequestException: Key may not be
 empty

 at
 org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:117)

 at
 org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)

 at
 org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)

 at
 org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)

 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)

 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)

 at
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201

 )

 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

 at java.lang.Thread.run(Thread.java:745)

 Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Key
 may not be empty

 at
 org.apache.cassandra.cql3.QueryProcessor.validateKey(QueryProcessor.java:120)

 at
 org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:344)

 at
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:206)

 at
 org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:110)



 Any ideas what might be wrong or which prerequisites need to be met? This
 is the first request for a connection.



 Help would be greatly appreciated, I tried everything I could come up with
 supported by Google…



 Thanks in advance,

 Andreas



Re: Datastax Java Driver vs Cassandra 2.1.7

2015-06-23 Thread Sam Tunnicliffe
Although amending the query is a workaround for this (and duplicating the
columns in the selection is not something I imagine one would deliberately
do), this is still an ugly regression, so I've opened
https://issues.apache.org/jira/browse/CASSANDRA-9636 to fix it.

Thanks,
Sam

On Tue, Jun 23, 2015 at 1:52 PM, Jean Tremblay 
jean.tremb...@zen-innovations.com wrote:

  Hi Sam,

  You have a real good gut feeling.
 I went back to the query that I have used for many months… which was working…
 but obviously there is something wrong with it.
 The problem with it was simply that I placed the same field twice in the
 select. I corrected it in my code and now I don’t have the error with 2.1.7.

  This provoked the error on the nodes:

ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 -
 Unexpected exception during request; channel = [id: 0x5e809aa1, /
 192.168.2.8:49581 => /192.168.2.201:9042]
 java.lang.AssertionError: null
 at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
 ~[apache-cassandra-2.1.7.jar:2.1.7]


 I can also reproduce the error on cqlsh:

  cqlsh> select c1, p1, mm, c2, iq, iq from ds.t1 where type='D' and
 c1=1 and mm>=201401 and mm<=201402 and p1='01';
  ServerError: ErrorMessage code= [Server error]
 message=java.lang.AssertionError
  cqlsh> select c1, p1, mm, c2, iq from ds.t1 where type='D' and c1=1
 and mm>=201401 and mm<=201402 and p1='01';

  c1 | p1 | mm     | c2 | iq
 ----+----+--------+----+----------------
   1 | 01 | 201401 |  1 | {'XX': 97160}
  …

  Conclusion… my mistake. Sorry.


   On 23 Jun 2015, at 13:06 , Sam Tunnicliffe s...@beobal.com wrote:

  Can you share the query that you're executing when you see the error and
 the schema of the target table? It could be something related to
 CASSANDRA-9532.

 On Tue, Jun 23, 2015 at 10:05 AM, Jean Tremblay 
 jean.tremb...@zen-innovations.com wrote:

 Hi,

  I’m using Datastax Java Driver V 2.1.6
 I migrated my cluster to Cassandra V2.1.7
 And now I have an error on my client that goes like:

  2015-06-23 10:49:11.914  WARN 20955 --- [ I/O worker #14]
 com.datastax.driver.core.RequestHandler  : /192.168.2.201:9042 replied
 with server error (java.lang.AssertionError), trying next host.

  And on the node I have an NPE

  ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 -
 Unexpected exception during request; channel = [id: 0x5e809aa1, /
 192.168.2.8:49581 => /192.168.2.201:9042]
 java.lang.AssertionError: null
 at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1289)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1223)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.7.jar:2.1.7]
 at
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 [na:1.8.0_45]
 at
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run

Re: Datastax Java Driver vs Cassandra 2.1.7

2015-06-23 Thread Sam Tunnicliffe
Can you share the query that you're executing when you see the error and
the schema of the target table? It could be something related to
CASSANDRA-9532.

On Tue, Jun 23, 2015 at 10:05 AM, Jean Tremblay 
jean.tremb...@zen-innovations.com wrote:

  Hi,

  I’m using Datastax Java Driver V 2.1.6
 I migrated my cluster to Cassandra V2.1.7
 And now I have an error on my client that goes like:

  2015-06-23 10:49:11.914  WARN 20955 --- [ I/O worker #14]
 com.datastax.driver.core.RequestHandler  : /192.168.2.201:9042 replied
 with server error (java.lang.AssertionError), trying next host.

  And on the node I have an NPE

  ERROR [SharedPool-Worker-1] 2015-06-23 10:56:01,186 Message.java:538 -
 Unexpected exception during request; channel = [id: 0x5e809aa1, /
 192.168.2.8:49581 => /192.168.2.201:9042]
 java.lang.AssertionError: null
 at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.newRow(Selection.java:347)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1289)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1223)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.7.jar:2.1.7]
 at
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.7.jar:2.1.7]
 at
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 [na:1.8.0_45]
 at
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.7.jar:2.1.7]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
 [apache-cassandra-2.1.7.jar:2.1.7]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]

  Is there a known problem on Cassandra 2.1.7?

  Thanks for your comments.

  Jean



Re: Turning on internal security with no downtime

2015-03-03 Thread Sam Tunnicliffe
If you're able to configure your clients so that they don't send requests
to 1 node in the cluster, you can enable PasswordAuthenticator and
CassandraAuthorizer on that node only and use cqlsh to set up all your users
and permissions. The rest of the cluster will continue to serve client
requests as normal. Once you've finished configuring, alter the RF on
system_auth then run repair on the rest of the nodes (just for the
system_auth ks). Finally, do a rolling restart to enable auth on the nodes
that don't yet have it.
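A hedged CQL sketch of the setup run against that single auth-enabled node. The user, password, keyspace, and DC names are hypothetical; on 1.2/2.x these are USER statements rather than the later ROLE statements:

```sql
-- cqlsh <auth-node> -u cassandra -p cassandra
CREATE USER app_user WITH PASSWORD 'secret' NOSUPERUSER;
GRANT SELECT ON KEYSPACE app_ks TO app_user;

-- replicate the auth data to every node before enabling auth cluster-wide:
ALTER KEYSPACE system_auth
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
```

After the ALTER, run repair on system_auth on each remaining node, then do the rolling restart described above.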

On 25 February 2015 at 22:03, sean_r_dur...@homedepot.com wrote:

  Cassandra 1.2.19



 We would like to turn on Cassandra’s internal security
 (PasswordAuthenticator and CassandraAuthorizer) on the ring (away from
 AllowAll). (Clients are already passing credentials in their connections.)
 However, I know all nodes have to be switched to those before the basic
 security objects (system_auth) are created. So, an outage would be required
 to change all the nodes, let system_auth get created, alter system_auth for
 replication strategy, create all the users/permissions, repair system_auth.



 For DataStax, there is a TransitionalAuthorizer that allows the
 system_auth to get created, but doesn’t really require passwords. So, with
 a double, rolling bounce, you can implement security with no downtime.
 Anything like that for open source? Any other ways you have activated
 security without downtime?







 Sean R. Durity





 --

 The information in this Internet Email is confidential and may be legally
 privileged. It is intended solely for the addressee. Access to this Email
 by anyone else is unauthorized. If you are not the intended recipient, any
 disclosure, copying, distribution or any action taken or omitted to be
 taken in reliance on it, is prohibited and may be unlawful. When addressed
 to our clients any opinions or advice contained in this Email are subject
 to the terms and conditions expressed in any applicable governing The Home
 Depot terms of business or client engagement letter. The Home Depot
 disclaims all responsibility and liability for the accuracy and content of
 this attachment and for any damages or losses arising from any
 inaccuracies, errors, viruses, e.g., worms, trojan horses, etc., or other
 items of a destructive nature, which may be contained in this attachment
 and shall not be liable for direct, indirect, consequential or special
 damages in connection with this e-mail message or its attachment.



Re: Can not connect with cqlsh to something different than localhost

2014-12-08 Thread Sam Tunnicliffe
rpc_address (or rpc_interface) is used for client connections,
listen_address is for inter-node communication.



On 8 December 2014 at 19:21, Richard Snowden richard.t.snow...@gmail.com
wrote:

 $ netstat -ntl | grep 9042
 tcp6   0  0   127.0.0.1:9042  :::*
 LISTEN

 (listen_address not set in cassandra.yaml)

 Even with listen_address: 192.168.111.136 I get:
 $ netstat -ntl | grep 9042
 tcp6   0  0   127.0.0.1:9042  :::*
 LISTEN


 All I want to do is to access Cassandra from outside my VM. Is this really
 that hard?



 On Mon, Dec 8, 2014 at 7:30 PM, Michael Dykman mdyk...@gmail.com wrote:

 The difference is what interface your service is listening on. What is
 the output of

 $ netstat -ntl | grep 9042


 On Mon, 8 Dec 2014 07:21 Richard Snowden richard.t.snow...@gmail.com
 wrote:

 I left listen_address blank - still I can't connect (connection refused).

 cqlsh - OK
 cqlsh ubuntu - fail (ubuntu is my hostname)
 cqlsh 192.168.111.136 - fail

 telnet 192.168.111.136 9042 from outside the VM gives me a connection
 refused.

 I just started a Tomcat in my VM and did a telnet 192.168.111.136 8080
 from outside the VM - and got the expected result (Connected to
 192.168.111.136. Escape character is '^]').

 So what's so special in Cassandra?


 On Mon, Dec 8, 2014 at 12:18 PM, Jonathan Haddad j...@jonhaddad.com
 wrote:

 Listen address needs the actual address, not the interface.  This is
 best accomplished by setting up proper hostnames for each machine (through
 DNS or hosts file) and leaving listen_address blank, as it will pick the
 external ip.  Otherwise, you'll need to set the listen address to the IP of
 the machine you want on each machine.  I find the former to be less of a
 pain to manage.


 On Mon Dec 08 2014 at 2:49:55 AM Richard Snowden 
 richard.t.snow...@gmail.com wrote:

 This did not work either. I changed /etc/cassandra.yaml and restarted 
 Cassandra (I even restarted the machine to make 100% sure).

 What I tried:

 1) listen_address: localhost
- connection OK (but of course I can't connect from outside the VM to 
 localhost)

 2) Set listen_interface: eth0
- connection refused

 3) Set listen_address: 192.168.111.136
- connection refused


 What to do?


  Try:
  $ netstat -lnt
  and see which interface port 9042 is listening on. You will likely need 
  to
  update cassandra.yaml to change the interface. By default, Cassandra is
  listening on localhost so your local cqlsh session works.

  On Sun, 7 Dec 2014 23:44 Richard Snowden richard.t.snow...@gmail.com
  wrote:

   I am running Cassandra 2.1.2 in an Ubuntu VM.
  
   cqlsh or cqlsh localhost works fine.
  
   But I can not connect from outside the VM (firewall, etc. disabled).
  
   Even when I do cqlsh 192.168.111.136 in my VM I get connection 
   refused.
   This is strange because when I check my network config I can see that
   192.168.111.136 is my IP:
  
   root@ubuntu:~# ifconfig
  
   eth0  Link encap:Ethernet  HWaddr 00:0c:29:02:e0:de
 inet addr:192.168.111.136  Bcast:192.168.111.255
   Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:fe02:e0de/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:16042 errors:0 dropped:0 overruns:0 frame:0
 TX packets:8638 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:21307125 (21.3 MB)  TX bytes:709471 (709.4 KB)
  
   loLink encap:Local Loopback
 inet addr:127.0.0.1  Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK RUNNING  MTU:65536  Metric:1
 RX packets:550 errors:0 dropped:0 overruns:0 frame:0
 TX packets:550 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:148053 (148.0 KB)  TX bytes:148053 (148.0 KB)
  
  
   root@ubuntu:~# cqlsh 192.168.111.136 9042
   Connection error: ('Unable to connect to any servers', 
   {'192.168.111.136':
   error(111, Tried connecting to [('192.168.111.136', 9042)]. Last 
   error:
   Connection refused)})
  
  
   What to do?
  






Re: Secondary index read/write explanation

2012-09-07 Thread Sam Tunnicliffe
On 7 September 2012 00:42, aaron morton aa...@thelastpickle.com wrote:
 1.  When a write request is received, it is written to the base CF and
 secondary index to secondary (hidden) CF. If this right, will the secondary
 index be written local the node or will it follow RP/OPP to write to nodes.

 it's local.
 If an index is to be updated, the previous column values must be read from
 the primary CF so they can be deleted from the secondary index CF before
 inserting the new values.

https://issues.apache.org/jira/browse/CASSANDRA-2897 (in trunk)
removes that read of the previously indexed values from the update
path.
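For reference, a minimal CQL illustration of the locally-maintained index being discussed (table and column names are hypothetical):

```sql
CREATE TABLE users (id uuid PRIMARY KEY, x text);
CREATE INDEX ON users (x);

-- each replica answers from its own local index; with no partition-key
-- restriction the coordinator must fan out across token ranges:
SELECT * FROM users WHERE x = 'y';
```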


 2.  When a coordinator receives a read request with say predicate x=y where
 column x is the secondary index, how does the coordinator query relevant
 node(s)? How does it avoid sending it to all nodes if it is locally indexed?

 When you ask for x=y the coordinator has no idea where the rows for that query
 exist in the cluster. If you ask at CL ONE it only does a local read. If you
 ask at a higher CL it asks CL nodes for each TokenRange in the cluster. Or
 for a restricted token range if you have a key restriction in the query.

 If there is any article/blog that can help understand this better, please
 let me know.

 I think this is still mostly relevant
 http://www.datastax.com/docs/0.7/data_model/secondary_indexes

 Cheers

 -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 6/09/2012, at 5:32 PM, Venkat Rama venkata.s.r...@gmail.com wrote:

 Hi All,

 I am a newbie to Cassandra and trying to understand how secondary indexes
 work.  I have been going over the discussion on
 https://issues.apache.org/jira/browse/CASSANDRA-749 about local secondary
 indexes. And interesting question on
 http://www.mail-archive.com/user@cassandra.apache.org/msg16966.html.  The
 discussion seems to assume that most common uses cases are ones with range
 queries.  Is this right?

 I am trying to understand the low cardinality reasoning and how the read
 gets executed.  I have the following questions, hoping I can explain my question
 well :)

 1.  When a write request is received, it is written to the base CF and
 secondary index to secondary (hidden) CF. If this right, will the secondary
 index be written local the node or will it follow RP/OPP to write to nodes.
 2.  When a coordinator receives a read request with say predicate x=y where
 column x is the secondary index, how does the coordinator query relevant
 node(s)? How does it avoid sending it to all nodes if it is locally indexed?

 If there is any article/blog that can help understand this better, please
 let me know.

 Thanks again in advance.

 VR




Re: [RELEASE] Apache Cassandra 1.0.10 released

2012-05-08 Thread Sam Tunnicliffe
Hi Jonas,

the bug that was fixed in 4116 meant that the max timestamp recorded
for an sstable didn't consider any tombstones from row deletions. This
meant that from some queries, some sstables were not being read when
they should have been. I couldn't say categorically that this would
cause the deleted data to reappear in read results, but I can see how
it could do.

Cheers,
Sam

On 8 May 2012 10:15, Jonas Borgström jo...@borgstrom.se wrote:
 Hi,

 Can someone give some more details about the CASSANDRA-4116 bug fixed in
 this release? Could this cause resurrection of deleted data for example?

 https://issues.apache.org/jira/browse/CASSANDRA-4116

 / Jonas


 On 2012-05-08 11:04 , Sylvain Lebresne wrote:
 The Cassandra team is pleased to announce the release of Apache Cassandra
 version 1.0.10.

 Cassandra is a highly scalable second-generation distributed database,
 bringing together Dynamo's fully distributed design and Bigtable's
 ColumnFamily-based data model. You can read more here:

  http://cassandra.apache.org/

 Downloads of source and binary distributions are listed in our download
 section:

  http://cassandra.apache.org/download/

 This version is a maintenance/bug fix release[1]. As always, please pay
 attention to the release notes[2] and let us know[3] if you were to encounter
 any problem.

 Have fun!

 [1]: http://goo.gl/u8gIO (CHANGES.txt)
 [2]: http://goo.gl/mAHbY (NEWS.txt)
 [3]: https://issues.apache.org/jira/browse/CASSANDRA