Re: ssl certificate hot reloading test - cassandra 4.1

2024-04-15 Thread pabbireddy avinash
Thanks, Andy, for your reply. We will test the scenario you mentioned.
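
For our own runbook, the rolling re-establishment of client connections would
look roughly like this (host names are placeholders, and the sleep is only a
guess at how long to let drivers move traffic off a node):

    # Bounce the native transport node by node, after the new
    # keystore/truststore has already been rolled out everywhere, so that
    # client connections are re-established against the reloaded stores.
    for host in node1 node2 node3; do        # placeholder host names
        ssh "$host" nodetool disablebinary   # stop the native transport on this node
        sleep 30                             # let client drivers fail over to other nodes
        ssh "$host" nodetool enablebinary    # new client connections use the reloaded stores
    done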

Regards
Avinash

On Mon, Apr 15, 2024 at 11:28 AM, Tolbert, Andy  wrote:

> Hi Avinash,
>
> As far as I understand it, if the underlying keystore/truststore(s)
> Cassandra is configured with are updated, this *will not* provoke
> Cassandra to interrupt existing connections; the new stores will simply
> be used for future TLS initialization.
>
> Via:
> https://cassandra.apache.org/doc/4.1/cassandra/operating/security.html#ssl-certificate-hot-reloading
>
> > When the files are updated, Cassandra will reload them and use them for
> subsequent connections
>
> I suppose one could do a rolling disablebinary/enablebinary (if it's
> only client connections) after you roll out a keystore/truststore
> change as a way of forcing the existing connections to re-establish.
>
> Thanks,
> Andy
>
>
> On Mon, Apr 15, 2024 at 11:11 AM pabbireddy avinash
>  wrote:
> >
> > Dear Community,
> >
> > I hope this email finds you well. I am currently testing SSL certificate
> hot reloading on a Cassandra cluster running version 4.1 and have
> encountered a situation that requires your expertise.
> >
> > Here's a summary of the process and issue:
> >
> > Reloading Process: We reloaded certificates signed by our in-house
> certificate authority into the cluster, which was initially running with
> self-signed certificates. The reload was done node by node.
> >
> > Truststore and Keystore: The truststore and keystore passwords are the
> same across the cluster.
> >
> > Unexpected Behavior: Despite the different truststore configurations for
> the self-signed and new CA certificates, we observed no breakdown in
> server-to-server communication during the reload. We did not upload the new
> CA cert into the old truststore. We anticipated interruptions due to the
> differing truststore configurations but did not encounter any.
> >
> > Post-Reload Changes: After reloading, we updated the cqlshrc file with
> the new CA certificate and key to connect to cqlsh.
> >
> > server_encryption_options:
> >   internode_encryption: all
> >   keystore: ~/conf/server-keystore.jks
> >   keystore_password: 
> >   truststore: ~/conf/server-truststore.jks
> >   truststore_password: 
> >   protocol: TLS
> >   algorithm: SunX509
> >   store_type: JKS
> >   cipher_suites: [TLS_RSA_WITH_AES_256_CBC_SHA]
> >   require_client_auth: true
> >
> > client_encryption_options:
> >   enabled: true
> >   keystore: ~/conf/server-keystore.jks
> >   keystore_password: 
> >   require_client_auth: true
> >   truststore: ~/conf/server-truststore.jks
> >   truststore_password: 
> >   protocol: TLS
> >   algorithm: SunX509
> >   store_type: JKS
> >   cipher_suites: [TLS_RSA_WITH_AES_256_CBC_SHA]
> >
> > Given this situation, I have the following questions:
> >
> > Could there be a reason for the continuity of server-to-server
> communication despite the different truststores?
> > Is there a possibility that the old truststore remains cached even after
> reloading the certificates on a node?
> > Have others encountered similar issues, and if so, what were your
> solutions?
> >
> > Any insights or suggestions would be greatly appreciated. Please let me
> know if further information is needed.
> >
> > Thank you
> >
> > Best regards,
> >
> > Avinash
>


ssl certificate hot reloading test - cassandra 4.1

2024-04-15 Thread pabbireddy avinash
Dear Community,

I hope this email finds you well. I am currently testing SSL certificate
hot reloading on a Cassandra cluster running version 4.1 and have
encountered a situation that requires your expertise.

Here's a summary of the process and issue:

   1. Reloading Process: We reloaded certificates signed by our in-house
   certificate authority into the cluster, which was initially running with
   self-signed certificates. The reload was done node by node.

   2. Truststore and Keystore: The truststore and keystore passwords are the
   same across the cluster.

   3. Unexpected Behavior: Despite the different truststore configurations
   for the self-signed and new CA certificates, we observed no breakdown in
   server-to-server communication during the reload. We did not upload the
   new CA cert into the old truststore. We anticipated interruptions due to
   the differing truststore configurations but did not encounter any.

   4. Post-Reload Changes: After reloading, we updated the cqlshrc file with
   the new CA certificate and key to connect to cqlsh; a sketch of that
   change is below, followed by the node-side encryption settings.
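
The cqlshrc change was roughly along these lines (all paths and host names
below are placeholders; the [ssl] options assume the standard cqlsh SSL
settings, and usercert/userkey are only needed because require_client_auth
is true):

    # Append a hypothetical [ssl] section to cqlshrc and connect with --ssl.
    cat >> ~/.cassandra/cqlshrc <<'EOF'
    [ssl]
    ; CA certificate (PEM) used to validate the certificate the node presents
    certfile = /path/to/rootca.pem
    validate = true
    ; client certificate/key, needed because require_client_auth is enabled
    usercert = /path/to/client-cert.pem
    userkey = /path/to/client-key.pem
    EOF

    cqlsh --ssl node1    # placeholder host

The server and client encryption options configured on the nodes are: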

server_encryption_options:
  internode_encryption: all
  keystore: ~/conf/server-keystore.jks
  keystore_password: 
  truststore: ~/conf/server-truststore.jks
  truststore_password: 
  protocol: TLS
  algorithm: SunX509
  store_type: JKS
  cipher_suites: [TLS_RSA_WITH_AES_256_CBC_SHA]
  require_client_auth: true

client_encryption_options:
  enabled: true
  keystore: ~/conf/server-keystore.jks
  keystore_password: 
  require_client_auth: true
  truststore: ~/conf/server-truststore.jks
  truststore_password: 
  protocol: TLS
  algorithm: SunX509
  store_type: JKS
  cipher_suites: [TLS_RSA_WITH_AES_256_CBC_SHA]
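
To rule out a node still presenting the old certificate or using a stale
truststore, a check along these lines can be run against each node (the host
name is a placeholder; 9042 and 7001 assume the default native and encrypted
storage ports):

    # Show the certificate a node presents on the client (native) port ...
    echo | openssl s_client -connect node1:9042 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates

    # ... and on the internode (storage) port; with require_client_auth: true
    # the handshake may be cut short, but the server certificate is still shown.
    echo | openssl s_client -connect node1:7001 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates

    # List what is actually inside the deployed truststore
    # (prompts for the truststore password).
    keytool -list -v -keystore ~/conf/server-truststore.jks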

Given this situation, I have the following questions:

   - Could there be a reason for the continuity of server-to-server
   communication despite the different truststores?
   - Is there a possibility that the old truststore remains cached even
   after reloading the certificates on a node?
   - Have others encountered similar issues, and if so, what were your
   solutions?

Any insights or suggestions would be greatly appreciated. Please let me
know if further information is needed.

Thank you

Best regards,

Avinash


Migrating from incremental repairs to full repairs apache cassandra 3.X

2019-01-29 Thread pabbireddy avinash
Hi

We would like to migrate from incremental repairs to regular full repairs
on a cluster running Apache Cassandra 3.11. There is a DataStax procedure
for this in the document below, but the nodetool option mentioned in that
document is not available in Apache Cassandra. Please let me know if there
is a workaround for this issue.
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairMigrateFull.html
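
A rough sketch of a possible Apache-side equivalent, assuming the offline
sstablerepairedset tool shipped in tools/bin and ordinary full repairs
afterwards (keyspace, table, data paths, and the service commands below are
all placeholders):

    # 1. On each node in turn, stop Cassandra cleanly.
    nodetool drain && sudo service cassandra stop    # stop command is a placeholder

    # 2. Mark the table's SSTables as unrepaired with the offline tool
    #    (path relative to the Cassandra install; run only while the node is down).
    tools/bin/sstablerepairedset --really-set --is-unrepaired \
        /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db

    # 3. Restart the node, then run full (non-incremental) repairs as usual.
    sudo service cassandra start
    nodetool repair --full -pr my_keyspace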


Regards
Avinash.


Re: system_auth permissions issue C* 2.0.14

2017-11-03 Thread pabbireddy avinash
Hi,
We are seeing this issue on some nodes: even when we provide the correct
credentials we get an incorrect username/password exception, and when we
retry with the same credentials we are able to log in.

[hostname ~ ]$ ./cqlsh -u  -p 

Traceback (most recent call last):
  File "/opt/xcal/apps/cassandra/bin/cqlsh", line 2094, in 
main(*read_options(sys.argv[1:], os.environ))
  File "/opt/xcal/apps/cassandra/bin/cqlsh", line 2077, in main
single_statement=options.execute)
  File "/opt/xcal/apps/cassandra/bin/cqlsh", line 492, in __init__
password=password, cql_version=cqlver, transport=transport)
  File
"/opt/xcal/apps/apache-cassandra-2.0.14/bin/../lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/connection.py",
line 143, in connect
  File
"/opt/xcal/apps/apache-cassandra-2.0.14/bin/../lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/connection.py",
line 59, in __init__
  File
"/opt/xcal/apps/apache-cassandra-2.0.14/bin/../lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/thrifteries.py",
line 157, in establish_connection
  File
"/opt/xcal/apps/apache-cassandra-2.0.14/bin/../lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cassandra/Cassandra.py",
line 465, in login
  File
"/opt/xcal/apps/apache-cassandra-2.0.14/bin/../lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cassandra/Cassandra.py",
line 486, in recv_login
cql.cassandra.ttypes.AuthenticationException:
AuthenticationException(why='Username and/or password are incorrect')
[hostname ~ ]$ ./cqlsh -u  -p 
Connected to Cluster at host:9160.
[cqlsh 4.1.1 | Cassandra 2.0.14 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> exit;
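
For completeness, a per-node check of the auth data can look like the
following (user name, host names, and the superuser password are
placeholders; the table names assume the stock PasswordAuthenticator /
CassandraAuthorizer schema in 2.0):

    # Connect to each node in turn and compare what it reports for the user.
    for host in node1 node2 node3; do     # placeholder host names
        ./cqlsh "$host" -u cassandra -p 'superuser-password' \
            --execute "SELECT username FROM system_auth.credentials WHERE username = 'app_user';"
        ./cqlsh "$host" -u cassandra -p 'superuser-password' \
            --execute "LIST ALL PERMISSIONS OF app_user;"
    done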

Regards,
Avinash.


On Fri, Nov 3, 2017 at 11:05 AM, pabbireddy avinash <
pabbireddyavin...@gmail.com> wrote:

> Hi,
> We are seeing system_auth-related exceptions from the application side on
> Cassandra 2.0.14.
>
>
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
> [jersey-common-2.14.jar:na]
> at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
> [jersey-common-2.14.jar:na]
> at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
> [jersey-common-2.14.jar:na]
> ... 33 lines omitted ...
> Caused by: com.datastax.driver.core.exceptions.UnauthorizedException: User
> has no MODIFY permission on  parents
> at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:101)
> ~[cassandra-driver-core-2.1.7.jar:na]
>
> When we check permissions on all the hosts we do not find any issues; all
> the nodes have MODIFY and SELECT permissions for the user. We repaired
> system_auth on all the nodes but we are still seeing this issue from time
> to time. We have RF= so that all nodes will have the
> system_auth data.
>
>
> Please help me understand this issue.
>
> Regards,
> Avinash.
>
>


system_auth permissions issue C* 2.0.14

2017-11-03 Thread pabbireddy avinash
Hi,
We are seeing system_auth-related exceptions from the application side on
Cassandra 2.0.14.


at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
[jersey-common-2.14.jar:na]
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
[jersey-common-2.14.jar:na]
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
[jersey-common-2.14.jar:na]
... 33 lines omitted ...
Caused by: com.datastax.driver.core.exceptions.UnauthorizedException: User
has no MODIFY permission on  parents
at com.datastax.driver.core.Responses$Error.asException(Responses.java:101)
~[cassandra-driver-core-2.1.7.jar:na]

When we check permissions on all the hosts we do not find any issues; all
the nodes have MODIFY and SELECT permissions for the user. We repaired
system_auth on all the nodes but we are still seeing this issue from time
to time. We have RF= so that all nodes will have the
system_auth data.


Please help me understand this issue.

Regards,
Avinash.


Decommissioned nodes show as DOWN in Cassandra version 3.10

2017-06-12 Thread pabbireddy avinash
Hi

In Cassandra version 3.10, after we decommission a node or datacenter, we
observe the decommissioned nodes marked as DOWN in the cluster when we run
"nodetool describecluster". The nodes, however, do not show up in the
"nodetool status" output. The decommissioned node also does not show up in
the "system.peers" table on the nodes.

The workaround we follow is a rolling restart of the cluster, which removes
the decommissioned nodes from the "UNREACHABLE" state and shows the actual
state of the cluster. This workaround is tedious for large clusters.
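
If a full rolling restart is too heavy, one lighter option that might be
worth testing is asking gossip to forget the already-decommissioned endpoint
directly (the IP below is a placeholder):

    # Untested here: run from any live node to purge the ghost endpoint from gossip.
    nodetool assassinate 10.0.0.99    # placeholder IP of the decommissioned node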


Has anybody in the community observed a similar issue?

Below are the observed logs:

2017-06-12 18:23:29,209 [RMI TCP Connection(8)-127.0.0.1] INFO
StorageService.java:3938 - Announcing that I have left the ring for 3ms
2017-06-12 18:23:59,210 [RMI TCP Connection(8)-127.0.0.1] INFO
ThriftServer.java:139 - Stop listening to thrift clients
2017-06-12 18:23:59,215 [RMI TCP Connection(8)-127.0.0.1] INFO
Server.java:176 - Stop listening for CQL clients
2017-06-12 18:23:59,216 [RMI TCP Connection(8)-127.0.0.1] WARN
Gossiper.java:1514 - No local state, state is in silent shutdown, or node
hasn't joined, not announcing shutdown
2017-06-12 18:23:59,216 [RMI TCP Connection(8)-127.0.0.1] INFO
MessagingService.java:964 - Waiting for messaging service to quiesce
2017-06-12 18:23:59,217 [ACCEPT-/96.115.209.228] INFO
MessagingService.java:1314 - MessagingService has terminated the accept()
thread
2017-06-12 18:23:59,263 [RMI TCP Connection(8)-127.0.0.1] INFO
StorageService.java:1435 - DECOMMISSIONED



Regards,
Avinash.


Stable version apache cassandra 3.X /3.0.X

2017-05-31 Thread pabbireddy avinash
Hi,

We are planning to deploy a Cassandra production cluster on 3.X/3.0.X.
Please let us know if there is a stable version in 3.X/3.0.X that we could
deploy in production.

Regards,
Avinash.