Cassandra 3.11.1 java.lang.UnsatisfiedLinkError Exception when using Leveled Compaction

2017-10-27 Thread Bruce Tietjen
We are running 3.11.1 (recently upgraded from 3.11.0) and just started
experimenting with LeveledCompactionStrategy. After loading data for 24
hours, we started getting the following error:

ERROR [CompactionExecutor:628] 2017-10-27 11:58:21,748
CassandraDaemon.java:228 - Exception in thread
Thread[CompactionExecutor:628,1,main] java.lang.UnsatisfiedLinkError: no
nio in java.library.path

It seems strange that we would encounter this kind of error, and we are
wondering whether we might be missing some Java classes not included in
the install, or what else we might have done wrong.

It looks like writes to the cluster may have slowed to about 1/4 of
their previous rate around the time we started seeing this error.


We would be grateful for any pointers as to what to look at.

-- Thanks --

PS Following is more detail that might be helpful

We are running OpenJDK 8 on CentOS 7.

Following is the full stack trace:
ERROR [CompactionExecutor:628] 2017-10-27 11:58:21,748 CassandraDaemon.java:228 - Exception in thread Thread[CompactionExecutor:628,1,main]
java.lang.UnsatisfiedLinkError: no nio in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) ~[na:1.8.0_144]
    at java.lang.Runtime.loadLibrary0(Runtime.java:870) ~[na:1.8.0_144]
    at java.lang.System.loadLibrary(System.java:1122) ~[na:1.8.0_144]
    at sun.nio.fs.UnixCopyFile$2.run(UnixCopyFile.java:612) ~[na:1.8.0_144]
    at sun.nio.fs.UnixCopyFile$2.run(UnixCopyFile.java:609) ~[na:1.8.0_144]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_144]
    at sun.nio.fs.UnixCopyFile.<clinit>(UnixCopyFile.java:609) ~[na:1.8.0_144]
    at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262) ~[na:1.8.0_144]
    at java.nio.file.Files.move(Files.java:1395) ~[na:1.8.0_144]
    at org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:207) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:189) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:177) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.rewriteSSTableMetadata(MetadataSerializer.java:160) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.sstable.metadata.MetadataSerializer.mutateLevel(MetadataSerializer.java:136) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.compaction.LeveledManifest.add(LeveledManifest.java:165) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.compaction.LeveledManifest.replace(LeveledManifest.java:201) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.replaceSSTables(LeveledCompactionStrategy.java:327) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.compaction.CompactionStrategyManager.handleListChangedNotification(CompactionStrategyManager.java:494) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.compaction.CompactionStrategyManager.handleNotification(CompactionStrategyManager.java:555) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.lifecycle.Tracker.notifySSTablesChanged(Tracker.java:410) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.lifecycle.LifecycleTransaction.doCommit(LifecycleTransaction.java:227) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit(Transactional.java:116) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.sstable.SSTableRewriter.doCommit(SSTableRewriter.java:206) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit(Transactional.java:116) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.doCommit(CompactionAwareWriter.java:105) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit(Transactional.java:116) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit(Transactional.java:200) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:185) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.finish(CompactionAwareWriter.java:121) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:220) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at ...

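For orientation, the error surfaces inside FileUtils.atomicMoveWithFallback, reached via MetadataSerializer.mutateLevel when LCS rewrites an sstable's Statistics component to change its level; the class initializer behind Files.move is what loads the native nio library that fails here. The following is an illustrative sketch of the move-with-fallback pattern the method name suggests, not Cassandra's actual code:

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveSketch {
    // Try an atomic rename first; fall back to a plain replace-existing
    // move if the filesystem does not support atomic moves.
    static void atomicMoveWithFallback(Path from, Path to) throws IOException {
        try {
            // The first such move in a JVM initializes sun.nio.fs.UnixCopyFile,
            // whose static initializer loads the native "nio" library -- the
            // load that failed in the trace above.
            Files.move(from, to, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(from, to, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical file names, only for demonstration.
        Path from = Files.createTempFile("mc-1-big-", "-Statistics.db");
        Path to = from.resolveSibling(from.getFileName() + ".tmp");
        atomicMoveWithFallback(from, to);
        System.out.println(Files.exists(to) && !Files.exists(from));
    }
}
```

In the trace the move is renaming the sstable's Statistics metadata after mutating its compaction level, so every LCS level change exercises this path.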
Re: Getting DigestMismatchExceptions despite setting read repair chances to zero

2017-10-27 Thread Jeff Jirsa


> On Oct 27, 2017, at 3:08 AM, Artur Siekielski  wrote:
> 
> I noticed that the DecoratedKey printed in the stack trace can be for a 
> different table. The arguments are a token and a partition key and they can 
> be the same for multiple tables. Is there a way to know for which table the 
> DigestMismatchException happens?
> 

No, the read repair stats we provide are not per table, so if it’s not in the 
log, it’s not apparent. Feel free to open a jira to ask for it to be added to 
the log message.

> Can the AsyncRepairRunner be triggered if read and writes for all other 
> tables are done with CL=LOCAL_QUORUM (RF=3)? I assumed in that case async 
> read repair is not done even if dclocal_read_repair_chance > 0. Could it be 
> that the async repair runs for that case and it's executed faster than the 
> background syncing to meet RF=3?
> 

Very likely. Async repair runner can be triggered if either 
(dclocal_)read_repair_chance is > 0. If you write at local_quorum, reads can 
definitely race (and even in that case, some writes can be dropped by load 
shedding or missed during network hiccups or GC pauses). 



> 
>> On 10/26/2017 12:19 PM, Artur Siekielski wrote:
>> Hi,
>> 
>> we have one table for which reads and writes are done with CL=ONE. The table 
>> contains counters. We wanted to disable async read repair for the table (to 
>> lessen cluster load and to avoid DigestMismatchExceptions in debug.log). 
>> After altering the table with read_repair_chance=0, 
>> dclocal_read_repair_chance=0, the Digest exceptions still happen:
>> 
>> 
>> DEBUG [ReadRepairStage:92] 2017-10-26 10:00:02,798 ReadCallback.java:242 - Digest mismatch:
>> org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(5238932067721150894, 7da6f64695d74899a91bd691321de534) (33f950054869a91d1ea225eae342499a vs 70d054183b9b001de5f71139aa65b8d9)
>>     at org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92) ~[apache-cassandra-3.11.1.jar:3.11.1]
>>     at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233) ~[apache-cassandra-3.11.1.jar:3.11.1]
>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_141]
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_141]
>>     at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.1.jar:3.11.1]
>>     at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_141]
>> 
>> 
>> I have verified that the DecoratedKey arguments are for a row from the 
>> altered table (by checking token() of the partition key).
>> 
>> Shouldn't the settings disable AsyncRepairRunner for the table?
>> 
>> Is there an option to disable async read repair globally, or each table must 
>> be altered?
>> 
>> Cassandra version: 3.11.1
>> 
>> 
>> Thanks,
>> Artur
>> 
>> 
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>> 
> 
> 




Re: Materialized Views marked experimental

2017-10-27 Thread Jeff Jirsa


> On Oct 27, 2017, at 2:23 AM, Gábor Auth  wrote:
> 
> Hi,
> 
>> On Thu, Oct 26, 2017 at 11:10 PM Blake Eggleston  
>> wrote:
>> Following a discussion on dev@, the materialized view feature is being 
>> retroactively classified as experimental, and not recommended for new 
>> production uses. The next patch releases of 3.0, 3.11, and 4.0 will include 
>> CASSANDRA-13959, which will log warnings when materialized views are 
>> created, and introduce a yaml setting that will allow operators to disable 
>> their creation.
> 
> Will the experimental classification later be withdrawn (if the issues
> prove fixable)?
> Will the whole MV feature later be withdrawn (if they prove unfixable)? :)
> 

The experimental warning will be withdrawn when there’s confidence among 
committers that the feature always does the right thing with your data. 

The algorithm SEEMS safe (with the caveats Blake mentioned), and people DO use 
it in production; we just want you to be aware that we're not as confident in 
its safety as we are in the more mature database features.
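For anyone looking for the knob itself: per CASSANDRA-13959 the yaml option is expected to be `enable_materialized_views` (name taken from the ticket; the exact release it lands in and the option name should be verified against the cassandra.yaml shipped with your version):

```yaml
# cassandra.yaml -- block creation of NEW materialized views
# (existing views continue to be maintained; default is true)
enable_materialized_views: false
```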

Re: Why don't I see my spark jobs running in parallel in Cassandra/Spark DSE cluster?

2017-10-27 Thread Jon Haddad
Seems like a question better suited for the Spark mailing list or DSE support, 
not OSS Cassandra.

> On Oct 27, 2017, at 8:14 AM, Thakrar, Jayesh  
> wrote:
> 
> What you have is sequential code, and hence sequential processing.
> Also, Spark/Scala are not parallel programming languages. But even if
> they were, statements are executed sequentially unless you exploit the
> parallel/concurrent execution features.
>  
> Anyway, see if this works:
>  
> val (RDD1, RDD2) = (JavaFunctions.cassandraTable(...), 
> JavaFunctions.cassandraTable(...))
>  
> val (RDD3, RDD4) = (RDD1.flatMap(..), RDD2.flatMap(..))
>  
>  
> I am hoping that, since Spark is based on Scala, the behavior below will apply:
> scala> var x = 0
> x: Int = 0
>  
> scala> val (a,b) = (x + 1, x+1)
> a: Int = 1
> b: Int = 1
>  
>  
>  
> From: Cassa L 
> Date: Friday, October 27, 2017 at 1:50 AM
> To: Jörn Franke 
> Cc: user , 
> Subject: Re: Why don't I see my spark jobs running in parallel in 
> Cassandra/Spark DSE cluster?
>  
> No, I dont use Yarn.  This is standalone spark that comes with DataStax 
> Enterprise version of Cassandra.
>  
> On Thu, Oct 26, 2017 at 11:22 PM, Jörn Franke wrote:
> Do you use yarn ? Then you need to configure the queues with the right 
> scheduler and method.
> 
> On 27. Oct 2017, at 08:05, Cassa L wrote:
> 
> Hi,
> I have a spark job that has use case as below: 
> RDD1 and RDD2 read from Cassandra tables. These two RDDs then do some 
> transformation and after that I do a count on transformed data.
>  
> Code somewhat  looks like this:
>  
> RDD1=JavaFunctions.cassandraTable(...)
> RDD2=JavaFunctions.cassandraTable(...)
> RDD3 = RDD1.flatMap(..)
> RDD4 = RDD2.flatMap()
>  
> RDD3.count
> RDD4.count
>  
> In Spark UI I see count() functions are getting called one after another. How 
> do I make it parallel? I also looked at below discussion from Cloudera, but 
> it does not show how to run driver functions in parallel. Do I just add 
> Executor and run them in threads?
>  
> https://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/Getting-Spark-stages-to-run-in-parallel-inside-an-application/td-p/38515
>  
> 
>  
> Attaching UI snapshot here?
>  
>  
> Thanks.
> LCassa
>  



Cassandra Compaction Metrics - CompletedTasks vs TotalCompactionCompleted

2017-10-27 Thread Lucas Benevides
Dear community,

I am studying the behaviour of the Cassandra TimeWindowCompactionStrategy.
To do so I am watching some metrics, two of which are important:
Compaction.CompletedTasks, a Gauge, and TotalCompactionsCompleted, a
Meter.

According to the documentation (
http://cassandra.apache.org/doc/latest/operating/metrics.html#table-metrics
):
CompletedTasks = Number of completed compactions since server [re]start.
TotalCompactionsCompleted = Throughput of completed compactions since
server [re]start.

As I understand it, the TotalCompactionsCompleted Meter has a counter
that I expected to be numerically close to the CompletedTasks gauge. But
they are very different, with CompletedTasks being much higher than
TotalCompactionsCompleted.

According to the code on GitHub (class metrics.CompactionMetrics.java):
CompletedTasks - Number of completed compactions since server [re]start
TotalCompactionsCompleted - Total number of compactions since server
[re]start

Can you help me and explain the difference between these two metrics?
They have very distinct values, with CompletedTasks being around 1000
times the value of the counter in TotalCompactionsCompleted.

Thanks in Advance,
Lucas Benevides
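Both values are published over JMX under org.apache.cassandra.metrics, which is how monitoring tools poll them. Below is a self-contained sketch of that polling pattern using a dummy gauge in place of a live node; the Cassandra ObjectNames in the comment are my assumption of the real names and should be verified against your version:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MetricPollSketch {
    // Minimal standard-MBean gauge standing in for a Cassandra metric.
    public interface DummyGaugeMBean { long getValue(); }
    public static class DummyGauge implements DummyGaugeMBean {
        public long getValue() { return 42L; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Against a live node you would query the real names, e.g. (assumed):
        //   org.apache.cassandra.metrics:type=Compaction,name=CompletedTasks
        //   org.apache.cassandra.metrics:type=Compaction,name=TotalCompactionsCompleted
        ObjectName name =
            new ObjectName("example.metrics:type=Compaction,name=CompletedTasks");
        server.registerMBean(new DummyGauge(), name);
        // Read the gauge's "Value" attribute, as jconsole/jmxterm would.
        long value = (Long) server.getAttribute(name, "Value");
        System.out.println(value);
    }
}
```

Against a running node you would connect a JMXConnector to the node's JMX port (7199 by default) instead of using the in-process platform MBean server.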


Re: Why don't I see my spark jobs running in parallel in Cassandra/Spark DSE cluster?

2017-10-27 Thread Thakrar, Jayesh
What you have is sequential code, and hence sequential processing.
Also, Spark/Scala are not parallel programming languages. But even if
they were, statements are executed sequentially unless you exploit the
parallel/concurrent execution features.

Anyway, see if this works:

val (RDD1, RDD2) = (JavaFunctions.cassandraTable(...), 
JavaFunctions.cassandraTable(...))

val (RDD3, RDD4) = (RDD1.flatMap(..), RDD2.flatMap(..))


I am hoping that, since Spark is based on Scala, the behavior below will apply:
scala> var x = 0
x: Int = 0

scala> val (a,b) = (x + 1, x+1)
a: Int = 1
b: Int = 1



From: Cassa L 
Date: Friday, October 27, 2017 at 1:50 AM
To: Jörn Franke 
Cc: user , 
Subject: Re: Why don't I see my spark jobs running in parallel in 
Cassandra/Spark DSE cluster?

No, I dont use Yarn.  This is standalone spark that comes with DataStax 
Enterprise version of Cassandra.

On Thu, Oct 26, 2017 at 11:22 PM, Jörn Franke wrote:
Do you use yarn ? Then you need to configure the queues with the right 
scheduler and method.

On 27. Oct 2017, at 08:05, Cassa L wrote:
Hi,
I have a spark job that has use case as below:
RDD1 and RDD2 read from Cassandra tables. These two RDDs then do some 
transformation and after that I do a count on transformed data.

Code somewhat  looks like this:

RDD1=JavaFunctions.cassandraTable(...)
RDD2=JavaFunctions.cassandraTable(...)
RDD3 = RDD1.flatMap(..)
RDD4 = RDD2.flatMap()

RDD3.count
RDD4.count

In Spark UI I see count() functions are getting called one after another. How 
do I make it parallel? I also looked at below discussion from Cloudera, but it 
does not show how to run driver functions in parallel. Do I just add Executor 
and run them in threads?

https://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/Getting-Spark-stages-to-run-in-parallel-inside-an-application/td-p/38515

Attaching UI snapshot here?


Thanks.
LCassa
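One way to make the two counts in the thread above run concurrently is to submit each action from its own driver thread; a single thread always runs actions one after another. The sketch below uses plain Java futures with placeholder computations standing in for RDD3.count and RDD4.count (whether the resulting Spark jobs actually overlap also depends on the scheduler and on free executor cores):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelActionsSketch {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Stand-ins for rdd3.count() and rdd4.count(); each blocking Spark
        // action submitted from its own thread becomes a separate job.
        CompletableFuture<Long> count3 = CompletableFuture.supplyAsync(() -> 100L, pool);
        CompletableFuture<Long> count4 = CompletableFuture.supplyAsync(() -> 200L, pool);
        // Block until both finish, then combine the results.
        long total = count3.join() + count4.join();
        System.out.println(total);
        pool.shutdown();
    }
}
```

With Spark you would pass `() -> rdd3.count()` and `() -> rdd4.count()` to supplyAsync in the same way.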



Re: Hinted handoff throttled even after "nodetool sethintedhandoffthrottlekb 0"

2017-10-27 Thread Andrew Bialecki
Bit more information. Using jmxterm and inspecting the state of a node when
it's "slow" playing hints, I can see the following from the node that has
hints to play:

$>get MaxHintsInProgress
#mbean = org.apache.cassandra.db:type=StorageProxy:
MaxHintsInProgress = 2048;

$>get HintsInProgress
#mbean = org.apache.cassandra.db:type=StorageProxy:
HintsInProgress = 0;

$>get TotalHints
#mbean = org.apache.cassandra.db:type=StorageProxy:
TotalHints = 129687;

Is there some throttling that would cause hints not to be played at all
if, for instance, the cluster is under enough load, or something related
to a timeout setting?

On Fri, Oct 27, 2017 at 1:49 AM, Andrew Bialecki <andrew.biale...@klaviyo.com> wrote:

> We have a 96 node cluster running 3.11 with 256 vnodes each. We're running
> a rolling restart. As we restart nodes, we notice that each node takes a
> while to have all other nodes be marked as up and this corresponds to nodes
> that haven't finished playing hints.
>
> We looked at the hinted handoff throttling, noticed it was still the
> default of 1024, so we tried to turn it off by setting it to zero. Reading
> the source, it looks like the rate limiting won't take effect until the
> current set of hints has finished. So we made that change cluster-wide and
> then restarted the next node. However, we still saw the same issue.
>
> Looking at iftop, network throughput is very low (~10 kB/s), so the few
> hundred thousand hints that accumulate while the node is restarting end
> up taking several minutes to send.
>
> Any other knobs we should be tuning to increase hinted handoff throughput?
> Or other reasons why hinted handoff runs so slowly?
>
> --
> Andrew Bialecki
>



-- 
Andrew Bialecki




Re: Getting DigestMismatchExceptions despite setting read repair chances to zero

2017-10-27 Thread Artur Siekielski
I noticed that the DecoratedKey printed in the stack trace can be for a 
different table. The arguments are a token and a partition key and they 
can be the same for multiple tables. Is there a way to know for which 
table the DigestMismatchException happens?


Can the AsyncRepairRunner be triggered if read and writes for all other 
tables are done with CL=LOCAL_QUORUM (RF=3)? I assumed in that case 
async read repair is not done even if dclocal_read_repair_chance > 0. 
Could it be that the async repair runs for that case and it's executed 
faster than the background syncing to meet RF=3?



On 10/26/2017 12:19 PM, Artur Siekielski wrote:

Hi,

we have one table for which reads and writes are done with CL=ONE. The 
table contains counters. We wanted to disable async read repair for 
the table (to lessen cluster load and to avoid 
DigestMismatchExceptions in debug.log). After altering the table with 
read_repair_chance=0, dclocal_read_repair_chance=0, the Digest 
exceptions still happen:



DEBUG [ReadRepairStage:92] 2017-10-26 10:00:02,798 ReadCallback.java:242 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(5238932067721150894, 7da6f64695d74899a91bd691321de534) (33f950054869a91d1ea225eae342499a vs 70d054183b9b001de5f71139aa65b8d9)
    at org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_141]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_141]
    at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.1.jar:3.11.1]
    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_141]


I have verified that the DecoratedKey arguments are for a row from the 
altered table (by checking token() of the partition key).


Shouldn't the settings disable AsyncRepairRunner for the table?

Is there an option to disable async read repair globally, or each 
table must be altered?


Cassandra version: 3.11.1


Thanks,
Artur









Re: Materialized Views marked experimental

2017-10-27 Thread Gábor Auth
Hi,

On Thu, Oct 26, 2017 at 11:10 PM Blake Eggleston 
wrote:

> Following a discussion on dev@, the materialized view feature is being
> retroactively classified as experimental, and not recommended for new
> production uses. The next patch releases of 3.0, 3.11, and 4.0 will include
> CASSANDRA-13959, which will log warnings when materialized views are
> created, and introduce a yaml setting that will allow operators to disable
> their creation.
>

Will the experimental classification later be withdrawn (if the issues
prove fixable)?
Will the whole MV feature later be withdrawn (if they prove unfixable)?
:)

Bye,
Gábor Auth


Re: Why don't I see my spark jobs running in parallel in Cassandra/Spark DSE cluster?

2017-10-27 Thread Cassa L
No, I dont use Yarn.  This is standalone spark that comes with DataStax
Enterprise version of Cassandra.

On Thu, Oct 26, 2017 at 11:22 PM, Jörn Franke  wrote:

> Do you use yarn ? Then you need to configure the queues with the right
> scheduler and method.
>
> On 27. Oct 2017, at 08:05, Cassa L  wrote:
>
> Hi,
> I have a spark job that has use case as below:
> RDD1 and RDD2 read from Cassandra tables. These two RDDs then do some
> transformation and after that I do a count on transformed data.
>
> Code somewhat  looks like this:
>
> RDD1=JavaFunctions.cassandraTable(...)
> RDD2=JavaFunctions.cassandraTable(...)
> RDD3 = RDD1.flatMap(..)
> RDD4 = RDD2.flatMap()
>
> RDD3.count
> RDD4.count
>
> In Spark UI I see count() functions are getting called one after another.
> How do I make it parallel? I also looked at below discussion from Cloudera,
> but it does not show how to run driver functions in parallel. Do I just add
> Executor and run them in threads?
>
> https://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/Getting-Spark-stages-to-run-in-parallel-inside-an-application/td-p/38515
>
> Attaching UI snapshot here?
>
>
> Thanks.
> LCassa
>
>


Re: server connection in authenticator

2017-10-27 Thread Horia Mocioi
Hello Justin and thank you for your answer.

Yes, I am aware of that mechanism.

What we need to accomplish is to add some extra validations to the
certificate in a new Authenticator and in order to get the certificates
for the current connection we need the ServerConnection object or the
sslHandler.

Regards,
Horia
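For context on what the validation would look like once the sslHandler (and hence the SSLSession) is in hand: the peer's chain comes from SSLSession.getPeerCertificates(). The following is a minimal JSSE sketch, not Cassandra code; the engine here never performs a handshake, so the lookup fails by design and the catch branch runs:

```java
import java.security.cert.Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSession;

public class PeerCertSketch {
    public static void main(String[] args) throws Exception {
        // In a real authenticator you would obtain the SSLEngine/SSLSession
        // from the connection's sslHandler after the handshake completes.
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        SSLSession session = engine.getSession();
        try {
            Certificate[] chain = session.getPeerCertificates();
            System.out.println("peer presented " + chain.length + " certificate(s)");
        } catch (SSLPeerUnverifiedException e) {
            // Expected here: no handshake has happened on this engine yet.
            System.out.println("peer not authenticated");
        }
    }
}
```

A real authenticator would inspect chain[0] (the peer's own certificate) after a completed mutual-TLS handshake and apply the extra validations there.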

On tor, 2017-10-26 at 22:33 +, Justin Cameron wrote:
> Hi Horia,
> 
> Are you aware that Cassandra already supports two-way SSL certificate
> authentication? Take a look at the require_client_auth option under
> client_encryption_options in cassandra.yaml: http://cassandra.apache.org/doc/latest/configuration/cassandra_config_file.html#client-encryption-options
> 
> The caveat is that Cassandra role authorisation is not possible via
> this mechanism. If you need it, then I suspect you're correct that
> some code will need to change.
> 
> Cheers,
> Justin
> 
> On Thu, 26 Oct 2017 at 17:50 Horia Mocioi 
> wrote:
> > Thank you Jeff & Harika.
> > 
> > Yes, I am aware of that mechanism. What we need to do is to add
> > some
> > extra validations on the certificate used for securing the
> > connection. 
> > 
> > So, in order to do this in our Authenticator, we need a way to grab
> > the
> > sslHandler which can be obtained from the ServerConnection. The
> > certificates can be obtained then from the sslHandler.
> > 
> > My question was if there was any other way to grab the
> > ServerConnection
> > in an Authenticator besides passing it as a parameter when building
> > the
> > negotiator, thus changing IAuthenticator and ServerConnection.
> > 
> > Thank you again,
> > Horia
> > 
> > On ons, 2017-10-25 at 17:13 +, Harika Vangapelli -T (hvangape -
> > AKRAYA INC at Cisco) wrote:
> > > Horia,
> > >
> > > By just changing the authenticator and authorizer in cassandra.yaml
> > > and adding your custom libraries in /usr/share/cassandra/, you can
> > > plug in custom authentication:
> > >
> > > sed -ri \
> > >    -e 's/^(authenticator:).*/\1 com.cassandra.LdapCassandraAuthenticator/' \
> > >    -e 's/^(authorizer:).*/\1 com.cassandra.LdapCassandraAuthorizer/' \
> > >    cassandra.yaml
> > >
> > > Copy the custom jars to /usr/share/cassandra/
> > >  
> > >
> > >
> > > Harika Vangapelli
> > > Engineer - IT
> > > hvang...@cisco.com
> > > Cisco Systems, Inc.
> > >
> > >
> > > -Original Message-
> > > From: Horia Mocioi [mailto:horia.moc...@ericsson.com] 
> > > Sent: Wednesday, October 25, 2017 3:38 AM
> > > To: user@cassandra.apache.org
> > > Subject: server connection in authenticator
> > >
> > > Hello guys,
> > >
> > > We are building up an authenticator using certificates. So far we
> > > came up with a solution, but implies changing some files in
> > Cassandra
> > > code base in order to have the connection in the new
> > Authenticator.
> > >
> > > So, here are my questions:
> > > * how are you guys doing this?
> > > * is it possible to obtain the connection on the Authenticator
> > > without changing other files in the Cassandra code base, in that
> > > sense just creating a new Authenticator and set it up in
> > > cassandra.yaml?
> > >
> > > Regards,
> > > Horia
> -- 
> Justin Cameron
> Senior Software Engineer
> 
> 
> 
> This email has been sent on behalf of Instaclustr Pty. Limited
> (Australia) and Instaclustr Inc (USA).
> 