Re: Cassandra 2FA

2018-07-10 Thread Stefan Podkowinski
You may want to keep an eye on the following ticket:
https://issues.apache.org/jira/browse/CASSANDRA-13404


On 09.07.2018 17:12, Vitali Dyachuk wrote:
> Hi,
> There is certificate validation based on a mutual CA; this is the 1st
> factor. The 2nd factor could be checking the common name of the client
> certificate. This probably requires writing a patch, but perhaps someone
> has already done that?
> 
> Vitali Djatsuk.




Re: How to configure Cassandra to NOT use SSLv2?

2018-04-24 Thread Stefan Podkowinski
The hard-coded protocol selection has been removed in one of the 3.x
releases. You may want to consider updating to the latest 3.11 release.


On 24.04.18 19:21, Lou DeGenaro wrote:
> Here's what I was told by IBM JVM Support:
>
> ...the string "SSLv2Hello" is not supported in IBM JVM but more
> importantly, the protocol SSLv2 is no longer a valid protocol in
> our JVM. We don't even have SSLv3 enabled by default due to the
> HIGH severity vulnerabilities this protocol has.
>
> Is there anything I can do to use IBM JVM and Cassandra with encryption?
>
> Thanks.
>
> Lou.
>
> On Tue, Apr 24, 2018 at 12:41 PM, Michael Shuler wrote:
>
> Correct!
>
> Thanks for the trace, Lou.
>
> SSLFactory.java:67 specifies a list of protocols, including
> SSLv2Hello.
>
> "It [IBM JSSE] does not support specifying SSLv2Hello."
> https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.security.component.80.doc/security-component/jsse2Docs/knowndiffsun.html
>
> Apache Cassandra is tested on Oracle JDK and OpenJDK. Use a supported
> version of either of those, and this problem should go away.
> Alternatively, do a custom build of Cassandra if you must run a
> little-used JDK.
>
> Also, just for a little additional info, SSLv2Hello != SSLv2, so I do
> not believe that there is a worry about some weak protocol here.
> https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4915862
>
> -- 
> Kind regards,
> Michael
>
> On 04/24/2018 11:23 AM, Marcus Haarmann wrote:
> > OK, this is IBM JDK. The options might differ. I have been searching
> > for Oracle Java options.
> > You will need to consult the IBM documentation in this case.
> >
> > Marcus Haarmann
> >
> >
>
> > *From:* "Lou DeGenaro"
> > *To:* "user"
> > *Sent:* Tuesday, 24 April 2018 16:08:06
> > *Subject:* Re: How to configure Cassandra to NOT use SSLv2?
> >
> > Thanks for your suggestions.  I tried using the -D shown below:
> >
> >     degenaro@bluej421:/users/degenaro/cassandra/bluej421> ./bin/cassandra
> >     degenaro@bluej421:/users/degenaro/cassandra/bluej421> numactl
> >     --interleave=all /share/ibm-jdk1.8/bin/java
> >     -Dhttps.protocols=TLSv1.2,TLSv1.1,SSLv2Hello
> >     -Xloggc:./bin/../logs/gc.log -XX:+UseParNewGC
> >     -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled
> >     -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1
> >     -XX:CMSInitiatingOccupancyFraction=75
> >     -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSWaitDuration=1...
> >     ...
> >     WARN  14:01:09 Filtering out [TLS_RSA_WITH_AES_128_CBC_SHA,
> >     TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA,
> >     TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
> >     TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] as it isn't supported by the socket
> >     Exception (java.lang.IllegalArgumentException) encountered during
> >     startup: SSLv2Hello is not a recognized protocol.
> >     java.lang.IllegalArgumentException: SSLv2Hello is not a recognized
> >     protocol.
> >     at com.ibm.jsse2.S.a(S.java:112)
> >     at com.ibm.jsse2.S.b(S.java:136)
> >     at com.ibm.jsse2.S.<init>(S.java:177)
> >     at com.ibm.jsse2.as.setEnabledProtocols(as.java:2)
> >     at org.apache.cassandra.security.SSLFactory.getServerSocket(SSLFactory.java:67)
> >     at org.apache.cassandra.net.MessagingService.getServerSockets(MessagingService.java:514)
> >     at org.apache.cassandra.net.MessagingService.listen(MessagingService.java:498)
> >     at org.apache.cassandra.net.MessagingService.listen(MessagingService.java:482)
> >     at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:765)
> >     at org.apache.cassandra.service.StorageService.initServer(StorageService.java:654)
> >     at org.apache.cassandra.service.StorageService.initServer(StorageService.java:534)
> >     ...

Re: LWT broken?

2018-02-09 Thread Stefan Podkowinski
I'd not recommend using any consistency level but SERIAL for reading
tables updated by LWT operations; otherwise you might end up reading
inconsistent results.
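
As a sketch, using the hash_id table from the message below (the
placeholder value is illustrative), the reads at steps 1 and 9 would
become serial reads, e.g. in cqlsh:

    CONSISTENCY SERIAL;
    -- a SERIAL read performs a Paxos round that completes any in-flight
    -- LWT write before returning, so it cannot return a stale or
    -- uncommitted id
    SELECT id FROM hash_id WHERE hash = <computed_hash>;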


On 09.02.18 08:06, Mahdi Ben Hamida wrote:
>
> Hello,
>
> I'm running a 2.0.17 cluster (I know, I know, need to upgrade) with 46
> nodes across 3 racks (& RF=3). I'm seeing that under high contention,
> LWT may actually not guarantee uniqueness. With a total of 16 million
> LWT transactions (with peak LWT concurrency around 5k/sec), I found 38
> conflicts that should have been impossible. I was wondering if there
> were any known issues that make LWT broken for this old version of
> cassandra.
>
> I use LWT to guarantee that a 128 bit number (hash) maps to a unique
> 64 bit number (id). There could be a large number of threads trying to
> allocate an id for a given hash.
>
> I do the following logic (slightly more complicated than this due to
> timeout handling):
>
>  1  existing_id = SELECT id FROM hash_id WHERE hash=computed_hash
>       | consistency = ONE
>  2  if existing_id != null:
>  3    return existing_id
>  4  new_id = generateUniqueId()
>  5  result = INSERT INTO hash_id (id) VALUES (new_id) WHERE
>       hash=computed_hash IF NOT EXISTS
>       | consistency = QUORUM, serialConsistency = SERIAL
>  6  if result == [applied]  // i.e. we won the LWT
>  7    return new_id
>  8  else  // we lost the LWT, fetch the winning value
>  9    existing_id = SELECT id FROM hash_id WHERE hash=computed_hash
>       | consistency = ONE
> 10    return existing_id
>
> Is there anything flawed about this?
> I do the reads at steps 1 and 9 at a consistency of ONE. Would that
> cause uncommitted changes to be seen (i.e., dirty reads)? Should it be
> SERIAL consistency instead? My understanding is that only one
> transaction will be able to apply the write (at quorum), so doing a
> read at consistency of ONE will either result in a null, or I would
> get the id that won the LWT race.
>
> Any help is appreciated. I've been banging my head on this issue
> (thinking it was a bug in the code) for some time now.
>
> -- 
> Mahdi.



Re: GDPR, Right to Be Forgotten, and Cassandra

2018-02-09 Thread Stefan Podkowinski
Deleting data "without undue delay" in Cassandra can be implemented by
using crypto shredding and pseudonymization strategies in your data
model. All you have to do is to make sure that throwing away a person's
data encryption key will make it impossible to restore personal data and
impossible to resolve any pseudonyms associated with that person.
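
As a minimal sketch of that idea (the schema is made up for
illustration; in practice the key would often live in an external key
management system rather than in Cassandra itself, so that the key store
is the only place that needs real purging):

    -- one data encryption key per person; deleting this row "shreds"
    -- everything that was encrypted with it
    CREATE TABLE user_keys (
        user_id uuid PRIMARY KEY,
        enc_key blob    -- used client-side to encrypt/decrypt
    );

    -- personal data is stored only in encrypted form, under a pseudonym
    -- that can be linked to the person only while the key exists
    CREATE TABLE user_data (
        pseudonym uuid,
        field text,
        enc_value blob,  -- encrypted client-side with enc_key
        PRIMARY KEY (pseudonym, field)
    );

    -- a "right to be forgotten" request then becomes a single delete:
    DELETE FROM user_keys WHERE user_id = <person's id>;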


On 09.02.18 17:10, Nicolas Guyomar wrote:
> Hi everyone,
>
> Because of GDPR we really face the need to support “Right to Be
> Forgotten” requests => https://gdpr-info.eu/art-17-gdpr/ stating that
> "the controller shall have the obligation to erase personal data
> *without undue delay*"
>
> Because I usually meet customers that do not have that many clients,
> modeling one partition per client is almost always possible, easing
> deletion by partition key.
>
> Then, apart from triggering a manual compaction on impacted tables
> using STCS, I do not see how I can be GDPR compliant.
>
> I'm kind of surprised not to find any thread on that matter on the ML;
> do you guys have any modeling strategy that would make it easier to
> get rid of data?
>
> Thank you for any advice
>
> Nicolas



Re: Upgrade to 3.11.1 give SSLv2Hello is disabled error

2018-01-17 Thread Stefan Podkowinski
I think what this error indicates is that a client is trying to connect
using an SSLv2Hello handshake, while this protocol has been disabled on
the server side. Starting with the mentioned ticket, we use the JVM's
default list of enabled protocols. What makes this issue a bit
confusing is that starting with Java 1.7, SSLv2Hello should be disabled
by default on the client side, but not on the server side. Cassandra
should be able to accept SSLv2Hello connections from 3.0 nodes just
fine. What JRE do you use? Any custom SSL-specific settings that might
be effective here?
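
One way to see which protocols the server side actually accepts
(the hostname is illustrative; 7001 is the ssl_storage_port from your
log) is to probe it with openssl:

    # attempt a TLSv1.2-only handshake against the storage port
    openssl s_client -connect <node>:7001 -tls1_2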

On 16.01.2018 15:13, Tommy Stendahl wrote:
> Hi,
> 
> I have problems upgrading a cluster from 3.0.14 to 3.11.1 but when I
> upgrade the first node it fails to gossip.
> 
> I have server encryption enabled on all nodes with this setting:
> 
> server_encryption_options:
>     internode_encryption: all
>     keystore: /usr/share/cassandra/.ssl/server/keystore.jks
>     keystore_password: 'x'
>     truststore: /usr/share/cassandra/.ssl/server/truststore.jks
>     truststore_password: 'x'
>     protocol: TLSv1.2
>     cipher_suites:
> [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA]
> 
> 
> I get this error in the log:
> 
> 2018-01-16T14:41:19.671+0100 ERROR [ACCEPT-/10.61.204.16]
> MessagingService.java:1329 SSL handshake error for inbound connection
> from 30f93bf4[SSL_NULL_WITH_NULL_NULL:
> Socket[addr=/x.x.x.x,port=40583,localport=7001]]
> javax.net.ssl.SSLHandshakeException: SSLv2Hello is disabled
>     at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:637) ~[na:1.8.0_152]
>     at sun.security.ssl.InputRecord.read(InputRecord.java:527) ~[na:1.8.0_152]
>     at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983) ~[na:1.8.0_152]
>     at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385) ~[na:1.8.0_152]
>     at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:938) ~[na:1.8.0_152]
>     at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) ~[na:1.8.0_152]
>     at sun.security.ssl.AppInputStream.read(AppInputStream.java:71) ~[na:1.8.0_152]
>     at java.io.DataInputStream.readInt(DataInputStream.java:387) ~[na:1.8.0_152]
>     at org.apache.cassandra.net.MessagingService$SocketThread.run(MessagingService.java:1303) ~[apache-cassandra-3.11.1.jar:3.11.1]
> 
> I suspect that this has something to do with the change in
> CASSANDRA-10508. Any suggestions on how to get around this would be very
> much appreciated.
> 
> Thanks, /Tommy
> 
> 
> 



Re: scylladb

2017-03-12 Thread Stefan Podkowinski
If someone would create a benchmark showing that Cassandra is 10x faster
than Aerospike, would that mean Cassandra is 100x faster than ScyllaDB?

Joking aside, I personally don't pay a lot of attention to any published
benchmarks and look at them as pure marketing material. What I'm
interested in instead is to learn why exactly one solution is faster
than the other and I have to say that Avi is doing a really good job
explaining the design motivations behind ScyllaDB in his presentations.

But the Aerospike comparison also has a good point by showing that you
probably always will be able to find a solution that is faster for a
certain workload. Therefore the most important step when looking for the
fastest datastore is to first really understand your workload
characteristics. Unfortunately this is something people tend to skip and
instead get lost in controversial benchmark discussions, which are more
fun than thinking about your data model and talking to people about
projected long-term load. Because if you do, you might realize that
those benchmark test scenarios (e.g. insert 1TB as fast as possible and
measure compaction times) aren't actually that relevant for your
application.


On 03/10/2017 05:58 PM, Bhuvan Rawal wrote:
> Agreed, C++ gives an added advantage to talk to underlying hardware
> with better efficiency. It sounds good, but can a piece of code written
> in C++ give 1000% the throughput of a Java app? Is a TPC design 10X
> more performant than a SEDA arch?
>
> And if C/C++ is indeed that fast, how can Aerospike (which is itself
> written in C) claim to be 10X faster than Scylla
> here http://www.aerospike.com/benchmarks/scylladb-initial/ ?
> (Combining yours and Aerospike's benchmarks it appears that Aerospike
> is 100X more performant than C* - I highly doubt that!!)
>
> For a moment let's forget about evaluating 2 different databases; one
> can observe a 10X performance difference between a mistuned Cassandra
> cluster and one that's tuned as per the data model - there are so many
> tunables in the yaml as well as table configs.
>
> The idea is - in order to strengthen your claim, you need to provide
> complete system metrics (disk, CPU, network) and the point where the
> OPS increase starts to decay, along with the configs used. Plain ops
> per second and 99p latency figures are a black box.
>
> Regards,
> Bhuvan
>
> On Fri, Mar 10, 2017 at 12:47 PM, Avi Kivity wrote:
>
> ScyllaDB engineer here.
>
> C++ is really an enabling technology here. It is directly
> responsible for a small fraction of the gain by executing faster
> than Java.  But it is indirectly responsible for the gain by
> allowing us direct control over memory and threading.  Just as an
> example, Scylla starts by taking over almost all of the machine's
> memory, and dynamically assigning it to memtables, cache, and
> working memory needed to handle requests in flight.  Memory is
> statically partitioned across cores, allowing us to exploit NUMA
> fully.  You can't do these things in Java.
>
> I would say the major contributors to Scylla performance are:
>  - thread-per-core design
>  - replacement of the page cache with a row cache
>  - careful attention to many small details, each contributing a
> little, but with a large overall impact
>
> While I'm here I can say that performance is not the only goal;
> it is stable and predictable performance over varying loads
> and during maintenance operations like repair, without any special
> tuning.  We measure the amount of CPU and I/O spent on foreground
> (user) and background (maintenance) tasks and divide them fairly. 
> This work is not complete but already makes operating Scylla a lot
> simpler.
>
>
> On 03/10/2017 01:42 AM, Kant Kodali wrote:
>> I don't think ScyllaDB's performance is because of C++. The design
>> decisions in scylladb are indeed different from Cassandra such as
>> getting rid of SEDA and moving to TPC and so on. 
>>
>> If someone thinks it is because of C++ then just show the
>> benchmarks that proves it is indeed the C++ which gave 10X
>> performance boost as ScyllaDB claims instead of stating it.
>>
>>
>> On Thu, Mar 9, 2017 at 3:22 PM, Richard L. Burton III wrote:
>>
>> They spend an enormous amount of time focusing on
>> performance. You can expect them to continue on with their
>> optimization and keep crushing it.
>>
>> P.S., I don't work for ScyllaDB.  
>>
>> On Thu, Mar 9, 2017 at 6:02 PM, Rakesh Kumar wrote:
>>
>> In all of their presentations they keep harping on the
>> fact that ScyllaDB is written in C++ and does not carry
>> the overhead of Java.  Still, the difference looks staggering.
>> 

Re: Resources for fire drills

2017-03-01 Thread Stefan Podkowinski
I've just created a page for this topic that we can use to collect some
content:
https://github.com/spodkowinski/cassandra-collab/blob/docs_firedrill/doc/source/operating/failure_scenarios.rst

I've invited both of you, Malte and Benjamin, as collaborators on GitHub,
so you can either push changes or use the GitHub editor for changes.

Let me know if that would work for you.

On 01.03.2017 13:56, benjamin roth wrote:
> @Doc:
> http://cassandra.apache.org/doc/latest/ is built from the git repo, so
> you can add documentation in doc/source and submit a patch.
> I personally think that is not the very best place or way to build a
> knowledge DB, but that's what we have.
> 
> 
> 2017-03-01 13:39 GMT+01:00 Malte Pickhan <malte.pick...@zalando.de>:
> 
> Hi,
> 
> really cool that this discussion gets attention.
> 
> You are right my question was quite open.
> 
> For me it would already be helpful to compile a list like the one Ben
> started, with scenarios that can happen to a cluster
> and what actions/strategies you have to take to resolve the incident
> without losing data and while keeping a healthy cluster.
> 
> Ideally we would add some kind of rating of how hard the scenario is
> to resolve, so that teams can go through a kind of learning curve.
> 
> To begin with, I think it would already be sufficient to document
> the steps for how you can get a cluster into the situation described
> in the scenario.
> 
> Hope it’s a bit clearer now what I mean.
> 
> Is there some kind of community space where we could start a
> document for this purpose?
> 
> Best,
> 
> Malte
> 
> > On 1 Mar 2017, at 13:33, Stefan Podkowinski <s...@apache.org> wrote:
> >
> > I've been thinking about this for a while, but haven't found a
> > practical solution yet, although the term "fire drill" leaves a lot of
> > room for interpretation. The most basic requirements I'd have for
> > these kinds of trainings would start with automated cluster
> > provisioning for each scenario (either for teams or individuals) and
> > provisioning of test data for the cluster, with optionally some kind
> > of load generator constantly running in the background. I started to
> > work on some Ansible scripts that would do that on AWS a couple of
> > months ago, but it turned out to be a lot of work with all the details
> > you have to take care of. So I'd be happy to hear about any existing
> > resources on that as well!
> >
> >
> > On 01.03.2017 10:59, Malte Pickhan wrote:
> >> Hi Cassandra users,
> >>
> >> I am looking for some resources/guides for fire drill scenarios
> >> with Apache Cassandra.
> >>
> >> Do you know anything like that?
> >>
> >> Best,
> >>
> >> Malte
> >>
> 
> 


Re: Resources for fire drills

2017-03-01 Thread Stefan Podkowinski
I've been thinking about this for a while, but haven't found a practical
solution yet, although the term "fire drill" leaves a lot of room for
interpretation. The most basic requirements I'd have for these kinds of
trainings would start with automated cluster provisioning for each
scenario (either for teams or individuals) and provisioning of test data
for the cluster, with optionally some kind of load generator constantly
running in the background. I started to work on some Ansible scripts
that would do that on AWS a couple of months ago, but it turned out to
be a lot of work with all the details you have to take care of. So I'd
be happy to hear about any existing resources on that as well!


On 01.03.2017 10:59, Malte Pickhan wrote:
> Hi Cassandra users,
> 
> I am looking for some resources/guides for fire drill scenarios with
> Apache Cassandra.
> 
> Do you know anything like that?
> 
> Best,
> 
> Malte
> 


Re: Point in time restore

2017-01-11 Thread Stefan Podkowinski
Hi Hannu

It should be as simple as copying the archived commit logs to the restore
directory, specifying the point in time you'd like to restore from the logs
by using the 'restore_point_in_time' setting, and afterwards starting the
node.
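
As a sketch, conf/commitlog_archiving.properties would look roughly like
this (paths and the timestamp are illustrative; the timestamp format is
yyyy:MM:dd HH:mm:ss):

    # run for every commit log segment that gets archived
    archive_command=/bin/cp %path /backup/commitlog_archive/%name
    # run for every archived segment found on startup
    restore_command=/bin/cp -f %from %to
    # where the node scans for archived segments to replay
    restore_directories=/backup/commitlog_restore
    # replay mutations only up to this point in time
    restore_point_in_time=2017:01:10 18:00:00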

On Tue, Jan 10, 2017 at 7:45 PM, Hannu Kröger wrote:

> Hello,
>
> Are there any guides how to do a point-in-time restore for Cassandra?
>
> All I have seen is this:
> http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/configuration/configLogArchive_t.html
>
> That gives an idea how to store the data for restore but how to do an
> actual restore is still a mystery to me.
>
> Any pointers?
>
> Cheers,
> Hannu
>


Re: Cassandra 2.2, 3.0, and beyond

2015-06-11 Thread Stefan Podkowinski
 We are also extending our backwards compatibility policy to cover all 3.x 
 releases: you will be able to upgrade seamlessly from 3.1 to 3.7, for 
 instance, including cross-version repair.  

What will be the EOL policy for releases after 3.0? Given your example, will
3.1 still receive bugfixes at the point when I decide to upgrade to 3.7?


Re: Batch isolation within a single partition

2015-05-19 Thread Stefan Podkowinski
Multiple inserts for the same partition key within a batch will be consolidated
into a single row update operation (since 2.0.6, CASSANDRA-6737:
https://issues.apache.org/jira/browse/CASSANDRA-6737). I.e. you get the same
row-level isolation guarantees
(http://www.datastax.com/dev/blog/row-level-isolation) as any single write
operation on that key.
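
For illustration, a minimal sketch (the events table is made up): both
statements share the partition key 'acct42', so the batch is applied as
one isolated update to that partition:

    BEGIN BATCH
      -- assuming PRIMARY KEY (account, seq), i.e. account is the
      -- partition key: readers will observe both rows or neither,
      -- never just one
      INSERT INTO events (account, seq, payload) VALUES ('acct42', 1, 'a');
      INSERT INTO events (account, seq, payload) VALUES ('acct42', 2, 'b');
    APPLY BATCH;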


From: Martin Krasser [mailto:krass...@googlemail.com]
Sent: Monday, 18 May 2015 12:32
To: user@cassandra.apache.org
Subject: Batch isolation within a single partition

Hello,

I have an application that inserts multiple rows within a single partition (=
all rows share the same partition key) using a BATCH statement. Is it possible
that other clients can partially read that batch, or is the batch applied in
isolation, i.e. other clients can only read all rows of that batch or none of
them?

I understand that a BATCH update to multiple partitions is not isolated but I'm 
not sure if this is also the case for a single partition:

- The article "Atomic batches in Cassandra 1.2"
(http://www.datastax.com/dev/blog/atomic-batches-in-cassandra-1-2) says that
"... we mean atomic in the database sense that if any part of the batch
succeeds, all of it will. No other guarantees are implied; in particular, there
is no isolation."

- On the other hand, the CQL BATCH docs at cassandra.apache.org
(https://cassandra.apache.org/doc/cql3/CQL.html#batchStmt) mention that "... the
[batch] operations are still only isolated within a single partition", which is
a clear statement, but doesn't it contradict the previous and the next one?

- The CQL BATCH docs at docs.datastax.com
(http://docs.datastax.com/en/cql/3.1/cql/cql_reference/batch_r.html) mention
that "... there is no batch isolation. Clients are able to read the first
updated rows from the batch, while other rows are still being updated on the
server. However, transactional row updates within a partition key are
isolated: clients cannot read a partial update." Also, what does
"transactional row updates" mean in this context? A lightweight transaction?
Something else?

Thanks for any help,
Martin


AW: Leap sec

2015-05-18 Thread Stefan Podkowinski
This seems to be a good opportunity to dig a bit deeper into certain 
operational aspects of NTP. Some things to be aware of:

How NTP Operates [1]

It can take several minutes before ntpd updates the system time for the
first time, while all other processes have already started. This can be
especially problematic for newly provisioned systems. You can speed up the
first time sync by using the iburst keyword with the server configuration
command [4]. Without iburst, it took 2-3 minutes on my testing VM to correct
the time after startup. Definitely too long, as Cassandra would already have
joined the cluster. With iburst it was only a few seconds.

Adjustments will be done either by stepping or slewing the clock. This can
happen forward and backwards(!). Stepping will set the corrected value right
away. Slewing will make adjustments in small increments of at most 0.5ms/s by
speeding the clock up or slowing it down. It will take at least 2000 seconds to
adjust the clock by slewing 1 second.
* Time offsets < 128ms (default) will be slewed
* Offsets > 128ms will be stepped unless the -x flag is used. The threshold
value can be changed with a tinker option.
* Offsets > 1000ms will cause ntpd to fail and expect the administrator to fix
the issue (potential hardware error) unless the -g flag is used.

I think it's fair to say that the -g option should always be set. I'm not
fully sure about -x yet. Stepping the clock backwards is not a good idea, of
course. The best solution is probably to have -x set and create alerts on
higher clock skews, which will prompt ops to resolve the situation manually.


Leap second awareness

Make sure your server is leap second aware ahead of time. You do not want to
have the second corrected as part of the normal discrepancy detection process.
Instead, ntpd should know about the leap second in advance, so it can precisely
schedule the adjustment.

There are two ways to make your ntpd instance aware of the upcoming leap second.
This may happen through the upstream ntp server, which may propagate the leap
second one day in advance. But this doesn't have to be the case; you need to
find out if the server pool is configured correctly for this.
Another way to make your ntpd leap second aware is to use a custom leap
seconds file [2]. I had to modify the apparmor profile to make this work [3].

[1] http://doc.ntp.org/4.1.0/ntpd.htm
[2] http://support.ntp.org/bin/view/Support/ConfiguringNTP#Section_6.14.
[3] http://askubuntu.com/questions/571839/leapseconds-file-permission-denied
[4] http://doc.ntp.org/4.1.0/confopt.htm
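
Putting these points together, a minimal ntp.conf sketch (pool hosts and
the leap file path are illustrative; the daemon flags would go e.g. into
/etc/default/ntp on Debian/Ubuntu rather than into ntp.conf):

    # iburst: send a burst of initial packets so the first sync takes
    # seconds instead of minutes, i.e. before Cassandra joins the cluster
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst

    # make ntpd aware of the upcoming leap second in advance, instead of
    # relying on upstream servers to propagate the leap flag
    leapfile /etc/ntp/leap-seconds.list

    # daemon flags (not ntp.conf syntax, shown here for completeness):
    #   -g  allow one large initial correction instead of exiting
    #   -x  always slew, never step the clock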


Von: cass savy [mailto:casss...@gmail.com]
Gesendet: Freitag, 15. Mai 2015 19:25
An: user@cassandra.apache.org
Betreff: Leap sec

Just curious to know on how you are preparing Prod C* clusters for leap sec.

What are the workaorund other than upgrading kernel to 3.4+?
Are you upgrading clusters to Java 7 or higher on client and C* servers?



Java 8

2015-05-07 Thread Stefan Podkowinski
Hi

Are there any plans to support Java 8 for Cassandra 2.0, now that Java 7 is EOL?
Currently Java 7 is also recommended for 2.1. Are there any reasons not to 
recommend Java 8 for 2.1?

Thanks,
Stefan