Re: Is my range read query behaving strangely?

2019-06-11 Thread Laxmikant Upadhyay
Does a range query ignore purgeable tombstones (ones that have crossed the gc
grace period) in some cases?
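
For anyone reproducing this, one quick way to see what each replica actually
holds after the flushes is to dump the flushed sstable on every node and
compare the cells. A rough sketch, assuming the sstable2json tool that ships
with 2.1 and default ccm paths (cluster name, table id and sstable generation
are placeholders - adjust to your layout):

# run against each node's data directory (node1..node3); expiring cells and
# tombstones carry extra flags in the JSON output, live cells do not
sstable2json ~/.ccm/<cluster>/node1/data*/test/table1-*/test-table1-ka-*-Data.db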

On Tue, Jun 11, 2019, 2:56 PM Laxmikant Upadhyay 
wrote:

> In a 3-node Cassandra 2.1.16 cluster, one node has an old mutation and
> two nodes have an evictable tombstone (past its gc grace period) produced by
> a TTL. A range read query at LOCAL_QUORUM returns the old mutation as the
> result, whereas the expected result is empty. Running the same query a second
> time returns no data, as expected. Why this strange behaviour?
>
>
> *Steps to Reproduce :*
> Create a cassandra-2.1.16 3-node cluster. Disable hinted handoff for each
> node.
>
> #ccm node1 nodetool ring
> Datacenter: datacenter1
> ==========
> Address    Rack   Status  State   Load       Owns      Token
>                                                        3074457345618258602
> 127.0.0.1  rack1  Up      Normal  175.12 KB  100.00%   -9223372036854775808
> 127.0.0.2  rack1  Up      Normal  177.87 KB  100.00%   -3074457345618258603
> 127.0.0.3  rack1  Up      Normal  175.13 KB  100.00%   3074457345618258602
>
> #Connect to cqlsh and set CONSISTENCY LOCAL_QUORUM;
>
> cqlsh> CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = { 'class' :
> 'NetworkTopologyStrategy', 'datacenter1' : 3 };
> cqlsh> CREATE TABLE test.table1 (key text, col text, val text,PRIMARY KEY
> ((key), col));
> cqlsh> ALTER TABLE test.table1  with GC_GRACE_SECONDS = 120;
>
> cqlsh> INSERT INTO test.table1  (key, col, val) VALUES ('key2',
> 'abc','xyz');
>
> #ccm flush
>
> #ccm node3 stop
>
> cqlsh> INSERT INTO test.table1  (key, col, val) VALUES ('key2',
> 'abc','xyz') USING TTL 60;
>
> #ccm flush
>
> #wait for 3 min so that the tombstone crosses its gc grace period.
>
> #ccm node3 start
>
> cqlsh> select * from test.table1 where token (key) > 3074457345618258602
> and token (key) < -9223372036854775808 ;
>
>  key  | col | val
> --+-+-
>  key2 | abc | xyz
>
> (1 rows)
>
> #ccm flush
> -> Here read repair triggers and the old mutation moves to one of the
> nodes where the tombstone is present (not to both nodes)
>
>
> cqlsh> select * from test.table1 where token (key) > 3074457345618258602
> and token (key) < -9223372036854775808 ;
>
>  key | col | val
> -+-+-
>
> (0 rows)
>
>
> --
>
> regards,
> Laxmikant Upadhyay
>
>


New row cache options proposition

2019-06-11 Thread Perez Felipe
Hi All,

I would like to propose a new feature in the Cassandra row cache.

The proposal is for the use case where most of the read access touches only a
few columns of a table that has millions of rows.

To make better use of the cache memory, I have two possible improvements to propose:


1)  Add options to the row cache to select which columns should be cached (see the sketch after this list)

2)  Introduce the concept of a VIEW (as most databases have), where you could
apply different cache strategies to the same table
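
To make option (1) concrete, a purely illustrative sketch of what the table
option could look like - the 'cached_columns' key does not exist today, and
both it and the table/column names here are only placeholders:

cqlsh> -- today: row caching keeps whole rows for the cached partitions
cqlsh> ALTER TABLE my_ks.my_table WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};
cqlsh> -- hypothetical: keep only the hot columns in the cached rows
cqlsh> ALTER TABLE my_ks.my_table WITH caching = {'keys': 'ALL', 'rows_per_partition': '100', 'cached_columns': 'col_a, col_b'};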

Let me know your thoughts.

[]'s
Felipe


Felipe Perez
Cloud OTA Engineer
Gemalto is now part of the Thales Group.
Please note that my new email address is 
felipe.pe...@thalesgroup.com
THALES
La Vigie, Avenue du Jujubier
ZI Athélia IV, 13705 La Ciotat Cedex, France
www.thalesgroup.com





Re: "4.0: TBD" -> "4.0: Est. Q4 2019"?

2019-06-11 Thread Scott Andreas
Thanks for starting this discussion, Sumanth! Added a round of comments as well.

Summarizing my non-binding feedback: I feel that many of the items under 
"Alpha" and "Beta" should be achieved prior to the release of an alpha, 
especially those related to correctness/safety, scope lock, feature 
completeness, deprecation, and backwards compatibility. Establishing a higher 
standard for official project releases (even at the alpha and beta stage) will 
help us really polish the final build together.

Ideally, I feel that contributors should have completed extensive 
testing/validation to ensure that no critical or severe bugs exist prior to the 
release of an alpha (e.g., data loss, consistency violations, incorrect 
responses to queries, etc). Perhaps we can add a line to this effect. 

Ensuring that we've met that bar prior to alpha will help us focus the final
stages of the release on gathering feedback from users and developers:
validating tooling and automation, compatibility with less commonly-used client
libraries, testing new features, and evaluating performance and stability under
their workloads.

– Scott

On 6/11/19, 6:45 AM, "Sumanth Pasupuleti"  
wrote:

Thanks for the feedback on the product stages/ release life cycle document.
I have incorporated the suggestions and am looking for any additional feedback
folks may have.

https://docs.google.com/document/d/1bS6sr-HSrHFjZb0welife6Qx7u3ZDgRiAoENMLYlfz8/edit#

Thanks,
Sumanth


Re: "4.0: TBD" -> "4.0: Est. Q4 2019"?

2019-06-11 Thread Joshua McKenzie
Had a few more points of feedback for you Sumanth; trying to get some
clarity on how the PMC might function w/relation to the various votes and
definitions.

Thanks for the integration and continued effort on this - it's looking good!

On Tue, Jun 11, 2019 at 9:45 AM Sumanth Pasupuleti <
sumanth.pasupuleti...@gmail.com> wrote:

> Thanks for the feedback on the product stages/ release life cycle document.
> I have incorporated the suggestions and am looking for any additional feedback
> folks may have.
>
> https://docs.google.com/document/d/1bS6sr-HSrHFjZb0welife6Qx7u3ZDgRiAoENMLYlfz8/edit#
>
> Thanks,
> Sumanth
>

Re: "4.0: TBD" -> "4.0: Est. Q4 2019"?

2019-06-11 Thread Sumanth Pasupuleti
Thanks for the feedback on the product stages/ release life cycle document.
I have incorporated the suggestions and am looking for any additional feedback
folks may have.
https://docs.google.com/document/d/1bS6sr-HSrHFjZb0welife6Qx7u3ZDgRiAoENMLYlfz8/edit#

Thanks,
Sumanth

On Tue, May 28, 2019 at 10:43 PM Scott Andreas  wrote:

> Echoing Jon’s point here –
>
> JH: “My thinking is I'd like to be able to recommend 4.0.0 as a production
> ready
> database for business critical cases”
>
> I feel that this is a standard that is both appropriate and achievable,
> and one I’m legitimately excited about.
>
> Re: the current state of the test plan wiki in Confluence, I owe another
> pass through. There has been a lot of progress here, but I’ve let perfect
> be the enemy of the good in getting updates out. I’ll complete that pass
> later this week.
>
> Cheers,
>
> — Scott
>
> > On May 28, 2019, at 10:48 AM, Dinesh Joshi  wrote:
> >
> > +1. Wiki could be useful to document what the overall plan is. Jira to
> track progress.
> >
> > Dinesh
> >
> >>> On May 28, 2019, at 10:20 AM, Joshua McKenzie 
> wrote:
> >>>
> >>>
> >>> The unofficial rule is to not upgrade to prod till .10 is cut.
> >>
> >> FWIW, I believe it's historically .6. Which is still not a great look
> for
> >> the project.
> >>
> >> There's a ton of work going into testing 4.0 already.
> >>
> >> While I intuitively and anecdotally (from the people I've backchanneled
> >> with) believe this to be true as well, the referenced wiki page[1] and
> >> jql[2] doesn't look like it's an up to date reflection of the testing
> >> efforts going on. Is there another place this information is stored /
> >> queryable we can surface to people to keep us all coordinated?
> >>
> >> [1]
> >>
> https://cwiki.apache.org/confluence/display/CASSANDRA/4.0+Quality%3A+Components+and+Test+Plans
> >> [2]
> >>
> https://issues.apache.org/jira/browse/CASSANDRA-14862?jql=project%20%3D%20CASSANDRA%20AND%20%20labels%20%3D%204.0-QA
> >>
> >> On Tue, May 28, 2019 at 12:57 PM sankalp kohli 
> >> wrote:
> >>
> >>> Hi Jon,
> >>>  When you say 4.0 release, how do you match it with 3.0 minor
> >>> releases? The unofficial rule is to not upgrade to prod till .10 is
> cut.
> >>> Also, due to heavy investment in testing, I don't think it will take as
> long
> >>> as 3.0, but I want to know what your thinking is on this.
> >>>
> >>> Thanks,
> >>> Sankalp
> >>>
>  On Tue, May 28, 2019 at 9:40 AM Jon Haddad  wrote:
> 
>  Sept is a pretty long ways off.  I think the ideal case is we can
> >>> announce
>  4.0 release at the summit.  I'm not putting this as a "do or die"
> date,
> >>> and
>  I don't think we need to announce it or make promises.  Sticking with
> >>> "when
>  it's ready" is the right approach, but we need a target, and this is
> imo
> >>> a
>  good one.
> 
>  This date also gives us a pretty good runway.  We could cut our first
>  alphas in mid June / early July, betas in August and release in Sept.
>  There's a ton of work going into testing 4.0 already.
>  Landing CASSANDRA-15066 will put us in a pretty good spot.  We've
> >>> developed
>  tooling at TLP that will make it a lot easier to spin up dev clusters
> in
>  AWS as well as stress test them.  I've written about this a few times
> in
>  the past, and I'll have a few blog posts coming up that will help show
> >>> this
>  in more details.
> 
>  There's some other quality of life things we should try to hammer out
>  before then.  Updating our default JVM settings would be nice, for
>  example.  Improving documentation (the data modeling section in
>  particular), fixing the dynamic snitch issues [1], and some
> improvements
> >>> to
>  virtual tables like exposing the sstable metadata [2], and exposing
> table
>  statistics [3] come to mind.  The dynamic snitch improvement will help
>  performance in a big way, and the virtual tables will go a long way to
>  helping with quality of life.  I showed a few folks virtual tables at
> the
>  Accelerate conference last week and the missing table statistics was a
> >>> big
>  shock.  If we can get them in, it'll be a big help to operators.
> 
>  [1] https://issues.apache.org/jira/browse/CASSANDRA-14459
>  [2] https://issues.apache.org/jira/browse/CASSANDRA-14630
>  [3] https://issues.apache.org/jira/browse/CASSANDRA-14572
> 
> 
> 
> 
> > On Mon, May 27, 2019 at 2:36 PM Nate McCall 
> wrote:
> >
> > Hi Sumanth,
> > Thank you so much for taking the time to put this together.
> >
> > Cheers,
> > -Nate
> >
> > On Tue, May 28, 2019 at 3:27 AM Sumanth Pasupuleti <
> > sumanth.pasupuleti...@gmail.com> wrote:
> >
> >> I have taken an initial stab at documenting release types and exit
> > criteria
> >> in a google doc, to get us started, and to collaborate on.
> >>
> >>
> >
> 
> >>>
> 

Re: Java8 compatibility broken in 4.0

2019-06-11 Thread Attila Wind

Hi All,

Just FYI
I did a Java 8 build (dist, to be more specific, which involves $ ant
artifacts) from 4.0 trunk (after a git pull) on Sunday morning (the 9th of
June) - there were no issues for me.

I did it on both Windows and Linux (Debian) machines

Maybe there have been commits since then - I have not checked. But if not,
then it is interesting...


cheers

Attila


On 11.06.2019 12:40 PM, Aleksey Yeshchenko wrote:

No, this is not intentional. At the very least for 4.0 we are going to keep 
compatibility with Java 8.


On 11 Jun 2019, at 10:04, Tommy Stendahl  wrote:

Hi,

I can't find a way to build 4.0 artifacts so that I can run Cassandra using Java 8.
Building the artifacts ("ant artifacts" or "ant mvn-install") requires building with Java 11,
but since CASSANDRA-15108 the result is no longer Java 8 compatible. Is this intentional
or was it done by mistake?

I also tried to change this back and force Java 11 to build Java 8 compatible
code, but it seems that there are some changes in the ByteBuffer implementation
in Java 11 that break when executing in a Java 8 JVM; when I started Cassandra
it threw a NoSuchMethodError when trying to open the sstables.

The reason for this is that some methods in the ByteBuffer implementation in
Java 11 return a ByteBuffer, whereas in Java 8 and before they returned a Buffer. I
found a bit of information on Stackoverflow
(https://stackoverflow.com/questions/48693695/java-nio-buffer-not-loading-clear-method-on-runtime)
so there seems to be a way to fix this, even if the resulting code looks ugly
to me.

My question is: are we intentionally giving up Java 8 compatibility in 4.0, or do
we still want to support Java 8?

Regards,
Tommy





Re: Java8 compatibility broken in 4.0

2019-06-11 Thread Sam Tunnicliffe



> On 11 Jun 2019, at 10:04, Tommy Stendahl  wrote:
> 
> Hi,
> 
> I can't find a way to build 4.0 artifacts so I can run Cassandra using Java8. 
> Building the artifacts ("ant artifacts" or "ant mvn-install") requires 
> building with Java11 but since CASSANDRA-15108 the result is not Java8 
> compatible anymore, is this intentional or was this done by mistake?

This was unintentional; it looks like we just missed removing the
“unless=java.version.8” condition from the artifacts & _artifacts-init targets in
build.xml.
I’ve tested that, with those removed, it’s just a matter of having the
appropriate target JDK specified in $JAVA_HOME (and using -Duse.jdk11=true or
$CASSANDRA_USE_JDK11=true if building with JDK 11).

We’ll fix this in trunk, but in the meantime removing those conditions should 
unblock you.
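
In other words, once those conditions are removed, the flow is roughly the
following (the JDK paths are just placeholders):

# run Cassandra on Java 8: build the artifacts with a JDK 8 in JAVA_HOME
export JAVA_HOME=/path/to/jdk8
ant artifacts

# or build with JDK 11, flagging it explicitly
export JAVA_HOME=/path/to/jdk11
CASSANDRA_USE_JDK11=true ant artifacts   # or: ant -Duse.jdk11=true artifacts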

Thanks,
Sam

> 
> I also tried to change this back and force Java11 to build Java8 compatible 
> code but it seams that there are some  changes in the ByetBuffer 
> implementation in Java11 that breaks when execution in a Java8 jvm, when I 
> started Cassandra it throws a NoSuchMethodError when trying to open the 
> sstables.
> 
> The reason for this is that some methods in the ByteBuffer implementation in 
> Java11 returns a Bytebuffer but in Java8 and before they returned a Buffer. I 
> found a bit of information on Stackoverflow 
> (https://stackoverflow.com/questions/48693695/java-nio-buffer-not-loading-clear-method-on-runtime)
>  so there seams to be a way to fix this even if the resulting code looks ugly 
> to me.
> 
> My question is, are we intentionally giving up Java8 comparability in 4.0 or 
> do we still want to support Java8?
> 
> Regards,
> Tommy





Re: Java8 compatibility broken in 4.0

2019-06-11 Thread Aleksey Yeshchenko
No, this is not intentional. At the very least for 4.0 we are going to keep 
compatibility with Java 8.

> On 11 Jun 2019, at 10:04, Tommy Stendahl  wrote:
> 
> Hi,
> 
> I can't find a way to build 4.0 artifacts so I can run Cassandra using Java8. 
> Building the artifacts ("ant artifacts" or "ant mvn-install") requires 
> building with Java11 but since CASSANDRA-15108 the result is not Java8 
> compatible anymore, is this intentional or was this done by mistake?
> 
> I also tried to change this back and force Java11 to build Java8 compatible 
> code but it seams that there are some  changes in the ByetBuffer 
> implementation in Java11 that breaks when execution in a Java8 jvm, when I 
> started Cassandra it throws a NoSuchMethodError when trying to open the 
> sstables.
> 
> The reason for this is that some methods in the ByteBuffer implementation in 
> Java11 returns a Bytebuffer but in Java8 and before they returned a Buffer. I 
> found a bit of information on Stackoverflow 
> (https://stackoverflow.com/questions/48693695/java-nio-buffer-not-loading-clear-method-on-runtime)
>  so there seams to be a way to fix this even if the resulting code looks ugly 
> to me.
> 
> My question is, are we intentionally giving up Java8 comparability in 4.0 or 
> do we still want to support Java8?
> 
> Regards,
> Tommy





Java8 compatibility broken in 4.0

2019-06-11 Thread Tommy Stendahl
Hi,

I can't find a way to build 4.0 artifacts so that I can run Cassandra using Java 8.
Building the artifacts ("ant artifacts" or "ant mvn-install") requires building
with Java 11, but since CASSANDRA-15108 the result is no longer Java 8 compatible.
Is this intentional or was it done by mistake?

I also tried to change this back and force Java 11 to build Java 8 compatible
code, but it seems that there are some changes in the ByteBuffer implementation
in Java 11 that break when executing in a Java 8 JVM; when I started Cassandra
it threw a NoSuchMethodError when trying to open the sstables.

2019-05-28T16:20:24.506+0200 [SSTableBatchOpen:4] ERROR 
o.a.c.i.s.format.SSTableReader$2:576 run Corrupt sstable 
/var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-2202-big=[CompressionInfo.db,
 TOC.txt, Statistics.db, Summary.db, Index.db, Data.db, Filter.db, 
Digest.crc32]; skipping table
org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
/var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-2202-big-Statistics.db
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:504)
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:375)
at 
org.apache.cassandra.io.sstable.format.SSTableReader$2.run(SSTableReader.java:571)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: 
java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
at 
org.apache.cassandra.utils.memory.BufferPool.allocateDirectAligned(BufferPool.java:528)
at 
org.apache.cassandra.utils.memory.BufferPool.access$600(BufferPool.java:46)
at 
org.apache.cassandra.utils.memory.BufferPool$GlobalPool.allocateMoreChunks(BufferPool.java:279)
at 
org.apache.cassandra.utils.memory.BufferPool$GlobalPool.get(BufferPool.java:248)
at 
org.apache.cassandra.utils.memory.BufferPool$LocalPool.addChunkFromGlobalPool(BufferPool.java:354)
at 
org.apache.cassandra.utils.memory.BufferPool$LocalPool.get(BufferPool.java:397)
at 
org.apache.cassandra.utils.memory.BufferPool.maybeTakeFromPool(BufferPool.java:143)
at 
org.apache.cassandra.utils.memory.BufferPool.takeFromPool(BufferPool.java:115)
at org.apache.cassandra.utils.memory.BufferPool.get(BufferPool.java:94)
at 
org.apache.cassandra.io.util.BufferManagingRebufferer.(BufferManagingRebufferer.java:45)
at 
org.apache.cassandra.io.util.BufferManagingRebufferer$Unaligned.(BufferManagingRebufferer.java:117)
at 
org.apache.cassandra.io.util.SimpleChunkReader.instantiateRebufferer(SimpleChunkReader.java:60)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:319)
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:126)
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:500)
... 8 common frames omitted

The reason for this is that some methods in the ByteBuffer implementation in
Java 11 return a ByteBuffer, whereas in Java 8 and before they returned a Buffer. I
found a bit of information on Stackoverflow
(https://stackoverflow.com/questions/48693695/java-nio-buffer-not-loading-clear-method-on-runtime),
so there seems to be a way to fix this, even if the resulting code looks ugly
to me.
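
For reference, the shape of the workaround discussed there is roughly the
following (a minimal sketch, not the actual Cassandra code):

import java.nio.Buffer;
import java.nio.ByteBuffer;

public class ByteBufferCompat
{
    // Compiled on JDK 11 without --release 8, this call records the JDK 9+
    // covariant signature ByteBuffer.position(I)Ljava/nio/ByteBuffer;,
    // which does not exist on a Java 8 runtime -> NoSuchMethodError.
    static void brokenOnJava8(ByteBuffer buf, int pos)
    {
        buf.position(pos);
    }

    // Casting to the Buffer supertype makes javac emit the Java 8 friendly
    // descriptor Buffer.position(I)Ljava/nio/Buffer; instead.
    static void java8Safe(ByteBuffer buf, int pos)
    {
        ((Buffer) buf).position(pos);
    }
}

Compiling with javac --release 8 (rather than only -source/-target 8) also
avoids the problem, since it links against the Java 8 class library signatures.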

My question is: are we intentionally giving up Java 8 compatibility in 4.0, or do
we still want to support Java 8?

Regards,
Tommy