RE: Cassandra 3.11 fails to start with JDK8u162

2018-01-18 Thread Steinmaurer, Thomas
Ben,

at least 3.0.14 starts up fine for me with 8u162.

Regards,
Thomas

From: Ben Wood [mailto:bw...@mesosphere.io]
Sent: Donnerstag, 18. Jänner 2018 23:24
To: user@cassandra.apache.org
Subject: Re: Cassandra 3.11 fails to start with JDK8u162

Am I correct in assuming 10091 didn't go into 3.0?

On Thu, Jan 18, 2018 at 2:32 AM, Steinmaurer, Thomas wrote:
Sam,

thanks for the confirmation. Going back to u152 then.

Thomas

From: li...@beobal.com 
[mailto:li...@beobal.com] On Behalf Of Sam Tunnicliffe
Sent: Donnerstag, 18. Jänner 2018 10:16
To: user@cassandra.apache.org
Subject: Re: Cassandra 3.11 fails to start with JDK8u162

This isn't (wasn't) a known issue, but the way that CASSANDRA-10091 was 
implemented using internal JDK classes means it was always possible that a 
minor JVM version change could introduce incompatibilities (CASSANDRA-2967 is 
also relevant).
We did already know that we need to revisit the way this works in 4.0 for JDK9 
support (CASSANDRA-9608), so we should identify a more stable solution & apply 
that to both 3.11 and 4.0.
In the meantime, downgrading to 152 is the only real option.

I've opened https://issues.apache.org/jira/browse/CASSANDRA-14173 for this.

Thanks,
Sam
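Until a fix lands, the practical mitigation Sam describes (stay below 8u162) can be enforced before startup. A minimal sketch of such a guard — a hypothetical helper script, not part of Cassandra; the 162 cutoff reflects this thread, and any eventual fixed build would need to be whitelisted:

```python
# Hypothetical pre-start guard: refuse to start Cassandra 3.11 on JDK 8 builds
# known to break CASSANDRA-10091's use of internal RMI classes (8u162 and
# later, until a fix ships). Version strings use the standard "1.8.0_NNN" form.

def jdk8_update(version: str) -> int:
    """Extract the update number from a '1.8.0_NNN' JDK version string."""
    base, _, update = version.partition("_")
    if base != "1.8.0":
        raise ValueError("not a JDK 8 version string: " + version)
    return int(update)

def safe_for_cassandra_311(version: str) -> bool:
    # 8u152 is known good per this thread; 8u162 added the extra
    # RMIExporter.exportObject argument that triggers AbstractMethodError.
    return jdk8_update(version) < 162

print(safe_for_cassandra_311("1.8.0_152"))  # True
print(safe_for_cassandra_311("1.8.0_162"))  # False
```

A wrapper around the init script could call this and abort with a clear message instead of letting the daemon die with an AbstractMethodError mid-startup.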


On 18 January 2018 at 08:43, Nicolas Guyomar wrote:
Thank you Thomas for starting this thread, I'm having exactly the same issue on 
AWS EC2 RHEL-7.4_HVM-20180103-x86_64-2-Hourly2-GP2 (ami-dc13a4a1)  I was 
starting to bang my head on my desk !

So I'll try to downgrade back to 152 then !



On 18 January 2018 at 08:34, Steinmaurer, Thomas wrote:
Hello,

after switching from JDK8u152 to JDK8u162, Cassandra fails with the following 
stack trace upon startup.

ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception 
encountered during startup
java.lang.AbstractMethodError: 
org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
at 
javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
 ~[na:1.8.0_162]
at 
javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
 ~[na:1.8.0_162]
at 
javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
 ~[na:1.8.0_162]
at 
org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
 ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
at 
org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
 [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) 
[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600) 
[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]

Is this a known issue?


Thanks,
Thomas

The contents of this e-mail are intended for the named addressee only. It
contains information that may be confidential. Unless you are the named
addressee or an authorized designee, you may not copy or use it, or disclose it
to anyone else. If you received it in error please notify us immediately and
then destroy it. Dynatrace Austria GmbH (registration number FN 91482h) is a
company registered in Linz whose registered office is at 4040 Linz, Austria,
Freistädterstraße 313





--
Ben Wood
Software Engineer - Data Agility
Mesosphere

Re: Cassandra 3.11 fails to start with JDK8u162

2018-01-18 Thread Ben Wood
Am I correct in assuming 10091 didn't go into 3.0?

On Thu, Jan 18, 2018 at 2:32 AM, Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:

> Sam,
>
>
>
> thanks for the confirmation. Going back to u152 then.
>
>
>
> Thomas
>
>
>
> *From:* li...@beobal.com [mailto:li...@beobal.com] *On Behalf Of *Sam
> Tunnicliffe
> *Sent:* Donnerstag, 18. Jänner 2018 10:16
> *To:* user@cassandra.apache.org
> *Subject:* Re: Cassandra 3.11 fails to start with JDK8u162
>
>
>
> This isn't (wasn't) a known issue, but the way that CASSANDRA-10091 was
> implemented using internal JDK classes means it was always possible that a
> minor JVM version change could introduce incompatibilities (CASSANDRA-2967
> is also relevant).
>
> We did already know that we need to revisit the way this works in 4.0 for
> JDK9 support (CASSANDRA-9608), so we should identify a more stable solution
> & apply that to both 3.11 and 4.0.
>
> In the meantime, downgrading to 152 is the only real option.
>
>
>
> I've opened https://issues.apache.org/jira/browse/CASSANDRA-14173 for
> this.
>
>
>
> Thanks,
>
> Sam
>
>
>
>
>
> On 18 January 2018 at 08:43, Nicolas Guyomar wrote:
>
> Thank you Thomas for starting this thread, I'm having exactly the same
> issue on AWS EC2 RHEL-7.4_HVM-20180103-x86_64-2-Hourly2-GP2
> (ami-dc13a4a1)  I was starting to bang my head on my desk !
>
>
>
> So I'll try to downgrade back to 152 then !
>
>
>
>
>
>
>
> On 18 January 2018 at 08:34, Steinmaurer, Thomas <
> thomas.steinmau...@dynatrace.com> wrote:
>
> Hello,
>
>
>
> after switching from JDK8u152 to JDK8u162, Cassandra fails with the
> following stack trace upon startup.
>
>
>
> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception
> encountered during startup
>
> java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>
> at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
> ~[na:1.8.0_162]
>
> at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
> ~[na:1.8.0_162]
>
> at 
> javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
> ~[na:1.8.0_162]
>
> at 
> org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
> ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>
> at 
> org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188)
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689)
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>
>
>
> Is this a known issue?
>
>
>
>
>
> Thanks,
>
> Thomas
>
>
>
>
>
>
>
>



-- 
Ben Wood
Software Engineer - Data Agility
Mesosphere


Re: AbstractMethodError from JMXServerUtils after update from Java 1.8.0_112 to 1.8.0_162

2018-01-18 Thread Michael Shuler
https://issues.apache.org/jira/browse/CASSANDRA-14173

On 01/18/2018 03:29 PM, Stephen Rosenthal wrote:
> Hi,
> 
>  
> 
> I got the following error after updating my Cassandra system from Java
> 1.8.0_112 to 1.8.0_162:
> 
>  
> 
> java.lang.AbstractMethodError:
> org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
> 
>  
> 
> I did the usual Googling and here’s what I’ve concluded:
> 
>   * Between Java 1.8.0_152 and 1.8.0_162, the interface
> com.sun.jmx.remote.internal.RMIExporter was changed to add a 5th
> argument to the exportObject method. The class is in a Sun “internal”
> package, so it probably wasn’t intended for use outside of the JDK.
>   * Cassandra references RMIExporter in
> org.apache.cassandra.utils.JMXServerUtils but uses only 4 arguments
> in the exportObject method:
> 
> https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/utils/JMXServerUtils.java
>   * I was running on Cassandra 3.11 but I also tried 3.11.1 and the
> problem remained.
> 
>  
> 
> I couldn’t find anyone else reporting this bug so I must be doing
> something different. Have others seen this bug? Or is it something
> obvious, i.e. does Cassandra not support running on Java 1.8.0_162 yet?
> 
>  
> 
> Thanks!
> 
> Stephen
> 


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



AbstractMethodError from JMXServerUtils after update from Java 1.8.0_112 to 1.8.0_162

2018-01-18 Thread Stephen Rosenthal
Hi,

I got the following error after updating my Cassandra system from Java 
1.8.0_112 to 1.8.0_162:

java.lang.AbstractMethodError: 
org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;

I did the usual Googling and here’s what I’ve concluded:

  *   Between Java 1.8.0_152 and 1.8.0_162, the interface 
com.sun.jmx.remote.internal.RMIExporter was changed to add a 5th argument to 
the exportObject method. The class is in a Sun “internal” package, so it probably 
wasn’t intended for use outside of the JDK.
  *   Cassandra references RMIExporter in 
org.apache.cassandra.utils.JMXServerUtils but uses only 4 arguments in the 
exportObject method: 
https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/utils/JMXServerUtils.java
  *   I was running on Cassandra 3.11 but I also tried 3.11.1 and the problem 
remained.

I couldn’t find anyone else reporting this bug so I must be doing something 
different. Have others seen this bug? Or is it something obvious, i.e. does 
Cassandra not support running on Java 1.8.0_162 yet?
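The mangled name in the AbstractMethodError actually encodes the mismatch described above: it is a JVM method descriptor, and counting its parameters shows the runtime now expects a 5-argument exportObject while the old Cassandra class only implements 4. A simplified descriptor parser as an illustration (the grammar is the standard JVM one; this sketch handles object, primitive, and array types only):

```python
# Count the parameters in a JVM method descriptor such as the one embedded in
# the AbstractMethodError message. Object types look like L<classname>; and
# primitives are single letters (B C D F I J S Z); '[' marks array dimensions.

def count_params(descriptor: str) -> int:
    params = descriptor[descriptor.index("(") + 1 : descriptor.index(")")]
    count, i = 0, 0
    while i < len(params):
        while params[i] == "[":          # skip array dimension markers
            i += 1
        if params[i] == "L":             # object type runs to the next ';'
            i = params.index(";", i) + 1
        else:                            # single-letter primitive type
            i += 1
        count += 1
    return count

# The descriptor from the stack trace in this thread:
sig = ("(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;"
       "Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)"
       "Ljava/rmi/Remote;")
print(count_params(sig))  # 5 -- the fifth being the new sun.misc.ObjectInputFilter
```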

Thanks!
Stephen


Re: ALTER default_time_to_live

2018-01-18 Thread Vlad
Hi, thanks for the answer!
I've read the article about TWCS, and I don't understand how the claim

"When rows reach their TTL (10 minutes here), they turn into tombstones. Our 
table defines that tombstones can be purged 1 minute after they were created.If 
all rows are created with the same TTL, SSTables will get 100% droppable 
tombstones eventually and perform full SSTable deletion instead of purging 
tombstones through compaction."

squares with
"Once the major compaction for a time window is completed, no further 
compaction of the data will ever occur."

In the above example the TTL is 10 minutes, but there is only one time window. As 
far as I understand, C* never compacts a past bucket. Does it check tombstones anyway? 
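One way to reconcile the two claims, as a sketch under assumed semantics (not Cassandra source): dropping a fully expired SSTable is a cheap per-file metadata check, not a recompaction of the bucket, so it can happen even though the window itself is never compacted again.

```python
# Assumed model: an SSTable in a closed TWCS window is never recompacted, but
# Cassandra can still delete the *whole file* once every cell in it is a
# purgeable tombstone. With the blog post's numbers: TTL 10 minutes,
# tombstones purgeable 1 minute (gc_grace) after creation.

TTL = 10 * 60            # rows expire after 10 minutes
GC_GRACE = 60            # tombstones purgeable 1 minute after they appear

def sstable_droppable(write_times, now):
    """True when every row's tombstone is past gc_grace, so the whole
    SSTable can be dropped without compacting the window."""
    return all(now >= t + TTL + GC_GRACE for t in write_times)

window = [0, 120, 300]                  # write timestamps within one window
print(sstable_droppable(window, 600))   # False: newest row not yet expired
print(sstable_droppable(window, 961))   # True: 300 + 600 + 60 = 960 has passed
```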

On Thursday, January 18, 2018 1:32 PM, Alain RODRIGUEZ wrote:

> I set default_time_to_live for existing table. Does it affect existing data?

No, it sets a default TTL for future writes (though that is no guarantee, as it 
can be overridden in any specific query).

> It seems data to be deleted, but after compaction, I don't see any disk space 
> freed as expected

Indeed, compaction is what evicts tombstones, yet some conditions must be 
respected before tombstones can be removed (for consistency reasons). I detailed 
this last year; even though the content is a bit old, the main principles still 
hold and the tuning options are still relevant.

About deletes and tombstones: 
http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html

tl;dr: I would give a try to unchecked_tombstone_compaction: true. Maybe also 
consider using TWCS because of this: "TTL is also ten days on one table and 100 
days on the other." But I really recommend you to understand how this all works 
so you can act wisely. My guess can be wrong.

About TWCS: http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html

C*heers,
---
Alain Rodriguez - @arodream - alain@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2018-01-18 11:15 GMT+00:00 Vlad:

> Hi,
> I set default_time_to_live for an existing table. Does it affect existing
> data? It seems data to be deleted, but after compaction, I don't see any
> disk space freed as expected. The database has data for almost a year, GC
> time is ten days, and the TTL is ten days on one table and 100 days on the
> other.
> Cassandra version 3.11.0
>
> Thanks.

Re: Repair giving error

2018-01-18 Thread Akshit Jain
Hi Alain,
Thanks for the response.
I'm using Cassandra 3.10.
nodetool status shows all the nodes up.
No schema disagreement.
Port 7000 is open.

Regards
Akshit Jain
9891724697

On Thu, Jan 18, 2018 at 4:53 PM, Alain RODRIGUEZ  wrote:

> Hello,
>
> It looks like a communication issue.
>
> What Cassandra version are you using?
> What's the result of 'nodetool status '?
> Any schema disagreement 'nodetool describecluster'?
> Is the port 7000 opened and the nodes communicating with each other?(Ping
> is not proving connection is up, even though it is good to know the machine
> is there and up :)).
> Any other errors you could see in the logs?
>
> You might want to consider an open source project my coworkers have been
> working on (and are maintaining) called Reaper, which aims at making
> repairs more efficient and easier to manage, as repair is one of the
> trickiest operations for a Cassandra operator:
> http://cassandra-reaper.io/. I did not work on this project directly, but
> we have had good feedback and like this tool ourselves.
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
>
>
>
> 2018-01-14 7:47 GMT+00:00 Akshit Jain :
>
>> ​I have a 10 node C* cluster with 4-5 keyspaces​.
>> I tried to perform nodetool repair one by one for each keyspace.
>> For some keyspaces the repair passed but for some it gave this error:
>> ​
>> I am not able to figure out what is causing this issue.The replica nodes
>> are up and I am able to ping them from this node.​
>> ​Any suggestions?​
>>
>> *Error I am getting on incremental repair:*
>>
>> *[2018-01-10 12:50:14,047] Did not get positive replies from all
>> endpoints. List of failed endpoint(s): [​a.b.c.d, ​e.f.g.h]*
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *-- StackTrace --
>> java.lang.RuntimeException: Repair job has failed with the error message:
>> [2018-01-10 12:50:14,047] Did not get positive replies from all endpoints.
>> List of failed endpoint(s): [a.b.c.d, e.f.g.h]
>> at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:115)
>> at org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
>> at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
>> at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
>> at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
>> at com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)*
>>
>
>
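Part of the checklist Alain gives (nodes up, ports open) can be automated. A sketch that flags non-Up/Normal nodes in `nodetool status` output — the parsing is illustrative only, since the exact column layout varies between Cassandra versions:

```python
# Flag nodes that are not in the Up/Normal (UN) state in `nodetool status`
# output. The status code is two letters: U/D (up/down) + N/L/J/M
# (normal/leaving/joining/moving). Sample output is hand-written here.

SAMPLE = """\
Datacenter: dc1
===============
--  Address    Load     Tokens  Owns  Host ID                               Rack
UN  10.0.0.1   1.1 GiB  256     33%   11111111-1111-1111-1111-111111111111  r1
DN  10.0.0.2   1.2 GiB  256     33%   22222222-2222-2222-2222-222222222222  r1
UN  10.0.0.3   1.0 GiB  256     34%   33333333-3333-3333-3333-333333333333  r1
"""

def problem_nodes(status_output):
    problems = []
    for line in status_output.splitlines():
        parts = line.split()
        if parts and len(parts[0]) == 2 and parts[0][0] in "UD" and parts[0] != "UN":
            problems.append((parts[0], parts[1]))   # (state, address)
    return problems

print(problem_nodes(SAMPLE))  # [('DN', '10.0.0.2')]
```

Running this against each node's view of the cluster before a repair would catch the "not all endpoints replied" failure earlier, with a clearer message.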


Re: High read rate on hard-disk

2018-01-18 Thread Octavian Rinciog
Hy Alain,

Thank you for your response.

> - Other than the 'lock', Counters perform an implicit read before the write
> operation.

From what I know, there is a counter cache [1] that is used to read
the old values of the counters. According to [2], it is used only for
UPDATE requests.


> I would say what you are seeing is expected with this use case. Also, I have
> never seen a use case where using RF = 1 is good idea (excepted for some
> testing maybe). Be aware this data is weak and can easily be lost (if it's a
> deliberate choice, ignore my comment). On the bright side, you have no
> entropy / consistency issues or need for repairs with RF = 1 :D.

Yes, indeed the RF=1 policy is our choice (basically because we didn't
manage to scale the counter writes very well, and we assumed that we
can lose some data).


[1]https://apache.googlesource.com/cassandra/+/refs/heads/trunk/src/java/org/apache/cassandra/db/CounterMutation.java#193
[2]https://issues.apache.org/jira/browse/CASSANDRA-12500?focusedCommentId=15464023&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-15464023


2018-01-18 12:51 GMT+02:00 Alain RODRIGUEZ :
> Hello Octavian,
>
>>
>>  I have a counter table(RF=1)
>>
>>  SELECT vs UPDATE requests ratio is 0.001. ( Read Count: 3771000, Write
>> Count: 3401236000, in one month)
>
>
>> The problem is that our read rate limit on our hard-disk is always near
>> 30MBps and our write rate limit is near 500KBps.
>
>
> I did not read all your numbers, but here are the internal details you could
> be missing:
>
> - Other than the 'lock', Counters perform an implicit read before the write
> operation. To increment, you need to know about past value. It was true last
> time I used them, I believe there is no real workaround and it's still the
> case today.
> - Writes do not hit the disk synchronously. Instead, they are stored
> in the memtable and flushed only once, sequentially and efficiently. Then
> compaction merges partitions afterwards, asynchronously.
>
> I would say what you are seeing is expected with this use case. Also, I have
> never seen a use case where using RF = 1 is good idea (excepted for some
> testing maybe). Be aware this data is weak and can easily be lost (if it's a
> deliberate choice, ignore my comment). On the bright side, you have no
> entropy / consistency issues or need for repairs with RF = 1 :D.
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2018-01-17 17:40 GMT+00:00 Octavian Rinciog :
>>
>> Hello!
>>
>> I am using Cassandra 3.10, on Ubuntu 14.04 and I have a counter
>> table(RF=1), with the following schema:
>>
>> CREATE TABLE edges (
>> src_id text,
>> src_type text,
>> source text,
>> weight counter,
>> PRIMARY KEY ((src_id, src_type), source)
>> ) WITH
>>compaction = {'class':
>> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
>> 'max_threshold': '32', 'min_threshold': '4'}
>>
>> SELECT vs UPDATE requests ratio is 0.001. ( Read Count: 3771000, Write
>> Count: 3401236000, in one month)
>>
>> We have Counter Cache enabled:
>>
>> Counter Cache  : entries 1018782, size 256 MiB, capacity 256
>> MiB, 2799913189 hits, 3469459479 requests, 0.807 recent hit rate, 7200
>> save period in seconds
>>
>> The problem is that our read rate limit on our hard-disk is always
>> near 30MBps and our write rate limit is near 500KBps.
>>
>> One example of output of "iostat -x" is
>>
>> Device:  rrqm/s  wrqm/s     r/s   w/s    rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
>> sdb        0.06    1.04  263.65  2.04 28832.42  572.53   146.07     0.36   1.35    0.74   81.16   1.27  33.81
>>
>> Also with iotop, we saw that are about 8 threads that each goes around
>> 3MB/s read rate.
>>
>> Total DISK READ :  22.73 M/s | Total DISK WRITE : 494.35 K/s
>> Actual DISK READ:  22.62 M/s | Actual DISK WRITE: 528.57 K/s
>>   TID  PRIO  USERDISK READ>  DISK WRITE  SWAPIN  IOCOMMAND
>> 14793 be/4 cassandra 3.061 M/s0.0010 B/s  0.00 % 93.27 % java
>> -Dcassandra.fd_max_interval_ms=400
>>
>> The output of strace on these threads is :
>>
>> strace -cp 14793
>> Process 14793 attached
>> ^CProcess 14793 detached
>> % time seconds  usecs/call callserrors syscall
>> -- --- --- - - 
>>  99.85   32.118518  57567288256251 futex
>>   0.150.048822   3 15339   write
>>   0.000.00   0 1   rt_sigreturn
>> -- --- --- - - 
>> 100.00   32.167340582628

Re: Best compaction strategy for counters tables

2018-01-18 Thread Octavian Rinciog
Hi Alain,

Thank you for your response.
In my case, the counter table is the main table, holding almost 40% of all data.

Thank you for the recommendation about testing on one node.

2018-01-18 13:02 GMT+02:00 Alain RODRIGUEZ :
> Hello,
>
> I believe there is not a really specifically good strategy for counters.
> Often counter tables size is relatively low (compared to events / raw data).
> So depending on the workload you might want to pick one or the other. Given
> the high number of reads the table will have to face (during reads +
> writes), LCS might be a good choice if there is no better reason to pick
> another strategy. Be aware that LCS have the highest write amplification
> (Data will be written about 8 times on disk through compaction process) but
> should be in a very nice spot for reads, mostly touching one or a few
> SSTables.
>
> In the past I did not care much about the compaction strategy for counters
> as I considered it to be negligible in my case (counters were MB big tables
> out of a few TB for the entire dataset).
>
> You can always pick a strategy you think would work better, and test the
> change on a canary node (use JMX to apply on 1 node only), see how it goes.
> I found the doc for this on Datastax website. I hope this will help:
> https://support.datastax.com/hc/en-us/articles/213370546-Change-CompactionStrategy-and-sub-properties-via-JMX
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2018-01-17 16:14 GMT+00:00 Octavian Rinciog :
>>
>> Hello!
>> I am using Cassandra 3.10.
>> I have a counter table, with the following schema and RF=1
>>
>> CREATE TABLE edges (
>> src_id text,
>> src_type text,
>> source text,
>> weight counter,
>> PRIMARY KEY ((src_id, src_type), source)
>> );
>>
>> The SELECT vs UPDATE requests ratio for this table is 0.1.
>> The READ vs WRITE rate, given by iostat, is 100:1.
>> The counter cache hit rate is 80%, so the hard disk is touched for only
>> 20% of UPDATE requests.
>>
>> I want to ask which compaction strategy is best for this table
>> (SizeTieredCompactionStrategy or
>> LeveledCompactionStrategy).
>>
>> Thank you,
>> --
>> Octavian Rinciog
>>
>>
>



-- 
Octavian Rinciog
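Alain's STCS-vs-LCS tradeoff can be made concrete with a sketch of SizeTiered-style bucketing, simplified from Cassandra's defaults (SSTables within roughly 0.5x–1.5x of a bucket's average size compact together once min_threshold = 4 of them accumulate). It illustrates why a read may touch many SSTables of different sizes under STCS, which is what pushes read-heavy counter tables toward LCS:

```python
# Simplified SizeTiered bucketing: group SSTables whose size falls within
# [low*avg, high*avg] of a bucket's running average. Only buckets reaching
# min_threshold (4 by default) are candidates for compaction.

def stcs_buckets(sizes, low=0.5, high=1.5):
    buckets = []                      # each bucket is a list of sizes
    for size in sorted(sizes):
        for b in buckets:
            avg = sum(b) / len(b)
            if low * avg <= size <= high * avg:
                b.append(size)
                break
        else:
            buckets.append([size])
    return buckets

sizes = [10, 11, 12, 13, 55, 60, 200]         # SSTable sizes, arbitrary units
print(stcs_buckets(sizes))
# -> [[10, 11, 12, 13], [55, 60], [200]]
# Only the first bucket reaches min_threshold=4 and would compact; the large
# SSTables linger, so a partition's data can stay spread across tiers.
```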




Re: Alter composite column

2018-01-18 Thread Alexander Dejanovski
Compact storage only allows one column outside of the primary key, so you'll
definitely need to recreate your table if you want to add columns.

On Thu, 18 Jan 2018 at 12:18, Nicolas Guyomar wrote:

> Well it should be as easy as following this :
> https://docs.datastax.com/en/cql/3.1/cql/cql_using/use_alter_add.html
>
> But I'm worried that your initial requirement was to change the clustering
> key, as Alexander stated, you need to create a new table and transfer your
> data in it
>
> On 18 January 2018 at 12:03, Joel Samuelsson 
> wrote:
>
>> It was indeed created with C* 1.X
>> Do you have any links or otherwise on how I would add the column4? I
>> don't want to risk destroying my data.
>>
>> Best regards,
>> Joel
>>
>> 2018-01-18 11:18 GMT+01:00 Nicolas Guyomar :
>>
>>> Hi Joel,
>>>
>>> You cannot alter a table primary key.
>>>
>>> You can however alter your existing table to only add column4 using
>>> cqlsh and cql, even if this table was created back with C* 1.X for instance
>>>
>>> On 18 January 2018 at 11:14, Joel Samuelsson 
>>> wrote:
>>>
 So to rephrase that in CQL terms I have a table like this:

 CREATE TABLE events (
 key text,
 column1 int,
 column2 int,
 column3 text,
 value text,
 PRIMARY KEY(key, column1, column2, column3)
 ) WITH COMPACT STORAGE

 and I'd like to change it to:
 CREATE TABLE events (
 key text,
 column1 int,
 column2 int,
 column3 text,
 column4 text,
 value text,
 PRIMARY KEY(key, column1, column2, column3, column4)
 ) WITH COMPACT STORAGE

 Is this possible?
 Best regards,
 Joel

 2018-01-12 16:53 GMT+01:00 Joel Samuelsson :

> Hi,
>
> I have an older system (C* 2.1) using Thrift tables on which I want to
> alter a column composite. Right now it looks like (int, int, string) but I
> want it to be (int, int, string, string). Is it possible to do this on a
> live cluster without deleting the old data? Can you point me to some
> documentation about this? I can't seem to find it any more.
>
> Best regards,
> Joel
>


>>>
>>
> --
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
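The "create a new table and transfer your data" migration can be sketched abstractly. Plain dicts stand in for the old and new tables here (names and the filler value are illustrative, not a driver API); the key point is that every old row must be re-keyed with some default for the new clustering column, since existing cells carry no value for it:

```python
# Abstract sketch of re-keying rows from PRIMARY KEY (key, c1, c2, c3) to
# PRIMARY KEY (key, c1, c2, c3, c4). Dicts model the tables; in practice the
# copy would stream through a driver or a bulk-load tool.

old_rows = {
    ("k1", 1, 2, "a"): "v1",
    ("k1", 1, 3, "b"): "v2",
}

DEFAULT_COLUMN4 = ""   # the filler chosen for pre-existing rows is up to you

new_rows = {
    (key, c1, c2, c3, DEFAULT_COLUMN4): value
    for (key, c1, c2, c3), value in old_rows.items()
}

print(len(new_rows) == len(old_rows))   # True: every row carried over
print(sorted(new_rows)[0])              # ('k1', 1, 2, 'a', '')
```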


Re: Is it recommended to enable debug log in production

2018-01-18 Thread Alain RODRIGUEZ
+1 on removing 'DEBUG' level.

I would stay at 'INFO' level, for the same reason, unless you are really
debugging something ;-).

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2018-01-16 19:22 GMT+00:00 Jon Haddad :

> In certain versions (2.2 specifically) I’ve seen a massive performance hit
> from the extra logging in some very specific circumstances.  In the case I
> looked at it was due to the added overhead of reflection.  The issue I
> found was resolved in 3.0 (I think), but I always disable DEBUG logging now
> anyways, just in case.
>
> > On Jan 16, 2018, at 11:01 AM, Jay Zhuang 
> wrote:
> >
> > Hi,
> >
> > Do you guys enable debug log in production? Is it recommended?
> >
> > By default, the cassandra log level is set to debug:
> > https://github.com/apache/cassandra/blob/trunk/conf/logback.xml#L100
> >
> > We’re using 3.0.x, which generates lots of Gossip messages:
> > FailureDetector.java:456 - Ignoring interval time of 2001193771 for /IP
> >
> > Probably we should back port https://github.com/apache/cassandra/commit/9ac01baef5c8f689e96307da9b29314bc0672462
> > Other than that, do you guys see any other issue?
> >
> > Thanks,
> > Jay
> >
>
>
>
>
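An analogous demonstration in Python's logging module (not Cassandra/logback itself, so treat it as an analogy): records below the configured level are rejected before any handler runs, which is the cheap part; the residual cost Jon mentions comes from producing the log call's arguments and, in the 2.2 case he cites, reflection overhead.

```python
# Show that a logger configured at INFO level drops DEBUG records before
# they reach any handler -- the same effect as raising the level in logback.

import logging

counted = []

class CountingHandler(logging.Handler):
    def emit(self, record):
        counted.append(record.levelname)

logger = logging.getLogger("demo")
logger.addHandler(CountingHandler())
logger.setLevel(logging.INFO)                 # the equivalent of dropping DEBUG

for _ in range(3):
    logger.debug("gossip interval chatter")   # filtered out, handler never runs
logger.info("node up")                        # passes the level check

print(counted)  # ['INFO']
```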


Re: Alter composite column

2018-01-18 Thread Joel Samuelsson
Yeah, I want column4 to appear in each cell name (rather than just once)
which I think would be the same as altering the primary key.

2018-01-18 12:18 GMT+01:00 Nicolas Guyomar :

> Well it should be as easy as following this : https://docs.datastax.com/en/cql/3.1/cql/cql_using/use_alter_add.html
>
> But I'm worried that your initial requirement was to change the clustering
> key, as Alexander stated, you need to create a new table and transfer your
> data in it
>
> On 18 January 2018 at 12:03, Joel Samuelsson 
> wrote:
>
>> It was indeed created with C* 1.X
>> Do you have any links or otherwise on how I would add the column4? I
>> don't want to risk destroying my data.
>>
>> Best regards,
>> Joel
>>
>> 2018-01-18 11:18 GMT+01:00 Nicolas Guyomar :
>>
>>> Hi Joel,
>>>
>>> You cannot alter a table primary key.
>>>
>>> You can however alter your existing table to only add column4 using
>>> cqlsh and cql, even if this table was created back with C* 1.X for instance
>>>
>>> On 18 January 2018 at 11:14, Joel Samuelsson 
>>> wrote:
>>>
 So to rephrase that in CQL terms I have a table like this:

 CREATE TABLE events (
 key text,
 column1 int,
 column2 int,
 column3 text,
 value text,
 PRIMARY KEY(key, column1, column2, column3)
 ) WITH COMPACT STORAGE

 and I'd like to change it to:
 CREATE TABLE events (
 key text,
 column1 int,
 column2 int,
 column3 text,
 column4 text,
 value text,
 PRIMARY KEY(key, column1, column2, column3, column4)
 ) WITH COMPACT STORAGE

 Is this possible?
 Best regards,
 Joel

 2018-01-12 16:53 GMT+01:00 Joel Samuelsson :

> Hi,
>
> I have an older system (C* 2.1) using Thrift tables on which I want to
> alter a column composite. Right now it looks like (int, int, string) but I
> want it to be (int, int, string, string). Is it possible to do this on a
> live cluster without deleting the old data? Can you point me to some
> documentation about this? I can't seem to find it any more.
>
> Best regards,
> Joel
>


>>>
>>
>


Re: ALTER default_time_to_live

2018-01-18 Thread Alain RODRIGUEZ
>
> I set  default_time_to_live for existing table. Does it affect existing
> data?


No, it sets a default TTL for future writes (though that is no guarantee, as
it can be overridden in any specific query).

It seems data to be deleted, but after compaction, I don't see any disk
> space freed as expected


Indeed, compaction is what evicts tombstones, yet some conditions must be
respected before tombstones can be removed (for consistency reasons). I
detailed this last year; even though the content is a bit old, the main
principles still hold and the tuning options are still relevant.

*About deletes and tombstones: *
http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html

*tl;dr: *I would give a try to *unchecked_tombstone_compaction: true*.
Maybe also consider using TWCS because of this: "TTL is also ten days on one
table and 100 days on the other." But I really recommend you to understand
how this all works so you can act wisely. My guess can be wrong.

*About TWCS*: http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
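The timeline implied by Vlad's numbers works out as follows (an assumed simplification: a TTL'd cell becomes a tombstone at write_time + ttl and may be purged gc_grace_seconds later; actual space reclamation still requires a compaction to rewrite or drop the SSTables holding it):

```python
# Earliest time a TTL'd cell's tombstone becomes purgeable, in whole days.

DAY = 86400

def purgeable_at(write_time, ttl_seconds, gc_grace_seconds):
    return write_time + ttl_seconds + gc_grace_seconds

# Vlad's numbers: gc_grace of ten days; TTLs of ten days and 100 days.
print(purgeable_at(0, 10 * DAY, 10 * DAY) // DAY)    # day 20 for the 10-day table
print(purgeable_at(0, 100 * DAY, 10 * DAY) // DAY)   # day 110 for the 100-day table
```

So for the 100-day table, nothing written in the last ~110 days is even eligible for purging yet, which alone can explain seeing little space freed.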

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2018-01-18 11:15 GMT+00:00 Vlad :

> Hi,
>
> I set default_time_to_live for an existing table. Does it affect existing
> data? It seems data to be deleted, but after compaction, I don't see any
> disk space freed as expected. The database has data for almost a year, GC
> time is ten days, and the TTL is ten days on one table and 100 days on the
> other.
>
>  Cassandra version 3.11.0
>
> Thanks.
>


Re: Repair giving error

2018-01-18 Thread Alain RODRIGUEZ
Hello,

It looks like a communication issue.

What Cassandra version are you using?
What's the result of 'nodetool status '?
Any schema disagreement 'nodetool describecluster'?
Is port 7000 open and are the nodes communicating with each other? (Ping
does not prove the connection is up, even though it is good to know the
machine is there and running :)).
Any other errors you could see in the logs?

You might also want to consider an open source project my coworkers have
been working on (and are maintaining) called Reaper, which aims at making
repairs more efficient and easier to manage, as repair is one of the
trickiest operations to handle for a Cassandra operator:
http://cassandra-reaper.io/. I did not work on this project directly, but
we have had good feedback and like this tool ourselves.

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com




2018-01-14 7:47 GMT+00:00 Akshit Jain :

> I have a 10 node C* cluster with 4-5 keyspaces.
> I tried to perform nodetool repair one by one for each keyspace.
> For some keyspaces the repair passed but for some it gave this error:
>
> I am not able to figure out what is causing this issue. The replica nodes
> are up and I am able to ping them from this node.
> Any suggestions?
>
> *Error I am getting on incremental repair:*
>
> *[2018-01-10 12:50:14,047] Did not get positive replies from all
> endpoints. List of failed endpoint(s): [a.b.c.d, e.f.g.h]*
>
> -- StackTrace --
> java.lang.RuntimeException: Repair job has failed with the error message:
> [2018-01-10 12:50:14,047] Did not get positive replies from all endpoints.
> List of failed endpoint(s): [a.b.c.d, e.f.g.h]
> at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:115)
> at org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
> at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
> at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
> at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
> at com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
>


Re: Alter composite column

2018-01-18 Thread Nicolas Guyomar
Well, it should be as easy as following this:
https://docs.datastax.com/en/cql/3.1/cql/cql_using/use_alter_add.html

But I'm worried that your initial requirement was to change the clustering
key; as Alexander stated, you need to create a new table and transfer your
data into it.
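For the simple case of adding a regular (non-key) column, the ALTER from the linked page would look like the sketch below, using the 'events' table from the thread. Note that this adds a plain column and does not (and cannot) extend the clustering key; also, depending on the Cassandra version, COMPACT STORAGE tables may restrict adding columns:

```cql
-- Adds column4 as a regular column; rows remain clustered by
-- (column1, column2, column3) only.
ALTER TABLE events ADD column4 text;
```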

On 18 January 2018 at 12:03, Joel Samuelsson 
wrote:

> It was indeed created with C* 1.X
> Do you have any links or otherwise on how I would add the column4? I don't
> want to risk destroying my data.
>
> Best regards,
> Joel
>
> 2018-01-18 11:18 GMT+01:00 Nicolas Guyomar :
>
>> Hi Joel,
>>
>> You cannot alter a table primary key.
>>
>> You can however alter your existing table to only add column4 using cqlsh
>> and cql, even if this table was created back with C* 1.X, for instance
>>
>> On 18 January 2018 at 11:14, Joel Samuelsson 
>> wrote:
>>
>>> So to rephrase that in CQL terms I have a table like this:
>>>
>>> CREATE TABLE events (
>>> key text,
>>> column1 int,
>>> column2 int,
>>> column3 text,
>>> value text,
>>> PRIMARY KEY(key, column1, column2, column3)
>>> ) WITH COMPACT STORAGE
>>>
>>> and I'd like to change it to:
>>> CREATE TABLE events (
>>> key text,
>>> column1 int,
>>> column2 int,
>>> column3 text,
>>> column4 text,
>>> value text,
>>> PRIMARY KEY(key, column1, column2, column3, column4)
>>> ) WITH COMPACT STORAGE
>>>
>>> Is this possible?
>>> Best regards,
>>> Joel
>>>
>>> 2018-01-12 16:53 GMT+01:00 Joel Samuelsson :
>>>
 Hi,

 I have an older system (C* 2.1) using Thrift tables on which I want to
 alter a column composite. Right now it looks like (int, int, string) but I
 want it to be (int, int, string, string). Is it possible to do this on a
 live cluster without deleting the old data? Can you point me to some
 documentation about this? I can't seem to find it any more.

 Best regards,
 Joel

>>>
>>>
>>
>


ALTER default_time_to_live

2018-01-18 Thread Vlad
Hi,
I set default_time_to_live for an existing table. Does it affect existing data?
It seems data gets deleted, but after compaction I don't see any disk space
freed as expected. The database has data for almost a year, GC grace time is
ten days, and TTL is also ten days on one table and 100 days on the other.
 Cassandra version 3.11.0

Thanks.


Re: Alter composite column

2018-01-18 Thread Joel Samuelsson
It was indeed created with C* 1.X
Do you have any links or otherwise on how I would add the column4? I don't
want to risk destroying my data.

Best regards,
Joel

2018-01-18 11:18 GMT+01:00 Nicolas Guyomar :

> Hi Joel,
>
> You cannot alter a table primary key.
>
> You can however alter your existing table to only add column4 using cqlsh
> and cql, even if this table was created back with C* 1.X, for instance
>
> On 18 January 2018 at 11:14, Joel Samuelsson 
> wrote:
>
>> So to rephrase that in CQL terms I have a table like this:
>>
>> CREATE TABLE events (
>> key text,
>> column1 int,
>> column2 int,
>> column3 text,
>> value text,
>> PRIMARY KEY(key, column1, column2, column3)
>> ) WITH COMPACT STORAGE
>>
>> and I'd like to change it to:
>> CREATE TABLE events (
>> key text,
>> column1 int,
>> column2 int,
>> column3 text,
>> column4 text,
>> value text,
>> PRIMARY KEY(key, column1, column2, column3, column4)
>> ) WITH COMPACT STORAGE
>>
>> Is this possible?
>> Best regards,
>> Joel
>>
>> 2018-01-12 16:53 GMT+01:00 Joel Samuelsson :
>>
>>> Hi,
>>>
>>> I have an older system (C* 2.1) using Thrift tables on which I want to
>>> alter a column composite. Right now it looks like (int, int, string) but I
>>> want it to be (int, int, string, string). Is it possible to do this on a
>>> live cluster without deleting the old data? Can you point me to some
>>> documentation about this? I can't seem to find it any more.
>>>
>>> Best regards,
>>> Joel
>>>
>>
>>
>


Re: Best compaction strategy for counters tables

2018-01-18 Thread Alain RODRIGUEZ
Hello,

I believe there is no compaction strategy that is specifically good for
counters. Often, counter tables are relatively small (compared to events /
raw data), so depending on the workload you might want to pick one or the
other. Given the high number of reads the table will have to face (during
both reads and writes), LCS might be a good choice if there is no better
reason to pick another strategy. Be aware that LCS has the highest write
amplification (data will be written about 8 times to disk through the
compaction process) but should put you in a very nice spot for reads,
mostly touching one or a few SSTables.

In the past I did not care much about the compaction strategy for counters,
as I considered it negligible in my case (counter tables were MBs in size
out of a few TB for the entire dataset).

You can always pick a strategy you think would work better, test the change
on a canary node (use JMX to apply it on one node only), and see how it
goes. I found the doc for this on the DataStax website; I hope this will
help:
https://support.datastax.com/hc/en-us/articles/213370546-Change-CompactionStrategy-and-sub-properties-via-JMX
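For comparison with the per-node JMX canary approach linked above, the cluster-wide (schema-level) change for the 'edges' table from the thread would be a plain ALTER TABLE, which affects every node at once:

```cql
-- Cluster-wide switch to LCS; unlike the JMX route, this cannot be
-- tried out on a single canary node first.
ALTER TABLE edges WITH compaction = {'class': 'LeveledCompactionStrategy'};
```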

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2018-01-17 16:14 GMT+00:00 Octavian Rinciog :

> Hello!
> I am using Cassandra 3.10.
> I have a counter table, with the following schema and RF=1
>
> CREATE TABLE edges (
> src_id text,
> src_type text,
> source text,
> weight counter,
> PRIMARY KEY ((src_id, src_type), source)
> );
>
> SELECT vs UPDATE requests ratio for this table is 0.1
> READ vs WRITE rate, given by iostat is 100:1.
> Counter cache hit rate is 80%, so the hard-disk is only touched for 20% of
> UPDATE requests.
>
> I want to ask you which compaction strategy is best for this table
> (SizeTieredCompactionStrategy or
> LeveledCompactionStrategy).
>
> Thank you,
> --
> Octavian Rinciog
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: High read rate on hard-disk

2018-01-18 Thread Alain RODRIGUEZ
Hello Octavian,


> I have a counter table (RF=1)

> SELECT vs UPDATE requests ratio is 0.001. (Read Count: 3771000, Write
> Count: 3401236000, in one month)

> The problem is that our read rate limit on our hard-disk is always near
> 30MBps and our write rate limit is near 500KBps.


I did not read all your numbers, but here are the internal details you
could be missing:

- Other than taking the 'lock', counters perform an implicit read before the
write operation. To increment, you need to know the past value. This was
true last time I used them; I believe there is no real workaround and it's
still the case today.
- Writes do not hit the disk synchronously. Instead, they are stored in the
memtable and only flushed once, sequentially and efficiently. Compactions
then merge partitions afterwards, asynchronously.

I would say what you are seeing is expected with this use case. Also, I
have never seen a use case where using RF = 1 is a good idea (except for
some testing maybe). Be aware this data is fragile and can easily be lost
(if it's a deliberate choice, ignore my comment). On the bright side, you
have no entropy / consistency issues or need for repairs with RF = 1 :D.

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2018-01-17 17:40 GMT+00:00 Octavian Rinciog :

> Hello!
>
> I am using Cassandra 3.10, on Ubuntu 14.04 and I have a counter
> table(RF=1), with the following schema:
>
> CREATE TABLE edges (
> src_id text,
> src_type text,
> source text,
> weight counter,
> PRIMARY KEY ((src_id, src_type), source)
> ) WITH
>compaction = {'class':
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
> 'max_threshold': '32', 'min_threshold': '4'}
>
> SELECT vs UPDATE requests ratio is 0.001. ( Read Count: 3771000, Write
> Count: 3401236000, in one month)
>
> We have Counter Cache enabled:
>
> Counter Cache  : entries 1018782, size 256 MiB, capacity 256
> MiB, 2799913189 hits, 3469459479 requests, 0.807 recent hit rate, 7200
> save period in seconds
>
> The problem is that our read rate limit on our hard-disk is always
> near 30MBps and our write rate limit is near 500KBps.
>
> One example of output of "iostat -x" is
>
> Device:  rrqm/s  wrqm/s     r/s   w/s    rkB/s   wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> sdb        0.06    1.04  263.65  2.04 28832.42  572.53   146.07     0.36  1.35    0.74   81.16  1.27 33.81
>
> Also with iotop, we saw that there are about 8 threads that each read at
> around 3MB/s.
>
> Total DISK READ :  22.73 M/s | Total DISK WRITE : 494.35 K/s
> Actual DISK READ:  22.62 M/s | Actual DISK WRITE: 528.57 K/s
>   TID  PRIO  USERDISK READ>  DISK WRITE  SWAPIN  IOCOMMAND
> 14793 be/4 cassandra 3.061 M/s0.0010 B/s  0.00 % 93.27 % java
> -Dcassandra.fd_max_interval_ms=400
>
> The output of strace on these threads is :
>
> strace -cp 14793
> Process 14793 attached
> ^CProcess 14793 detached
> % time seconds  usecs/call callserrors syscall
> -- --- --- - - 
>  99.85   32.118518  57567288256251 futex
>   0.150.048822   3 15339   write
>   0.000.00   0 1   rt_sigreturn
> -- --- --- - - 
> 100.00   32.167340582628256251 total
>
>
> Despite iotop showing that this thread is reading at 3MB/s, there
> is no read syscall in strace.
>
> I want to ask if actually the futex is responsible for the read rate
> and how can we debug this problem further ?
>
> Btw, there are no compaction tasks in progress and there are no SELECT
> queries in progress.
>
> Also, I know that for each update, a lock is obtained[1]
>
> Thank you,
>
> [1] https://apache.googlesource.com/cassandra/+/refs/heads/trunk/src/java/org/apache/cassandra/db/CounterMutation.java#121
> --
> Octavian Rinciog
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


RE: Cassandra 3.11 fails to start with JDK8u162

2018-01-18 Thread Steinmaurer, Thomas
Sam,

thanks for the confirmation. Going back to u152 then.

Thomas

From: li...@beobal.com [mailto:li...@beobal.com] On Behalf Of Sam Tunnicliffe
Sent: Donnerstag, 18. Jänner 2018 10:16
To: user@cassandra.apache.org
Subject: Re: Cassandra 3.11 fails to start with JDK8u162

This isn't (wasn't) a known issue, but the way that CASSANDRA-10091 was 
implemented using internal JDK classes means it was always possible that a 
minor JVM version change could introduce incompatibilities (CASSANDRA-2967 is 
also relevant).
We did already know that we need to revisit the way this works in 4.0 for JDK9 
support (CASSANDRA-9608), so we should identify a more stable solution & apply 
that to both 3.11 and 4.0.
In the meantime, downgrading to 152 is the only real option.

I've opened https://issues.apache.org/jira/browse/CASSANDRA-14173 for this.

Thanks,
Sam


On 18 January 2018 at 08:43, Nicolas Guyomar 
> wrote:
Thank you Thomas for starting this thread, I'm having exactly the same issue on 
AWS EC2 RHEL-7.4_HVM-20180103-x86_64-2-Hourly2-GP2 (ami-dc13a4a1)  I was 
starting to bang my head on my desk !

So I'll try to downgrade back to 152 then !



On 18 January 2018 at 08:34, Steinmaurer, Thomas 
> 
wrote:
Hello,

after switching from JDK8u152 to JDK8u162, Cassandra fails with the following 
stack trace upon startup.

ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception 
encountered during startup
java.lang.AbstractMethodError: 
org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150) ~[na:1.8.0_162]
at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135) ~[na:1.8.0_162]
at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405) ~[na:1.8.0_162]
at org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104) ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
at org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]

Is this a known issue?


Thanks,
Thomas

The contents of this e-mail are intended for the named addressee only. It 
contains information that may be confidential. Unless you are the named 
addressee or an authorized designee, you may not copy or use it, or disclose it 
to anyone else. If you received it in error please notify us immediately and 
then destroy it. Dynatrace Austria GmbH (registration number FN 91482h) is a 
company registered in Linz whose registered office is at 4040 Linz, Austria, 
Freistädterstraße 313




Re: Alter composite column

2018-01-18 Thread Nicolas Guyomar
Hi Joel,

You cannot alter a table primary key.

You can however alter your existing table to only add column4 using cqlsh
and cql, even if this table was created back with C* 1.X, for instance

On 18 January 2018 at 11:14, Joel Samuelsson 
wrote:

> So to rephrase that in CQL terms I have a table like this:
>
> CREATE TABLE events (
> key text,
> column1 int,
> column2 int,
> column3 text,
> value text,
> PRIMARY KEY(key, column1, column2, column3)
> ) WITH COMPACT STORAGE
>
> and I'd like to change it to:
> CREATE TABLE events (
> key text,
> column1 int,
> column2 int,
> column3 text,
> column4 text,
> value text,
> PRIMARY KEY(key, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE
>
> Is this possible?
> Best regards,
> Joel
>
> 2018-01-12 16:53 GMT+01:00 Joel Samuelsson :
>
>> Hi,
>>
>> I have an older system (C* 2.1) using Thrift tables on which I want to
>> alter a column composite. Right now it looks like (int, int, string) but I
>> want it to be (int, int, string, string). Is it possible to do this on a
>> live cluster without deleting the old data? Can you point me to some
>> documentation about this? I can't seem to find it any more.
>>
>> Best regards,
>> Joel
>>
>
>


Re: Alter composite column

2018-01-18 Thread Alexander Dejanovski
Hi Joel,

Sadly it's not possible to alter the primary key of a table in Cassandra.
That would require to rewrite all data on disk to match the new
partitioning and/or clustering.

You need to create a new table and transfer all data from the old one
programmatically.
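A sketch of what the migration target could look like (the name events_v2 is hypothetical); data would then be copied over programmatically, e.g. with a small driver script or a Spark job:

```cql
-- New table with column4 appended to the clustering key; the old 'events'
-- table stays untouched until all rows have been copied across.
CREATE TABLE events_v2 (
    key text,
    column1 int,
    column2 int,
    column3 text,
    column4 text,
    value text,
    PRIMARY KEY (key, column1, column2, column3, column4)
) WITH COMPACT STORAGE;
```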

Cheers,

Le jeu. 18 janv. 2018 à 11:14, Joel Samuelsson 
a écrit :

> So to rephrase that in CQL terms I have a table like this:
>
> CREATE TABLE events (
> key text,
> column1 int,
> column2 int,
> column3 text,
> value text,
> PRIMARY KEY(key, column1, column2, column3)
> ) WITH COMPACT STORAGE
>
> and I'd like to change it to:
> CREATE TABLE events (
> key text,
> column1 int,
> column2 int,
> column3 text,
> column4 text,
> value text,
> PRIMARY KEY(key, column1, column2, column3, column4)
> ) WITH COMPACT STORAGE
>
> Is this possible?
> Best regards,
> Joel
>
> 2018-01-12 16:53 GMT+01:00 Joel Samuelsson :
>
>> Hi,
>>
>> I have an older system (C* 2.1) using Thrift tables on which I want to
>> alter a column composite. Right now it looks like (int, int, string) but I
>> want it to be (int, int, string, string). Is it possible to do this on a
>> live cluster without deleting the old data? Can you point me to some
>> documentation about this? I can't seem to find it any more.
>>
>> Best regards,
>> Joel
>>
>
> --
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Re: Alter composite column

2018-01-18 Thread Joel Samuelsson
So to rephrase that in CQL terms I have a table like this:

CREATE TABLE events (
key text,
column1 int,
column2 int,
column3 text,
value text,
PRIMARY KEY(key, column1, column2, column3)
) WITH COMPACT STORAGE

and I'd like to change it to:
CREATE TABLE events (
key text,
column1 int,
column2 int,
column3 text,
column4 text,
value text,
PRIMARY KEY(key, column1, column2, column3, column4)
) WITH COMPACT STORAGE

Is this possible?
Best regards,
Joel

2018-01-12 16:53 GMT+01:00 Joel Samuelsson :

> Hi,
>
> I have an older system (C* 2.1) using Thrift tables on which I want to
> alter a column composite. Right now it looks like (int, int, string) but I
> want it to be (int, int, string, string). Is it possible to do this on a
> live cluster without deleting the old data? Can you point me to some
> documentation about this? I can't seem to find it any more.
>
> Best regards,
> Joel
>


Re: Cassandra 3.11 fails to start with JDK8u162

2018-01-18 Thread Sam Tunnicliffe
This isn't (wasn't) a known issue, but the way that CASSANDRA-10091 was
implemented using internal JDK classes means it was always possible that a
minor JVM version change could introduce incompatibilities (CASSANDRA-2967
is also relevant).
We did already know that we need to revisit the way this works in 4.0 for
JDK9 support (CASSANDRA-9608), so we should identify a more stable solution
& apply that to both 3.11 and 4.0.
In the meantime, downgrading to 152 is the only real option.

I've opened https://issues.apache.org/jira/browse/CASSANDRA-14173 for this.

Thanks,
Sam


On 18 January 2018 at 08:43, Nicolas Guyomar 
wrote:

> Thank you Thomas for starting this thread, I'm having exactly the same
> issue on AWS EC2 RHEL-7.4_HVM-20180103-x86_64-2-Hourly2-GP2
> (ami-dc13a4a1)  I was starting to bang my head on my desk !
>
> So I'll try to downgrade back to 152 then !
>
>
>
> On 18 January 2018 at 08:34, Steinmaurer, Thomas <
> thomas.steinmau...@dynatrace.com> wrote:
>
>> Hello,
>>
>>
>>
>> after switching from JDK8u152 to JDK8u162, Cassandra fails with the
>> following stack trace upon startup.
>>
>>
>>
>> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception
>> encountered during startup
>>
>> java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>>
>> at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150) ~[na:1.8.0_162]
>> at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135) ~[na:1.8.0_162]
>> at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405) ~[na:1.8.0_162]
>> at org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104) ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>> at org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>> at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>> at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>> at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>>
>>
>>
>> Is this a known issue?
>>
>>
>>
>>
>>
>> Thanks,
>>
>> Thomas
>>
>>
>> The contents of this e-mail are intended for the named addressee only. It
>> contains information that may be confidential. Unless you are the named
>> addressee or an authorized designee, you may not copy or use it, or
>> disclose it to anyone else. If you received it in error please notify us
>> immediately and then destroy it. Dynatrace Austria GmbH (registration
>> number FN 91482h) is a company registered in Linz whose registered office
>> is at 4040 Linz, Austria, Freistädterstraße 313
>> 
>>
>
>


Re: Cassandra 3.11 fails to start with JDK8u162

2018-01-18 Thread Nicolas Guyomar
Thank you Thomas for starting this thread, I'm having exactly the same
issue on AWS EC2 RHEL-7.4_HVM-20180103-x86_64-2-Hourly2-GP2 (ami-dc13a4a1)
I was starting to bang my head on my desk !

So I'll try to downgrade back to 152 then !



On 18 January 2018 at 08:34, Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:

> Hello,
>
>
>
> after switching from JDK8u152 to JDK8u162, Cassandra fails with the
> following stack trace upon startup.
>
>
>
> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception
> encountered during startup
>
> java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>
> at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150) ~[na:1.8.0_162]
> at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135) ~[na:1.8.0_162]
> at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405) ~[na:1.8.0_162]
> at org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104) ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
> at org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
> at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
> at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
> at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>
>
>
> Is this a known issue?
>
>
>
>
>
> Thanks,
>
> Thomas
>
>
> The contents of this e-mail are intended for the named addressee only. It
> contains information that may be confidential. Unless you are the named
> addressee or an authorized designee, you may not copy or use it, or
> disclose it to anyone else. If you received it in error please notify us
> immediately and then destroy it. Dynatrace Austria GmbH (registration
> number FN 91482h) is a company registered in Linz whose registered office
> is at 4040 Linz, Austria, Freistädterstraße 313
> 
>