[jira] [Updated] (CASSANDRA-14204) Nodetool garbagecollect AssertionError

2018-01-30 Thread Vincent White (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent White updated CASSANDRA-14204:
--
Status: Patch Available  (was: Open)

> Nodetool garbagecollect AssertionError
> --
>
> Key: CASSANDRA-14204
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14204
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vincent White
>Priority: Minor
> Fix For: 3.11.x, 4.x
>
>
> When manually running a garbage collection compaction on a table that contains 
> unrepaired sstables while only_purge_repaired_tombstones is set to true, an 
> assertion error is thrown. This happens because the unrepaired sstables are 
> filtered out in filterSSTables() but are never removed from the compaction 
> transaction.
> ||3.11||trunk||
> |[branch|https://github.com/vincewhite/cassandra/commit/e13c822736edd3df3403c02e8ef90816f158cde2]|[branch|https://github.com/vincewhite/cassandra/commit/cc8828576404e72504d9b334be85f84c90e77aa7]|
> The stacktrace:
> {noformat}
> -- StackTrace --
> java.lang.AssertionError
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.parallelAllSSTableOperation(CompactionManager.java:339)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.performGarbageCollection(CompactionManager.java:476)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.garbageCollect(ColumnFamilyStore.java:1579)
>   at 
> org.apache.cassandra.service.StorageService.garbageCollect(StorageService.java:3069)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
>   at sun.rmi.transport.Transport$1.run(Transport.java:200)
>   at sun.rmi.transport.Transport$1.run(Transport.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
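
For context, a minimal, self-contained toy model (not Cassandra code and not the linked patch; the branches above contain the actual change) of the invariant that the assertion in parallelAllSSTableOperation appears to guard: whatever filterSSTables() drops also has to be removed from the compaction transaction, otherwise leftover sstables trip the assert.

{code}
// Toy model only -- illustrates why filtering sstables without also
// cancelling them from the transaction trips an assertion.
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class GarbageCollectSketch
{
    static class Txn
    {
        final Set<String> originals = new HashSet<>();

        // analogous to removing filtered-out sstables from the transaction
        void cancel(Collection<String> dropped)
        {
            originals.removeAll(dropped);
        }
    }

    public static void main(String[] args)
    {
        Txn txn = new Txn();
        txn.originals.addAll(Arrays.asList("repaired-1", "unrepaired-2"));

        // only_purge_repaired_tombstones = true: keep only repaired sstables
        Set<String> kept = txn.originals.stream()
                                        .filter(s -> s.startsWith("repaired"))
                                        .collect(Collectors.toSet());

        // The reported bug is equivalent to skipping this step: the filtered-out
        // sstable stays in the transaction and the later assertion fails.
        Set<String> dropped = new HashSet<>(txn.originals);
        dropped.removeAll(kept);
        txn.cancel(dropped);

        assert txn.originals.equals(kept) : "filtered sstables must be cancelled from the transaction";
        System.out.println("remaining in transaction: " + txn.originals);
    }
}
{code}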

[jira] [Created] (CASSANDRA-14204) Nodetool garbagecollect AssertionError

2018-01-30 Thread Vincent White (JIRA)
Vincent White created CASSANDRA-14204:
-

 Summary: Nodetool garbagecollect AssertionError
 Key: CASSANDRA-14204
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14204
 Project: Cassandra
  Issue Type: Bug
Reporter: Vincent White
 Fix For: 3.11.x, 4.x


When manually running a garbage collection compaction on a table that contains 
unrepaired sstables while only_purge_repaired_tombstones is set to true, an 
assertion error is thrown. This happens because the unrepaired sstables are 
filtered out in filterSSTables() but are never removed from the compaction 
transaction.
||3.11||trunk||
|[branch|https://github.com/vincewhite/cassandra/commit/e13c822736edd3df3403c02e8ef90816f158cde2]|[branch|https://github.com/vincewhite/cassandra/commit/cc8828576404e72504d9b334be85f84c90e77aa7]|

The stacktrace:
{noformat}
-- StackTrace --
java.lang.AssertionError
at 
org.apache.cassandra.db.compaction.CompactionManager.parallelAllSSTableOperation(CompactionManager.java:339)
at 
org.apache.cassandra.db.compaction.CompactionManager.performGarbageCollection(CompactionManager.java:476)
at 
org.apache.cassandra.db.ColumnFamilyStore.garbageCollect(ColumnFamilyStore.java:1579)
at 
org.apache.cassandra.service.StorageService.garbageCollect(StorageService.java:3069)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


{noformat}






[jira] [Created] (CASSANDRA-14203) unable to run compactions in cassandra 3.9 version

2018-01-30 Thread chinta kiran (JIRA)
chinta kiran created CASSANDRA-14203:


 Summary: unable to run compactions in cassandra 3.9 version
 Key: CASSANDRA-14203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14203
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
Reporter: chinta kiran


Hi team, we are unable to run compaction on this server.

The space details are as follows:

 

WARN  [CompactionExecutor:8279] 2018-01-30 22:02:53,889 CompactionTask.java:91 
- Insufficient space to compact all requested files 
BigTableReader(path='/fs4/cassandra/data/asm_log/event_group-3b5782d08e4411e6842917253f111990/mc-78218-big-Data.db'),
 
BigTableReader(path='/fs3/cassandra/data/asm_log/event_group-3b5782d08e4411e6842917253f111990/mc-78217-big-Data.db')
ERROR [CompactionExecutor:8279] 2018-01-30 22:02:53,890 
CassandraDaemon.java:226 - Exception in thread 
Thread[CompactionExecutor:8279,1,main]
java.lang.RuntimeException: Not enough space for compaction, estimated sstables 
= 1, expected write size = 92227100392
    at 
org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace(CompactionTask.java:310)
 ~[apache-cassandra-3.9.jar:3.9]
    at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:126)
 ~[apache-cassandra-3.9.jar:3.9]
    at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-3.9.jar:3.9]
    at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82)
 ~[apache-cassandra-3.9.jar:3.9]
    at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
 ~[apache-cassandra-3.9.jar:3.9]
    at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
 ~[apache-cassandra-3.9.jar:3.9]
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_152]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_152]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_152]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[na:1.8.0_152]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152]
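
For rough context only (this is not Cassandra's implementation), the failing pre-flight check boils down to comparing the estimated compaction output size against the usable space on the data directories; the 92227100392 bytes in the log is roughly 86 GiB. A hedged sketch of that kind of check:

{code}
// Illustrative sketch of the kind of check behind the error above; the real
// logic lives in CompactionTask.checkAvailableDiskSpace and differs in detail.
import java.io.File;

public class DiskSpaceCheckSketch
{
    public static void main(String[] args)
    {
        long expectedWriteSize = 92_227_100_392L;       // value from the log, ~86 GiB
        File dataDir = new File("/fs4/cassandra/data"); // one of the data directories above

        long usable = dataDir.getUsableSpace();
        if (usable < expectedWriteSize)
            throw new RuntimeException(String.format(
                "Not enough space for compaction, expected write size = %d, usable = %d",
                expectedWriteSize, usable));
    }
}
{code}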






[jira] [Commented] (CASSANDRA-8460) Make it possible to move non-compacting sstables to slow/big storage in DTCS

2018-01-30 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346255#comment-16346255
 ] 

Lerh Chuan Low commented on CASSANDRA-8460:
---

I've tentatively started work on this, and it's turning out to be a much bigger 
code change than I was originally expecting, so I would really love to get some 
feedback from community members who know more (and a review of my initial 
patches).

{{CompactionAwareWriter}}, {{DiskBoundaryManager}}, {{Directories}} and 
{{CompactionStrategyManager}} need to know about archives. I've gone ahead and 
created a new enum, {{DirectoryType}}, which can be either ARCHIVE or STANDARD.
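
For reference, a minimal sketch of that enum as described in this comment (names taken from the comment; the actual branch may differ):

{code}
// Sketch only: the two directory categories introduced above.
public enum DirectoryType
{
    STANDARD,   // regular (hot) data directories
    ARCHIVE     // slow/big archive directories for non-compacting sstables
}
{code}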

{{CompactionAwareWriter}} always calls {{maybeSwitchWriter(DecoratedKey)}} 
before calling {{realAppend}}. This handles the JBOD case: {{maybeSwitchWriter}} 
steers the writer to the right location based on the key, so that keys do not 
overlap across directories. The writer therefore needs to know which 
{{diskBoundaries}} it is actually using; otherwise it cannot differentiate 
between an actual archive disk and an actual JBOD disk.
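
As a toy illustration of that point (not Cassandra's actual code), picking a write location from disk boundaries is essentially a search over sorted per-directory upper-bound tokens, which is why mixing archive boundaries into the same list would send keys to the wrong disk:

{code}
// Toy example: map a key's token to a data directory index via its boundaries.
import java.util.Arrays;

public class BoundaryLookupSketch
{
    // upperBoundTokens[i] is the largest token that directory i owns (sorted ascending)
    static int directoryFor(long keyToken, long[] upperBoundTokens)
    {
        int idx = Arrays.binarySearch(upperBoundTokens, keyToken);
        return idx >= 0 ? idx : -idx - 1; // first boundary >= token
    }

    public static void main(String[] args)
    {
        long[] boundaries = { -3_000_000_000L, 0L, 3_000_000_000L, Long.MAX_VALUE };
        System.out.println(directoryFor(-5_000_000_000L, boundaries)); // 0
        System.out.println(directoryFor(1L, boundaries));              // 2
    }
}
{code}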

It would be wise to re-use the logic in {{diskBoundaries}} to also handle the 
case when the archive directory has been configured as JBOD, so 
{{DiskBoundaryManager}} now also needs to know about archive directories. When 
it tries to {{getWriteableLocations}} or generate disk boundaries, it should be 
able to differentiate between archive and non-archive. 

The same goes for {{CompactionStrategyManager}}. We still need to be able to 
run separate compaction strategy instances in the archive directory to handle 
repairs and streaming (so archived SSTables don't just accumulate 
indefinitely). Here is where I am not sure which way to proceed.

Option 1: 
Have it so that {{ColumnFamilyStore}} still only maintains one CSM and DBM and 
one {{Directories}}. CSM, DBM and {{Directories}} all start knowing about the 
existence of an archive directory; this can either be an extra field, or an 
EnumMap:

{code}
new EnumMap<Directories.DirectoryType, DiskBoundaries>(Directories.DirectoryType.class)
{{
    put(Directories.DirectoryType.STANDARD,
        cfs.getDiskBoundaries(Directories.DirectoryType.STANDARD));
    put(Directories.DirectoryType.ARCHIVE,
        cfs.getDiskBoundaries(Directories.DirectoryType.ARCHIVE));
}}
{code}

My worry here is that some things may subtly break even as I fix up everything 
else that gets logged as errors... The CSM's own internal fields {{repaired}}, 
{{unrepaired}} and {{pendingRepaired}} will also need to become maps, otherwise 
the individual instances will again become confused, unable to differentiate 
between an actual JBOD disk and an archive disk. Some of the APIs (e.g. reload, 
shutdown, enable) will need some smarts about which directory type is wanted 
(in some cases it won't matter), and every consumer of these APIs will also 
need to be updated.

Here's what it looks like in an initial go:
https://github.com/apache/cassandra/compare/trunk...juiceblender:cassandra-8460?expand=1

Option 2:
Have it so that {{ColumnFamilyStore}} keeps 2 CSMs and 2 DBMs, of which the 
archiving equivalents are {{null}} if not applicable/reloaded. In this case 
there's a reasonable level of confidence that each CSM and DBM will just 'do 
the right thing', regardless of whether it's an archive or not. Every call that 
gets the DBM or CSM (and there are a lot that get the CSM) will then need to be 
evaluated and checked.

Here's what it looks like in an initial go:
https://github.com/apache/cassandra/compare/trunk...juiceblender:cassandra-8460-single-csm?expand=1

Both still have work remaining (Scrubber, relocating SSTables, what happens 
when archiving is turned off, etc.), but before I continue down this track I'm 
just wondering if anyone can point out which way is better, or whether this is 
all misguided. And in the event these are the changes that need to happen (I 
can't seem to find a way for just TWCS to be aware that there's an archive 
directory; CFS needs to know as well), is it still worth the complexity 
introduced?

[~pavel.trukhanov] Re "Why can't we simply allow a CS instance to spread across 
two disks - SSD
and corresponding archival HDD" -> I think in this case you're back in the 
situation where you can have data resurrected. You can have other replicas 
compact away tombstones (because the CS can see both directories) and then have 
your last remaining replica, before it manages to, get its SSD with the 
tombstone corrupted. Upon replacing the SSD with a new one and issuing repair, 
the tombstone is resurrected. Of course, this can be mitigated by making it 
clear to operators that every time there's a corrupt disk, every single disk 
needs to be replaced. 

Even if we did so, there would still be large code changes to make CSM and DBM 
able to differentiate

[jira] [Updated] (CASSANDRA-14202) Assertion error on sstable open during startup should invoke disk failure policy

2018-01-30 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14202:

Reviewer: Blake Eggleston

> Assertion error on sstable open during startup should invoke disk failure 
> policy
> 
>
> Key: CASSANDRA-14202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14202
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We should catch all exceptions when opening sstables on startup and invoke 
> the disk failure policy
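
A minimal, self-contained sketch of the behaviour being proposed, assuming a pluggable policy along the lines of disk_failure_policy (toy names and logic, not the actual patch):

{code}
// Sketch: catch any Throwable (AssertionError included) while opening an
// sstable at startup and let a disk-failure policy decide the outcome.
public class StartupOpenSketch
{
    enum DiskFailurePolicy { IGNORE, STOP, DIE }

    static void openSSTable(String path) // stand-in for the real open
    {
        throw new AssertionError("corrupt component for " + path);
    }

    static void openWithPolicy(String path, DiskFailurePolicy policy)
    {
        try
        {
            openSSTable(path);
        }
        catch (Throwable t)
        {
            if (policy == DiskFailurePolicy.IGNORE)
                System.err.println("Skipping unreadable sstable " + path + ": " + t);
            else
                throw new RuntimeException("disk failure policy " + policy, t);
        }
    }

    public static void main(String[] args)
    {
        openWithPolicy("mc-1-big-Data.db", DiskFailurePolicy.IGNORE);
    }
}
{code}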






[jira] [Updated] (CASSANDRA-14190) Non-disruptive seed node list reload

2018-01-30 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14190:

   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Patch Available)

committed as sha {{bfecdf52054a4da472af22b0c35c5db5f1132bbc}}. Thanks!

> Non-disruptive seed node list reload
> 
>
> Key: CASSANDRA-14190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration, Lifecycle
>Reporter: Samuel Fink
>Assignee: Samuel Fink
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 14190-trunk.patch, 14190-trunk.patch, 14190-trunk.patch
>
>
> Add a mechanism for reloading the Gossiper in-memory seed node IP list 
> without requiring a service restart.
> The Gossiper keeps an in-memory copy of the seed node IP list and uses it 
> during a gossip round to determine if the random node that was gossiped to is 
> a seed node and for picking a seed node to gossip to in maybeGossipToSeed.
> Currently the Gossiper seed node list is only updated when an endpoint is 
> removed, at the start of a shadow round, and on startup. Those scenarios 
> don’t handle the case of seed nodes changing IP addresses (e.g. DHCP lease 
> changes) or additional seed nodes being added to the cluster.
> As described in CASSANDRA-3829 the current way to ensure that all nodes in 
> the cluster have the same seed node list when there has been a change is to 
> do a rolling restart of every node in the cluster. In large clusters rolling 
> restarts can be very complicated to manage and can have performance impacts 
> because the caches get flushed.
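
Conceptually, the reload described above amounts to re-querying the seed provider and atomically swapping the in-memory seed set the gossip round consults. A hedged, self-contained sketch (illustrative only, not the committed patch; see the commit further down for the real change):

{code}
// Sketch only: non-disruptive reload of an in-memory seed list.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SeedReloadSketch
{
    interface SeedProvider { List<String> getSeeds(); }

    private final Set<String> seeds = ConcurrentHashMap.newKeySet();
    private final SeedProvider provider;

    SeedReloadSketch(SeedProvider provider)
    {
        this.provider = provider;
        reloadSeeds();
    }

    // Would be exposed via JMX / nodetool reloadseeds; keeps the old list if the
    // provider returns nothing, so a bad config cannot wipe out all seeds.
    synchronized List<String> reloadSeeds()
    {
        List<String> fresh = provider.getSeeds();
        if (fresh == null || fresh.isEmpty())
            return new ArrayList<>(seeds);
        seeds.clear();
        seeds.addAll(fresh);
        return fresh;
    }

    public static void main(String[] args)
    {
        SeedReloadSketch gossiperSeeds =
            new SeedReloadSketch(() -> Arrays.asList("10.0.0.1", "10.0.0.2"));
        System.out.println(gossiperSeeds.reloadSeeds());
    }
}
{code}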






cassandra git commit: Add nodetool getseeds and reloadseeds commands

2018-01-30 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/trunk 69db2359e -> bfecdf520


Add nodetool getseeds and reloadseeds commands

patch by Samuel Fink; reviewed by jasobrown for CASSANDRA-14190


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bfecdf52
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bfecdf52
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bfecdf52

Branch: refs/heads/trunk
Commit: bfecdf52054a4da472af22b0c35c5db5f1132bbc
Parents: 69db235
Author: Samuel Fink 
Authored: Wed Jan 24 13:28:54 2018 -0500
Committer: Jason Brown 
Committed: Tue Jan 30 16:32:09 2018 -0800

--
 CHANGES.txt |   1 +
 .../cassandra/config/DatabaseDescriptor.java|  10 ++
 src/java/org/apache/cassandra/gms/Gossiper.java |  83 +--
 .../org/apache/cassandra/gms/GossiperMBean.java |   5 +
 .../org/apache/cassandra/tools/NodeProbe.java   |  10 ++
 .../org/apache/cassandra/tools/NodeTool.java|   2 +
 .../cassandra/tools/nodetool/GetSeeds.java  |  44 ++
 .../cassandra/tools/nodetool/ReloadSeeds.java   |  47 +++
 .../org/apache/cassandra/gms/GossiperTest.java  | 140 ++-
 9 files changed, 332 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bfecdf52/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e28ffd9..a2e3654 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Non-disruptive seed node list reload (CASSANDRA-14190)
  * Nodetool tablehistograms to print statics for all the tables 
(CASSANDRA-14185)
  * Migrate dtests to use pytest and python3 (CASSANDRA-14134)
  * Allow storage port to be configurable per node (CASSANDRA-7544)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bfecdf52/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 9012e3a..8e831cf 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1661,6 +1661,16 @@ public class DatabaseDescriptor
 return 
ImmutableSet.builder().addAll(seedProvider.getSeeds()).build();
 }
 
+public static SeedProvider getSeedProvider()
+{
+return seedProvider;
+}
+
+public static void setSeedProvider(SeedProvider newSeedProvider)
+{
+seedProvider = newSeedProvider;
+}
+
 public static InetAddress getListenAddress()
 {
 return listenAddress;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bfecdf52/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index eb6c500..a4e46f2 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -35,6 +35,7 @@ import com.google.common.collect.ImmutableMap;
 import com.google.common.util.concurrent.Uninterruptibles;
 
 import org.apache.cassandra.locator.InetAddressAndPort;
+import org.apache.cassandra.locator.SeedProvider;
 import org.apache.cassandra.utils.CassandraVersion;
 import org.apache.cassandra.utils.Pair;
 import org.slf4j.Logger;
@@ -89,7 +90,7 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 public final static int intervalInMillis = 1000;
 public final static int QUARANTINE_DELAY = StorageService.RING_DELAY * 2;
 private static final Logger logger = 
LoggerFactory.getLogger(Gossiper.class);
-public static final Gossiper instance = new Gossiper();
+public static final Gossiper instance = new Gossiper(true);
 
 // Timestamp to prevent processing any in-flight messages for we've not 
send any SYN yet, see CASSANDRA-12653.
 volatile long firstSynSendAt = 0L;
@@ -199,7 +200,7 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 }
 }
 
-private Gossiper()
+Gossiper(boolean registerJmx)
 {
 // half of QUARATINE_DELAY, to ensure justRemovedEndpoints has enough 
leeway to prevent re-gossip
 fatClientTimeout = (QUARANTINE_DELAY / 2);
@@ -207,14 +208,17 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 FailureDetector.instance.registerFailureDetectionEventListener(this);
 
 // Register this instance with JMX
-try
-{
-MBeanServer mbs = ManagementFactory.getPlatformMBeanServe

[jira] [Comment Edited] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-01-30 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345901#comment-16345901
 ] 

Ariel Weisberg edited comment on CASSANDRA-7544 at 1/30/18 10:15 PM:
-

bq. I see that the protocol version is incremented but there are no edits to 
the native_protocol spec.  Oversight?
The native protocol didn't change per se. The clients use the protocol version 
to select the correct system tables when querying metadata. It's definitely 
something I could include in the spec, although historically the spec is silent 
WRT the contents of Cassandra system tables.

bq. Also, is this the right place to change the default protocol? Shouldn't 
that be a separate discussion?
What is the default protocol? You mean move v5 out of beta? I'm not sure how we 
intended that to work. We don't have trunk releases so what is the expectation 
there from the perspective of clients?

We could put it back in beta and update the code elsewhere that relies on the 
beta functionality being correct (cqlsh, sstableloader, maybe the hadoop stuff) 
to use the beta version.

Then, when we move it out of beta, we have to remove the use-beta flag, 
otherwise we will be releasing utilities that use the next beta version (v6) of 
the protocol. Seems a bit like the tail wagging the dog, but the end result is 
similar.


was (Author: aweisberg):
bq. I see that the protocol version is incremented but there are no edits to 
the native_protocol spec.  Oversight?
The native protocol didn't change per se. The clients use the protocol version 
to select the correct system tables when querying metadata. It's definitely 
something I could include in the spec, although historically the spec is 
incomplete WRT to the contents of Cassandra system tables.

bq. Also, is this the right place to change the default protocol? Shouldn't 
that be a separate discussion?
What is the default protocol? You mean move v5 out of beta? I'm not sure how we 
intended that to work. We don't have trunk releases so what is the expectation 
there from the perspective of clients?

We could put it in back in beta and update the code elsewhere that relies on 
the beta functionality to be correct to use the beta version (cqlsh, 
sstableloader, maybe the hadoop stuff).

Then when we move it out of beta we have to remove the use beta flag otherwise 
we will be releasing utilities that use the next beta version (v6) of the 
protocol. Seems a bit like the tail wagging the dog, but then end result is 
similar.

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kind of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 
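
As a hedged illustration of point (1) in the quoted description only (not the actual patch), parsing a seed entry with an optional port might look like the following; IPv6 bracket handling is omitted for brevity:

{code}
// Sketch: accept seeds as "host:port" with a fallback to a default storage port.
import java.net.InetSocketAddress;

public class SeedEntryParser
{
    static InetSocketAddress parseSeed(String entry, int defaultStoragePort)
    {
        int idx = entry.lastIndexOf(':');
        if (idx < 0)
            return new InetSocketAddress(entry, defaultStoragePort);
        return new InetSocketAddress(entry.substring(0, idx),
                                     Integer.parseInt(entry.substring(idx + 1)));
    }

    public static void main(String[] args)
    {
        System.out.println(parseSeed("10.0.0.1", 7000));       // default port used
        System.out.println(parseSeed("10.0.0.2:7001", 7000));  // explicit port wins
    }
}
{code}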






[jira] [Commented] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-01-30 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345901#comment-16345901
 ] 

Ariel Weisberg commented on CASSANDRA-7544:
---

bq. I see that the protocol version is incremented but there are no edits to 
the native_protocol spec.  Oversight?
The native protocol didn't change per se. The clients use the protocol version 
to select the correct system tables when querying metadata. It's definitely 
something I could include in the spec, although historically the spec is 
incomplete WRT the contents of Cassandra system tables.

bq. Also, is this the right place to change the default protocol? Shouldn't 
that be a separate discussion?
What is the default protocol? You mean move v5 out of beta? I'm not sure how we 
intended that to work. We don't have trunk releases so what is the expectation 
there from the perspective of clients?

We could put it back in beta and update the code elsewhere that relies on the 
beta functionality being correct (cqlsh, sstableloader, maybe the hadoop stuff) 
to use the beta version.

Then, when we move it out of beta, we have to remove the use-beta flag, 
otherwise we will be releasing utilities that use the next beta version (v6) of 
the protocol. Seems a bit like the tail wagging the dog, but the end result is 
similar.

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kind of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 






[jira] [Commented] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-01-30 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345873#comment-16345873
 ] 

Jonathan Ellis commented on CASSANDRA-7544:
---

I see that the protocol version is incremented but there are no edits to the 
native_protocol spec.  Oversight?

Also, is this the right place to change the default protocol?  Shouldn't that 
be a separate discussion?

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kind of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 






[jira] [Commented] (CASSANDRA-7190) Add schema to snapshot manifest

2018-01-30 Thread Abhishek Dharmapurikar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345784#comment-16345784
 ] 

Abhishek Dharmapurikar commented on CASSANDRA-7190:
---

Does the schema.cql work for dropped static columns?
I looked around in the code and confirmed with a test that we do not record 
whether a dropped column was static.

Steps:

a) Create a table that includes a static column and add a few entries to it.
b) Run nodetool flush (creating an mc-1 sstable).
c) Drop the static column.
d) Run nodetool snapshot.

The newly created schema.cql file has this column recorded as a regular column. 
If a schema is created from it and sstableloader is run, it fails with a stream 
error (likely because the server doesn't expect a static column as streamed by 
sstableloader).
Any way around this?

> Add schema to snapshot manifest
> ---
>
> Key: CASSANDRA-7190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Materialized Views, Tools
>Reporter: Jonathan Ellis
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, lhf
> Fix For: 3.0.9, 3.10
>
>
> followup from CASSANDRA-6326






[jira] [Comment Edited] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345738#comment-16345738
 ] 

Paulo Motta edited comment on CASSANDRA-14092 at 1/30/18 8:25 PM:
--

bq.  Moreover, this resurrection is temporary and persists only until the 
SSTable is involved in a compaction. At that point, the expiration date causes 
a purge and the data disappears again. This is definitely not cool and if we do 
fix up the data, it has to stay fixed up.

This was definitely not supposed to happen; I'll take a look at it. Would you 
have an easy repro for this?

Thanks for taking the time to review this [~beobal] and for the comments 
[~jjordan]!


was (Author: pauloricardomg):
bq.  Moreover, this resurrection is temporary and persists only until the 
SSTable is involved in a compaction. At that point, the expiration date causes 
a purge and the data disappears again. This is definitely not cool and if we do 
fix up the data, it has to stay fixed up.

This was definitely not supposed to happen, I'll take a look at it. Would you 
have an easy repro for this?

Thanks for taking the time to review this [~beobal]!

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345738#comment-16345738
 ] 

Paulo Motta commented on CASSANDRA-14092:
-

bq.  Moreover, this resurrection is temporary and persists only until the 
SSTable is involved in a compaction. At that point, the expiration date causes 
a purge and the data disappears again. This is definitely not cool and if we do 
fix up the data, it has to stay fixed up.

This was definitely not supposed to happen; I'll take a look at it. Would you 
have an easy repro for this?

Thanks for taking the time to review this [~beobal]!

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345734#comment-16345734
 ] 

Sam Tunnicliffe commented on CASSANDRA-14092:
-

bq. I think our default should be to cap with warnings.  What is a user going 
to do when they get the error?  They are going to just go change their code to 
pick a new date that is still really big, so I don't see a reason to fail things.

Yes, I imagine that's exactly what they will do, but the user will be fully 
aware that it's happening rather than having their data changed on them. We 
ought to be conservative by default here.

bq. We need a way to let users recover the data that was silently dropped.  I 
could see an argument for a -D to let the data stay dropped, rather than 
recovering it, but I definitely think our default here should be to recover the 
lost data.

I don't agree. It's definitely bad that the data was silently dropped, but like 
I said, we (the db) have no way of knowing what (application) decisions have 
been taken based on the visible state of the database. I think it's pretty 
clear that if the data appeared to be gone at any point (and we have no good 
reason to assume that the client has not observed such a state), it must stay 
gone.

bq. Does this actually happen?

Yes, of course I tested it.

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345733#comment-16345733
 ] 

Paulo Motta commented on CASSANDRA-14092:
-

bq. Whether we should allow this resurrection is IMO highly questionable, not 
knowing what decisions have been taken outside of the database based on that 
data not being present/visible. My preference would be for gone data to stay 
gone, with another -D flag to turn on post-insert capping of the expiration 
date.

I don't expect us to take longer than a few weeks (at worst a couple of months) 
to come up with a permanent solution for this; our goal here is to prevent 
users from inadvertently losing data in the time frame where the permanent fix 
is not available. People who detect that data has gone missing without their 
having issued a deletion will be aware of these issues and will likely take 
additional measures to remediate it - like reducing the TTL or updating their 
application to correctly handle the data in the affected time period. I 
personally find it highly unlikely that this scenario will be a problem in 
practice. As long as we properly document and communicate the problem in 
NEWS.txt, users that were affected and care about their data will likely take 
additional measures to recover their state.

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.






[jira] [Comment Edited] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345706#comment-16345706
 ] 

Paulo Motta edited comment on CASSANDRA-14092 at 1/30/18 8:02 PM:
--

bq. I have serious concerns about the default action here being to fix up the 
data. IMO, the default action should be to reject client requests which attempt 
to set a TTL that's going to push expiration/deletion time past the threshold. 
As mentioned on the dev list there might be clients out there that can't easily 
tolerate that, so my suggestion would be that we enable the capping of the 
expiration date (at insert/update time only and with a client + log warning) by 
means of a -D flag. All of which is definitely annoying and ugly, but probably 
not too controversial.

My reasoning for this is that once we fix the issue permanently, the idea is to 
restore/recompute the correct localExpirationTime via (timestamp/1000 + ttl). 
This is obviously not perfect as the timestamp could be provided by the client, 
so there could be some slight variance here, but if someone is setting a TTL 20 
years in advance, they can probably tolerate a few seconds or even minutes of 
difference in the expiration time. I don't think the case where the client is 
using a different timestamp format plus an overflowing TTL is realistic enough 
that it will create problems, but we can also protect against this and perhaps 
provide an option to opt out of the fix-by-default behavior if necessary.


was (Author: pauloricardomg):
bq. I have serious concerns about the default action here being to fix up the 
data. IMO, the default action should be to reject client requests which attempt 
to set a TTL that's going to push expiration/deletion time past the threshold. 
As mentioned on the dev list there might be clients out there that can't easily 
tolerate that, so my suggestion would be that we enable the capping of the 
expiration date (at insert/update time only and with a client + log warning) by 
means of a -D flag. All of which is definitely annoying and ugly, but probably 
not too controversial.

My reasoning for this is that once we fix the issue permanently, the idea is to 
restore/recompute the correct localDeletionTime via (timestamp/1000 + ttl). 
This is obviously not perfect as the timestamp could be provided by the client, 
so there could be some slight variance here, but if someone is setting a TTL 20 
years in advance, I think it is able to tolerate a few seconds or even minutes 
of difference in the expiration time. I don't think the case where the client 
is using a different timestamp format plus overflowing TTL is realistic enough 
that it will create problems, but we can also protect against this and perhaps 
provide and option to opt out of fix by default behavior if necessary.

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345706#comment-16345706
 ] 

Paulo Motta commented on CASSANDRA-14092:
-

bq. I have serious concerns about the default action here being to fix up the 
data. IMO, the default action should be to reject client requests which attempt 
to set a TTL that's going to push expiration/deletion time past the threshold. 
As mentioned on the dev list there might be clients out there that can't easily 
tolerate that, so my suggestion would be that we enable the capping of the 
expiration date (at insert/update time only and with a client + log warning) by 
means of a -D flag. All of which is definitely annoying and ugly, but probably 
not too controversial.

My reasoning for this is that once we fix the issue permanently, the idea is to 
restore/recompute the correct localDeletionTime via (timestamp/1000 + ttl). 
This is obviously not perfect as the timestamp could be provided by the client, 
so there could be some slight variance here, but if someone is setting a TTL 20 
years in advance, they can probably tolerate a few seconds or even minutes of 
difference in the expiration time. I don't think the case where the client is 
using a different timestamp format plus an overflowing TTL is realistic enough 
that it will create problems, but we can also protect against this and perhaps 
provide an option to opt out of the fix-by-default behavior if necessary.
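
To make the arithmetic concrete, here is a small self-contained sketch of the overflow and of the cap-at-the-limit behaviour being discussed in this thread (illustrative only, not the patch):

{code}
// localDeletionTime is an int holding seconds since the epoch, so
// now + ~20 years already exceeds Integer.MAX_VALUE (2038-01-19).
public class TtlOverflowSketch
{
    static final int MAX_TTL_SECONDS = 20 * 365 * 24 * 3600; // 20-year cap from CASSANDRA-4771

    static int cappedLocalDeletionTime(long nowMillis, int ttlSeconds)
    {
        long deletion = nowMillis / 1000 + ttlSeconds;
        // the interim "cap with warning" option discussed above
        return deletion > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) deletion;
    }

    public static void main(String[] args)
    {
        long now = System.currentTimeMillis();
        System.out.println("uncapped seconds: " + (now / 1000 + MAX_TTL_SECONDS)); // > 2147483647 as of early 2018
        System.out.println("capped:           " + cappedLocalDeletionTime(now, MAX_TTL_SECONDS));
    }
}
{code}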

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345668#comment-16345668
 ] 

Jeremiah Jordan commented on CASSANDRA-14092:
-

{quote}I have serious concerns about the default action here being to fix up 
the data. IMO, the default action should be to reject client requests which 
attempt to set a TTL that's going to push expiration/deletion time past the 
threshold. As mentioned on the dev list there might be clients out there that 
can't easily tolerate that, so my suggestion would be that we enable the 
capping of the expiration date (at insert/update time only and with a client + 
log warning) by means of a -D flag. All of which is definitely annoying and 
ugly, but probably not too controversial.
{quote}
I think our default should be to cap with warnings.  What is a user going to do 
when they get the error?  They are going to just go change their code to pick a 
new date that is still really big, so I don't see a reason to fail things.
{quote}Whether we should allow this resurrection is IMO highly questionable, 
not knowing what decisions have been taken outside of the database based on 
that data not being present/visible.
{quote}
We need a way to let users recover the data that was silently dropped.  I could 
see an argument for a -D to let the data stay dropped, rather than recovering 
it, but I definitely think our default here should be to recover the lost data.
{quote}Moreover, this resurrection is temporary and persists only until the 
SSTable is involved in a compaction. At that point, the expiration date causes 
a purge and the data disappears again. This is definitely not cool and if we do 
fix up the data, it has to stay fixed up.
{quote}
Does this actually happen?  The expiration date will be fixed up when the cell 
is loaded, so on compaction it should be written back out with the new time?  
If this is not what happens then it is an oversight in the patch and should be 
fixed.

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345640#comment-16345640
 ] 

Sam Tunnicliffe commented on CASSANDRA-14092:
-

This is a nasty problem because all the potential solutions are bad. I should 
say that I've only gone in depth on the 3.0 patch so far, but I assume that the 
other versions are functionally equivalent.

I have serious concerns about the default action here being to fix up the data. 
IMO, the default action should be to reject client requests which attempt to 
set a TTL that's going to push expiration/deletion time past the threshold. As 
mentioned on the dev list there might be clients out there that can't easily 
tolerate that, so my suggestion would be that we enable the capping of the 
expiration date (at insert/update time only and with a client + log warning) by 
means of a -D flag. All of which is definitely annoying and ugly, but probably 
not too controversial.

However, there is another issue as the current patch can also lead to 
previously 'gone' data being resurrected without warning or notification. If an 
SSTable contains data with an overflowed expiration, from the client's 
perspective that data is not present. Applying the patch before the data is 
purged fixes up the expiration date, capping it at the limit date and so the 
previously gone data will once again be returned in query results. Whether we 
should allow this resurrection is IMO highly questionable, not knowing what 
decisions have been taken outside of the database based on that data not being 
present/visible. My preference would be for gone data to stay gone, with 
another -D flag to turn on post-insert capping of the expiration date.

Moreover, this resurrection is temporary and persists only until the SSTable is 
involved in a compaction. At that point, the expiration date causes a purge and 
the data disappears again. This is definitely not cool and if we do fix up the 
data, it has to stay fixed up.
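
A rough sketch of the behaviour being proposed (flag name and structure are hypothetical, purely 
to illustrate the reject-by-default / opt-in capping idea, not actual Cassandra code):

{code:java}
// Hypothetical illustration -- flag name and method are invented.
public class TtlPolicyExample
{
    // Opt-in: cap the expiration at insert/update time instead of rejecting the write.
    static final boolean CAP_ON_WRITE =
            Boolean.getBoolean("cassandra.cap_overflowing_expiration_on_write");

    /** @param expirationSeconds now + ttl, computed as a long so it cannot silently overflow */
    static int validateExpiration(long expirationSeconds)
    {
        if (expirationSeconds <= Integer.MAX_VALUE)
            return (int) expirationSeconds;            // fits: nothing to do

        if (CAP_ON_WRITE)
        {
            System.err.println("WARN: capping expiration that exceeds 2038-01-19"); // client + log warning
            return Integer.MAX_VALUE;
        }
        // Default: reject the request rather than silently altering the TTL.
        throw new IllegalArgumentException("Requested TTL overflows localDeletionTime");
    }
}
{code}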


> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13981) Enable Cassandra for Persistent Memory

2018-01-30 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345604#comment-16345604
 ] 

Jason Brown edited comment on CASSANDRA-13981 at 1/30/18 6:59 PM:
--

[~pree] you do not have push privileges to the apache repos; that's for Apache 
committers. If you can just create a branch on your own github account, and 
create a PR (perhaps against the sha you based the work on), that should be 
enough to get started here.

As an arbitrary example, here's [my 
branch|https://github.com/jasobrown/cassandra/tree/13993] for CASSANDRA-13993. 
(Sorry if this is pedantic or obvious)


was (Author: jasobrown):
[~pree] you do not have push privileges to the apache repos; that's for Apache 
committers. If you can just create a branch on your own github account, and 
create a PR (perhaps against the sha you based the work on), that should be 
enough to get started here.

> Enable Cassandra for Persistent Memory 
> ---
>
> Key: CASSANDRA-13981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13981
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Preetika Tyagi
>Assignee: Preetika Tyagi
>Priority: Major
> Fix For: 4.0
>
> Attachments: in-mem-cassandra-1.0.patch, readme.txt
>
>
> Currently, Cassandra relies on disks for data storage and hence it needs data 
> serialization, compaction, bloom filters and partition summary/index for 
> speedy access of the data. However, with persistent memory, data can be 
> stored directly in the form of Java objects and collections, which can 
> greatly simplify the retrieval mechanism of the data. What we are proposing 
> is to make use of faster and scalable B+ tree-based data collections built 
> for persistent memory in Java (PCJ: https://github.com/pmem/pcj) and enable a 
> complete in-memory version of Cassandra, while still keeping the data 
> persistent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13981) Enable Cassandra for Persistent Memory

2018-01-30 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345604#comment-16345604
 ] 

Jason Brown commented on CASSANDRA-13981:
-

[~pree] you do not have push privileges to the apache repos; that's for Apache 
committers. If you can just create a branch on your own github account, and 
create a PR (perhaps against the sha you based the work on), that should be 
enough to get started here.

> Enable Cassandra for Persistent Memory 
> ---
>
> Key: CASSANDRA-13981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13981
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Preetika Tyagi
>Assignee: Preetika Tyagi
>Priority: Major
> Fix For: 4.0
>
> Attachments: in-mem-cassandra-1.0.patch, readme.txt
>
>
> Currently, Cassandra relies on disks for data storage and hence it needs data 
> serialization, compaction, bloom filters and partition summary/index for 
> speedy access of the data. However, with persistent memory, data can be 
> stored directly in the form of Java objects and collections, which can 
> greatly simplify the retrieval mechanism of the data. What we are proposing 
> is to make use of faster and scalable B+ tree-based data collections built 
> for persistent memory in Java (PCJ: https://github.com/pmem/pcj) and enable a 
> complete in-memory version of Cassandra, while still keeping the data 
> persistent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14173) JDK 8u161 breaks JMX integration

2018-01-30 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14173:

Status: Ready to Commit  (was: Patch Available)

> JDK 8u161 breaks JMX integration
> 
>
> Key: CASSANDRA-14173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14173
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.11.2
>
>
> {{org.apache.cassandra.utils.JMXServerUtils}}, which is used to 
> programmatically configure the JMX server and RMI registry (CASSANDRA-2967, 
> CASSANDRA-10091), depends on some JDK internal classes/interfaces. A change to 
> one of these, introduced in Oracle JDK 1.8.0_162, is incompatible, which means 
> we cannot build using that JDK version. Upgrading the JVM on a node running 
> 3.6+ will result in Cassandra being unable to start.
> {noformat}
> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception 
> encountered during startup
> java.lang.AbstractMethodError: 
> org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
>  ~[na:1.8.0_162]
>     at 
> org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
>  ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]{noformat}
> This is also a problem for CASSANDRA-9608, as the internals are completely 
> re-organised in JDK9, so a more stable solution that can be applied to both 
> JDK8 & JDK9 is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14173) JDK 8u161 breaks JMX integration

2018-01-30 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345602#comment-16345602
 ] 

Jason Brown commented on CASSANDRA-14173:
-

bq. It only affects versions above 3.6

wfm - go ahead and commit to 3.11 and trunk.

> JDK 8u161 breaks JMX integration
> 
>
> Key: CASSANDRA-14173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14173
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.11.2
>
>
> {{org.apache.cassandra.utils.JMXServerUtils}}, which is used to 
> programmatically configure the JMX server and RMI registry (CASSANDRA-2967, 
> CASSANDRA-10091), depends on some JDK internal classes/interfaces. A change to 
> one of these, introduced in Oracle JDK 1.8.0_162, is incompatible, which means 
> we cannot build using that JDK version. Upgrading the JVM on a node running 
> 3.6+ will result in Cassandra being unable to start.
> {noformat}
> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception 
> encountered during startup
> java.lang.AbstractMethodError: 
> org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
>  ~[na:1.8.0_162]
>     at 
> org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
>  ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]{noformat}
> This is also a problem for CASSANDRA-9608, as the internals are completely 
> re-organised in JDK9, so a more stable solution that can be applied to both 
> JDK8 & JDK9 is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14201) Add a few options to nodetool verify

2018-01-30 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345593#comment-16345593
 ] 

Jeff Jirsa commented on CASSANDRA-14201:


Maybe related: CASSANDRA-9947


> Add a few options to nodetool verify
> 
>
> Key: CASSANDRA-14201
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14201
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> {{nodetool verify}} currently invokes the disk failure policy when it finds a 
> corrupt sstable - we should add an option to avoid that. It should also have 
> an option to check if all sstables are the latest version to be able to run 
> {{nodetool verify}} as a pre-upgrade check



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-14198) Nodetool command to list out all the connected users

2018-01-30 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa resolved CASSANDRA-14198.

Resolution: Duplicate

> Nodetool command to list out all the connected users
> 
>
> Key: CASSANDRA-14198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14198
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Major
>
> Create a node tool command to figure out all the connected users at a given 
> time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13891) fromJson(null) throws java.lang.NullPointerException on Cassandra

2018-01-30 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345403#comment-16345403
 ] 

Jeff Jirsa commented on CASSANDRA-13891:


{quote}
I have created a CircleCI account via Github authentication, but it looks like 
it tries to build only master and even so it spews an error message. Could 
you point me at links where I can find how to solve those issues?
{quote}

Once you create a new account, you put the circle yml into the new branch, and 
on push, circle should see you push to the new branch and build it.

If the branch was pushed before you made the account, it probably won't try to 
build the new branch. Just re-push to trigger the build.
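
In other words, something like this (branch name is only an example):

{noformat}
# after adding the CircleCI config to your branch
git push origin CASSANDRA-13891-2.2

# if the branch had already been pushed before the CircleCI account existed,
# push a new commit (or amend and force-push) to trigger a build
git commit --allow-empty -m "Trigger CI" && git push origin CASSANDRA-13891-2.2
{noformat}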


> fromJson(null) throws java.lang.NullPointerException on Cassandra
> -
>
> Key: CASSANDRA-13891
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13891
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.11
>Reporter: Marcel Villet
>Assignee: Edward Ribeiro
>Priority: Minor
> Attachments: CASSANDRA-13891.patch
>
>
> Basically, {{fromJson}} throws a {{java.lang.NullPointerException}} when NULL 
> is passed, instead of just returning a NULL itself. Say I create a UDT and a 
> table as follows:
> {code:java}
> create type type1
> (
> id int,
> name text
> );
> create table table1
> (
> id int,
> t FROZEN<type1>,
> primary key (id)
> );{code}
> And then try and insert a row as such:
> {{insert into table1 (id, t) VALUES (1, fromJson(null));}}
> I get the error: {{java.lang.NullPointerException}}
> This works as expected: {{insert into table1 (id, t) VALUES (1, null);}}
> Programmatically, one does not always know when a UDT will be null, hence me 
> expecting {{fromJson}} to just return NULL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13981) Enable Cassandra for Persistent Memory

2018-01-30 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345399#comment-16345399
 ] 

Preetika Tyagi commented on CASSANDRA-13981:


[~jasobrown] Shall I create a branch and push it to the existing Cassandra 
GitHub? I believe it will show up in the list of branches on GitHub: 
[https://github.com/apache/cassandra]. Just wanted to confirm.

 

> Enable Cassandra for Persistent Memory 
> ---
>
> Key: CASSANDRA-13981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13981
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Preetika Tyagi
>Assignee: Preetika Tyagi
>Priority: Major
> Fix For: 4.0
>
> Attachments: in-mem-cassandra-1.0.patch, readme.txt
>
>
> Currently, Cassandra relies on disks for data storage and hence it needs data 
> serialization, compaction, bloom filters and partition summary/index for 
> speedy access of the data. However, with persistent memory, data can be 
> stored directly in the form of Java objects and collections, which can 
> greatly simplify the retrieval mechanism of the data. What we are proposing 
> is to make use of faster and scalable B+ tree-based data collections built 
> for persistent memory in Java (PCJ: https://github.com/pmem/pcj) and enable a 
> complete in-memory version of Cassandra, while still keeping the data 
> persistent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14195) CommitLogSegmentManagerCDCTest is flaky

2018-01-30 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-14195:

   Resolution: Fixed
Fix Version/s: 4.0
   Status: Resolved  (was: Ready to Commit)

[Committed|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=69db2359ee0889cb4a57aec179b9821ff442d26b].
 Thanks Jay!

> CommitLogSegmentManagerCDCTest is flaky
> ---
>
> Key: CASSANDRA-14195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14195
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Ariel Weisberg
>Assignee: Jay Zhuang
>Priority: Minor
> Fix For: 4.0
>
>
> This fails fairly reliably in CircleCI and in a few minutes if you run it in 
> a loop on a MacOS laptop.
> I see two failures.
> {noformat}
> [junit] Testcase: 
> testRetainLinkOnDiscardCDC(org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest):
>Caused an ERROR
> [junit] Rejecting mutation to keyspace cql_test_keyspace. Free up space 
> in build/test/cassandra/cdc_raw:0 by processing CDC logs.
> [junit] org.apache.cassandra.exceptions.CDCWriteException: Rejecting 
> mutation to keyspace cql_test_keyspace. Free up space in 
> build/test/cassandra/cdc_raw:0 by processing CDC logs.
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDC.throwIfForbidden(CommitLogSegmentManagerCDC.java:136)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDC.allocate(CommitLogSegmentManagerCDC.java:108)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:272)
> [junit]   at 
> org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:604)
> [junit]   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:481)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:191)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:196)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:205)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest.testRetainLinkOnDiscardCDC(CommitLogSegmentManagerCDCTest.java:256)
> {noformat}
> and
> {noformat}
> [junit] Testcase: 
> testCompletedFlag(org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest):
> FAILED
> [junit] Index file not written: 
> build/test/cassandra/cdc_raw:0/CommitLog-7-1517005121474_cdc.idx
> [junit] junit.framework.AssertionFailedError: Index file not written: 
> build/test/cassandra/cdc_raw:0/CommitLog-7-1517005121474_cdc.idx
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest.testCompletedFlag(CommitLogSegmentManagerCDCTest.java:210)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14195) CommitLogSegmentManagerCDCTest is flaky

2018-01-30 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-14195:

Status: Ready to Commit  (was: Patch Available)

> CommitLogSegmentManagerCDCTest is flaky
> ---
>
> Key: CASSANDRA-14195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14195
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Ariel Weisberg
>Assignee: Jay Zhuang
>Priority: Minor
>
> This fails fairly reliably in CircleCI and in a few minutes if you run it in 
> a loop on a MacOS laptop.
> I see two failures.
> {noformat}
> [junit] Testcase: 
> testRetainLinkOnDiscardCDC(org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest):
>Caused an ERROR
> [junit] Rejecting mutation to keyspace cql_test_keyspace. Free up space 
> in build/test/cassandra/cdc_raw:0 by processing CDC logs.
> [junit] org.apache.cassandra.exceptions.CDCWriteException: Rejecting 
> mutation to keyspace cql_test_keyspace. Free up space in 
> build/test/cassandra/cdc_raw:0 by processing CDC logs.
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDC.throwIfForbidden(CommitLogSegmentManagerCDC.java:136)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDC.allocate(CommitLogSegmentManagerCDC.java:108)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:272)
> [junit]   at 
> org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:604)
> [junit]   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:481)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:191)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:196)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:205)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest.testRetainLinkOnDiscardCDC(CommitLogSegmentManagerCDCTest.java:256)
> {noformat}
> and
> {noformat}
> [junit] Testcase: 
> testCompletedFlag(org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest):
> FAILED
> [junit] Index file not written: 
> build/test/cassandra/cdc_raw:0/CommitLog-7-1517005121474_cdc.idx
> [junit] junit.framework.AssertionFailedError: Index file not written: 
> build/test/cassandra/cdc_raw:0/CommitLog-7-1517005121474_cdc.idx
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest.testCompletedFlag(CommitLogSegmentManagerCDCTest.java:210)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Reset CDCSpaceInMB after each test

2018-01-30 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7e362e78c -> 69db2359e


Reset CDCSpaceInMB after each test

patch by Jay Zhuang; reviewed by jmckenzie for CASSANDRA-14195


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69db2359
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69db2359
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69db2359

Branch: refs/heads/trunk
Commit: 69db2359ee0889cb4a57aec179b9821ff442d26b
Parents: 7e362e7
Author: Jay Zhuang 
Authored: Sun Jan 28 15:43:32 2018 -0800
Committer: Josh McKenzie 
Committed: Tue Jan 30 10:02:25 2018 -0500

--
 .../db/commitlog/CommitLogSegmentManagerCDCTest.java | 11 +++
 1 file changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69db2359/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDCTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDCTest.java
 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDCTest.java
index d9bf493..8c0647c 100644
--- 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDCTest.java
+++ 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDCTest.java
@@ -188,6 +188,8 @@ public class CommitLogSegmentManagerCDCTest extends 
CQLTester
 {
 createTable("CREATE TABLE %s (idx int, data text, primary key(idx)) 
WITH cdc=true;");
 CommitLogSegment initialSegment = 
CommitLog.instance.segmentManager.allocatingFrom();
+Integer originalCDCSize = DatabaseDescriptor.getCDCSpaceInMB();
+
 DatabaseDescriptor.setCDCSpaceInMB(8);
 try
 {
@@ -202,6 +204,10 @@ public class CommitLogSegmentManagerCDCTest extends 
CQLTester
 {
 // pass. Expected since we'll have a file or two linked on restart 
of CommitLog due to replay
 }
+finally
+{
+DatabaseDescriptor.setCDCSpaceInMB(originalCDCSize);
+}
 
 CommitLog.instance.forceRecycleAllSegments();
 
@@ -275,6 +281,7 @@ public class CommitLogSegmentManagerCDCTest extends 
CQLTester
 {
 // Assert.assertEquals(0, new 
File(DatabaseDescriptor.getCDCLogLocation()).listFiles().length);
 String table_name = createTable("CREATE TABLE %s (idx int, data text, 
primary key(idx)) WITH cdc=true;");
+Integer originalCDCSize = DatabaseDescriptor.getCDCSpaceInMB();
 
 DatabaseDescriptor.setCDCSpaceInMB(8);
 TableMetadata ccfm = 
Keyspace.open(keyspace()).getColumnFamilyStore(table_name).metadata();
@@ -292,6 +299,10 @@ public class CommitLogSegmentManagerCDCTest extends 
CQLTester
 {
 // pass
 }
+finally
+{
+DatabaseDescriptor.setCDCSpaceInMB(originalCDCSize);
+}
 
 CommitLog.instance.sync(true);
 CommitLog.instance.stopUnsafe(false);


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14195) CommitLogSegmentManagerCDCTest is flaky

2018-01-30 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345079#comment-16345079
 ] 

Joshua McKenzie commented on CASSANDRA-14195:
-

Not yet; env is super rusty. I should be able to get to it today.

 

Thankfully it's in a bit of the code that's not changing frequently so I can 
get away with being slow. =/

 

Sorry about that [~jay.zhuang]!

> CommitLogSegmentManagerCDCTest is flaky
> ---
>
> Key: CASSANDRA-14195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14195
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Ariel Weisberg
>Assignee: Jay Zhuang
>Priority: Minor
>
> This fails fairly reliably in CircleCI and in a few minutes if you run it in 
> a loop on a MacOS laptop.
> I see two failures.
> {noformat}
> [junit] Testcase: 
> testRetainLinkOnDiscardCDC(org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest):
>Caused an ERROR
> [junit] Rejecting mutation to keyspace cql_test_keyspace. Free up space 
> in build/test/cassandra/cdc_raw:0 by processing CDC logs.
> [junit] org.apache.cassandra.exceptions.CDCWriteException: Rejecting 
> mutation to keyspace cql_test_keyspace. Free up space in 
> build/test/cassandra/cdc_raw:0 by processing CDC logs.
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDC.throwIfForbidden(CommitLogSegmentManagerCDC.java:136)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDC.allocate(CommitLogSegmentManagerCDC.java:108)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:272)
> [junit]   at 
> org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:604)
> [junit]   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:481)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:191)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:196)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:205)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest.testRetainLinkOnDiscardCDC(CommitLogSegmentManagerCDCTest.java:256)
> {noformat}
> and
> {noformat}
> [junit] Testcase: 
> testCompletedFlag(org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest):
> FAILED
> [junit] Index file not written: 
> build/test/cassandra/cdc_raw:0/CommitLog-7-1517005121474_cdc.idx
> [junit] junit.framework.AssertionFailedError: Index file not written: 
> build/test/cassandra/cdc_raw:0/CommitLog-7-1517005121474_cdc.idx
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest.testCompletedFlag(CommitLogSegmentManagerCDCTest.java:210)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13891) fromJson(null) throws java.lang.NullPointerException on Cassandra

2018-01-30 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345050#comment-16345050
 ] 

Benjamin Lerer commented on CASSANDRA-13891:


{quote}I have never used CircleCI before (the project is using this to build 
and test C* right?), so I am lost about how to build and run the CI on my 2.2 
patched branch.

I have created a CircleCI account via Github authentication, but it looks like 
it tries to build only master and even so it spews an error message. Could 
you point me at links where I can find how to solve those issues?{quote}

Unfortunately, I cannot help you here. I always used our internal CI.

> fromJson(null) throws java.lang.NullPointerException on Cassandra
> -
>
> Key: CASSANDRA-13891
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13891
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.11
>Reporter: Marcel Villet
>Assignee: Edward Ribeiro
>Priority: Minor
> Attachments: CASSANDRA-13891.patch
>
>
> Basically, {{fromJson}} throws a {{java.lang.NullPointerException}} when NULL 
> is passed, instead of just returning a NULL itself. Say I create a UDT and a 
> table as follows:
> {code:java}
> create type type1
> (
> id int,
> name text
> );
> create table table1
> (
> id int,
> t FROZEN<type1>,
> primary key (id)
> );{code}
> And then try and insert a row as such:
> {{insert into table1 (id, t) VALUES (1, fromJson(null));}}
> I get the error: {{java.lang.NullPointerException}}
> This works as expected: {{insert into table1 (id, t) VALUES (1, null);}}
> Programmatically, one does not always know when a UDT will be null, hence me 
> expecting {{fromJson}} to just return NULL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13993) Add optional startup delay to wait until peers are ready

2018-01-30 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345006#comment-16345006
 ] 

Jason Brown commented on CASSANDRA-13993:
-

bq. I think the new/unknown messages should just be ignored at 
MessageDeliveryTask#run()

Even though these are new messages, and we don't have CASSANDRA-13283 in 
pre-4.0, I don't think 3.0/3.11 will fail to deserialize them, as the new 
Ping/Pong messages will get the next cardinal value from the {{Verbs}} enum (in 
4.0), and it looks like we have some "UNUSED_" slots in the enum for safety. 
Thus a 3.11 node could successfully deserialize the {{PingMessage}}, but it 
won't have a {{VerbHandler}} to send back a {{PongMessage}}. This is acceptable 
as the connection will be successfully established (one way, at least), and the 
message won't deserialize incorrectly and thus throw away the connection.

This would only be a transient issue during upgrade to 4.0.

However, I need to test this, but at least the initial code reading seems 
reasonable.
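
To make the "won't have a {{VerbHandler}}" behaviour concrete, a toy sketch (completely 
hypothetical, not the real MessageDeliveryTask) of dropping a verb that has no registered handler:

{code:java}
// Hypothetical, simplified sketch of "just ignore new/unknown messages".
import java.util.HashMap;
import java.util.Map;

public class UnknownVerbDemo
{
    interface VerbHandler { void doVerb(String payload); }

    private final Map<String, VerbHandler> handlers = new HashMap<>();

    void register(String verb, VerbHandler handler) { handlers.put(verb, handler); }

    void deliver(String verb, String payload)
    {
        VerbHandler handler = handlers.get(verb);
        if (handler == null)
        {
            // e.g. a 4.0 PingMessage landing on a 3.11 node: deserialized fine, but no
            // handler exists, so we drop it instead of tearing down the connection.
            System.out.println("No handler for verb " + verb + ", ignoring");
            return;
        }
        handler.doVerb(payload);
    }

    public static void main(String[] args)
    {
        UnknownVerbDemo demo = new UnknownVerbDemo();
        demo.register("MUTATION", p -> System.out.println("applied " + p));
        demo.deliver("MUTATION", "row1");
        demo.deliver("PING", "");   // unknown on this (older) node -> ignored
    }
}
{code}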


> Add optional startup delay to wait until peers are ready
> 
>
> Key: CASSANDRA-13993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13993
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Lifecycle
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
> Fix For: 4.x
>
>
> When bouncing a node in a large cluster, it can take a while to recognize the 
> rest of the cluster as available. This is especially true if using TLS on 
> internode messaging connections. The bouncing node (and any clients connected 
> to it) may see a series of Unavailable or Timeout exceptions until the node 
> is 'warmed up' as connecting to the rest of the cluster is asynchronous from 
> the rest of the startup process.
> There are two aspects that drive a node's ability to successfully communicate 
> with a peer after a bounce:
> - marking the peer as 'alive' (state that is held in gossip). This affects 
> the unavailable exceptions
> - having both open outbound and inbound connections open and ready to each 
> peer. This affects timeouts.
> Details of each of these mechanisms are described in the comments below.
> This ticket proposes adding a mechanism, optional and configurable, to delay 
> opening the client native protocol port until some percentage of the peers in 
> the cluster is marked alive and connected to/from. Thus while we potentially 
> slow down startup (delay opening the client port), we alleviate the chance 
> that queries made by clients don't hit transient unavailable/timeout 
> exceptions.
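
A rough, self-contained sketch of the kind of gate being proposed (all names and thresholds are 
invented for illustration): delay opening the client port until a configured fraction of peers is 
alive and connected, or a timeout expires.

{code:java}
// Hypothetical illustration only -- not the actual implementation.
import java.util.function.IntSupplier;

public class StartupGateExample
{
    /**
     * Block until readyPeers/totalPeers reaches requiredFraction, or the timeout expires.
     * On timeout we proceed anyway: better to open the client port late than never.
     */
    static void awaitPeers(IntSupplier totalPeers, IntSupplier readyPeers,
                           double requiredFraction, long timeoutMillis) throws InterruptedException
    {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline)
        {
            int total = totalPeers.getAsInt();
            int ready = readyPeers.getAsInt();   // peers marked alive with both connections open
            if (total == 0 || (double) ready / total >= requiredFraction)
                return;
            Thread.sleep(1000);
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        // Single-node toy example: 3 peers, pretend 2 are already ready.
        awaitPeers(() -> 3, () -> 2, 0.66, 10_000);
        System.out.println("opening native protocol port");
    }
}
{code}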



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14195) CommitLogSegmentManagerCDCTest is flaky

2018-01-30 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344938#comment-16344938
 ] 

Jason Brown commented on CASSANDRA-14195:
-

Did anyone commit yet? :)

> CommitLogSegmentManagerCDCTest is flaky
> ---
>
> Key: CASSANDRA-14195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14195
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Ariel Weisberg
>Assignee: Jay Zhuang
>Priority: Minor
>
> This fails fairly reliably in CircleCI and in a few minutes if you run it in 
> a loop on a MacOS laptop.
> I see two failures.
> {noformat}
> [junit] Testcase: 
> testRetainLinkOnDiscardCDC(org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest):
>Caused an ERROR
> [junit] Rejecting mutation to keyspace cql_test_keyspace. Free up space 
> in build/test/cassandra/cdc_raw:0 by processing CDC logs.
> [junit] org.apache.cassandra.exceptions.CDCWriteException: Rejecting 
> mutation to keyspace cql_test_keyspace. Free up space in 
> build/test/cassandra/cdc_raw:0 by processing CDC logs.
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDC.throwIfForbidden(CommitLogSegmentManagerCDC.java:136)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDC.allocate(CommitLogSegmentManagerCDC.java:108)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:272)
> [junit]   at 
> org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:604)
> [junit]   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:481)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:191)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:196)
> [junit]   at org.apache.cassandra.db.Mutation.apply(Mutation.java:205)
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest.testRetainLinkOnDiscardCDC(CommitLogSegmentManagerCDCTest.java:256)
> {noformat}
> and
> {noformat}
> [junit] Testcase: 
> testCompletedFlag(org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest):
> FAILED
> [junit] Index file not written: 
> build/test/cassandra/cdc_raw:0/CommitLog-7-1517005121474_cdc.idx
> [junit] junit.framework.AssertionFailedError: Index file not written: 
> build/test/cassandra/cdc_raw:0/CommitLog-7-1517005121474_cdc.idx
> [junit]   at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManagerCDCTest.testCompletedFlag(CommitLogSegmentManagerCDCTest.java:210)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14173) JDK 8u161 breaks JMX integration

2018-01-30 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344916#comment-16344916
 ] 

Sam Tunnicliffe commented on CASSANDRA-14173:
-

It only affects versions above 3.6 

> JDK 8u161 breaks JMX integration
> 
>
> Key: CASSANDRA-14173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14173
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.11.2
>
>
> {{org.apache.cassandra.utils.JMXServerUtils}}, which is used to 
> programmatically configure the JMX server and RMI registry (CASSANDRA-2967, 
> CASSANDRA-10091), depends on some JDK internal classes/interfaces. A change to 
> one of these, introduced in Oracle JDK 1.8.0_162, is incompatible, which means 
> we cannot build using that JDK version. Upgrading the JVM on a node running 
> 3.6+ will result in Cassandra being unable to start.
> {noformat}
> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception 
> encountered during startup
> java.lang.AbstractMethodError: 
> org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
>  ~[na:1.8.0_162]
>     at 
> org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
>  ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]{noformat}
> This is also a problem for CASSANDRA-9608, as the internals are completely 
> re-organised in JDK9, so a more stable solution that can be applied to both 
> JDK8 & JDK9 is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14173) JDK 8u161 breaks JMX integration

2018-01-30 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344904#comment-16344904
 ] 

Jason Brown commented on CASSANDRA-14173:
-

+1. I think we should commit this to 3.0, as well. wdyt?

> JDK 8u161 breaks JMX integration
> 
>
> Key: CASSANDRA-14173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14173
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.11.2
>
>
> {{org.apache.cassandra.utils.JMXServerUtils}}, which is used to 
> programmatically configure the JMX server and RMI registry (CASSANDRA-2967, 
> CASSANDRA-10091), depends on some JDK internal classes/interfaces. A change to 
> one of these, introduced in Oracle JDK 1.8.0_162, is incompatible, which means 
> we cannot build using that JDK version. Upgrading the JVM on a node running 
> 3.6+ will result in Cassandra being unable to start.
> {noformat}
> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception 
> encountered during startup
> java.lang.AbstractMethodError: 
> org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
>  ~[na:1.8.0_162]
>     at 
> org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
>  ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]{noformat}
> This is also a problem for CASSANDRA-9608, as the internals are completely 
> re-organised in JDK9, so a more stable solution that can be applied to both 
> JDK8 & JDK9 is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-01-30 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-14092:

Reviewer: Sam Tunnicliffe

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14198) Nodetool command to list out all the connected users

2018-01-30 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344787#comment-16344787
 ] 

Romain Hardouin commented on CASSANDRA-14198:
-

Duplicate of https://issues.apache.org/jira/browse/CASSANDRA-13665 ?

> Nodetool command to list out all the connected users
> 
>
> Key: CASSANDRA-14198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14198
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Major
>
> Create a node tool command to figure out all the connected users at a given 
> time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14197) SSTable upgrade should be automatic

2018-01-30 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344668#comment-16344668
 ] 

Marcus Eriksson commented on CASSANDRA-14197:
-

[tests|https://circleci.com/gh/krummas/cassandra/225]

> SSTable upgrade should be automatic
> ---
>
> Key: CASSANDRA-14197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14197
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> Upgradesstables should run automatically on node upgrade



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14202) Assertion error on sstable open during startup should invoke disk failure policy

2018-01-30 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344626#comment-16344626
 ] 

Marcus Eriksson edited comment on CASSANDRA-14202 at 1/30/18 8:19 AM:
--

patch 
[here|https://github.com/krummas/cassandra/commits/marcuse/handle_throwable], 
[tests|https://circleci.com/gh/krummas/cassandra/223]


was (Author: krummas):
patch 
[here|https://github.com/krummas/cassandra/commits/marcuse/handle_throwable]

> Assertion error on sstable open during startup should invoke disk failure 
> policy
> 
>
> Key: CASSANDRA-14202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14202
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We should catch all exceptions when opening sstables on startup and invoke 
> the disk failure policy



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14201) Add a few options to nodetool verify

2018-01-30 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344665#comment-16344665
 ] 

Marcus Eriksson commented on CASSANDRA-14201:
-

tests should end up here: https://circleci.com/gh/krummas/cassandra/224

> Add a few options to nodetool verify
> 
>
> Key: CASSANDRA-14201
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14201
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> {{nodetool verify}} currently invokes the disk failure policy when it finds a 
> corrupt sstable - we should add an option to avoid that. It should also have 
> an option to check if all sstables are the latest version to be able to run 
> {{nodetool verify}} as a pre-upgrade check



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14158) [DTEST] [TRUNK] repair_test.py::test_dead_coordinator is flaky due to JMX connection error from nodetool

2018-01-30 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14158:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed, thanks

> [DTEST] [TRUNK] repair_test.py::test_dead_coordinator is flaky due to JMX 
> connection error from nodetool
> 
>
> Key: CASSANDRA-14158
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14158
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Marcus Eriksson
>Priority: Major
>
> repair_test.py::test_dead_coordinator is flaky due to occasionally failing 
> when a JMX connection error is propagated by nodetool.
> The test has failed 4+ times for the same reason.
> The latest failure can be found in the artifacts for the following CircleCI run:
> [https://circleci.com/gh/mkjellman/cassandra/538]
> I *think* that this might be expected behavior for this test and we just need 
> to catch any ToolError exceptions thrown and only fail if the included stack is 
> for any error other than "JMX connection closed."
> {code}
> stderr: error: [2018-01-10 07:07:55,178] JMX connection closed. You should 
> check server log for repair status of keyspace system_traces(Subsequent 
> keyspaces are not going to be repaired).
> -- StackTrace --
> java.io.IOException: [2018-01-10 07:07:55,178] JMX connection closed. You 
> should check server log for repair status of keyspace 
> system_traces(Subsequent keyspaces are not going to be repaired).
>   at 
> org.apache.cassandra.tools.RepairRunner.handleConnectionFailed(RepairRunner.java:104)
>   at 
> org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:86)
>   at 
> javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:275)
>   at 
> javax.management.NotificationBroadcasterSupport$SendNotifJob.run(NotificationBroadcasterSupport.java:352)
>   at 
> javax.management.NotificationBroadcasterSupport$1.execute(NotificationBroadcasterSupport.java:337)
>   at 
> javax.management.NotificationBroadcasterSupport.sendNotification(NotificationBroadcasterSupport.java:248)
>   at 
> javax.management.remote.rmi.RMIConnector.sendNotification(RMIConnector.java:441)
>   at 
> javax.management.remote.rmi.RMIConnector.access$1200(RMIConnector.java:121)
>   at 
> javax.management.remote.rmi.RMIConnector$RMIClientCommunicatorAdmin.gotIOException(RMIConnector.java:1531)
>   at 
> javax.management.remote.rmi.RMIConnector$RMINotifClient.fetchNotifs(RMIConnector.java:1352)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchOneNotif(ClientNotifForwarder.java:655)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchNotifs(ClientNotifForwarder.java:607)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:471)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14158) [DTEST] [TRUNK] repair_test.py::test_dead_coordinator is flaky due to JMX connection error from nodetool

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344650#comment-16344650
 ] 

ASF GitHub Bot commented on CASSANDRA-14158:


Github user asfgit closed the pull request at:

https://github.com/apache/cassandra-dtest/pull/16


> [DTEST] [TRUNK] repair_test.py::test_dead_coordinator is flaky due to JMX 
> connection error from nodetool
> 
>
> Key: CASSANDRA-14158
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14158
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Marcus Eriksson
>Priority: Major
>
> repair_test.py::test_dead_coordinator is flaky due to occasionally failing 
> when a JMX connection error is propagated by nodetool.
> The test has failed 4+ times for the same reason.
> The latest failure can be found in the artifacts for the following CircleCI run:
> [https://circleci.com/gh/mkjellman/cassandra/538]
> I *think* that this might be expected behavior for this test and we just need 
> to catch any ToolError exceptions thrown and only fail if the included stack is 
> for any error other than "JMX connection closed."
> {code}
> stderr: error: [2018-01-10 07:07:55,178] JMX connection closed. You should 
> check server log for repair status of keyspace system_traces(Subsequent 
> keyspaces are not going to be repaired).
> -- StackTrace --
> java.io.IOException: [2018-01-10 07:07:55,178] JMX connection closed. You 
> should check server log for repair status of keyspace 
> system_traces(Subsequent keyspaces are not going to be repaired).
>   at 
> org.apache.cassandra.tools.RepairRunner.handleConnectionFailed(RepairRunner.java:104)
>   at 
> org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:86)
>   at 
> javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:275)
>   at 
> javax.management.NotificationBroadcasterSupport$SendNotifJob.run(NotificationBroadcasterSupport.java:352)
>   at 
> javax.management.NotificationBroadcasterSupport$1.execute(NotificationBroadcasterSupport.java:337)
>   at 
> javax.management.NotificationBroadcasterSupport.sendNotification(NotificationBroadcasterSupport.java:248)
>   at 
> javax.management.remote.rmi.RMIConnector.sendNotification(RMIConnector.java:441)
>   at 
> javax.management.remote.rmi.RMIConnector.access$1200(RMIConnector.java:121)
>   at 
> javax.management.remote.rmi.RMIConnector$RMIClientCommunicatorAdmin.gotIOException(RMIConnector.java:1531)
>   at 
> javax.management.remote.rmi.RMIConnector$RMINotifClient.fetchNotifs(RMIConnector.java:1352)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchOneNotif(ClientNotifForwarder.java:655)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchNotifs(ClientNotifForwarder.java:607)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:471)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
>   at 
> com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra-dtest git commit: Catch and ignore ToolError in test_dead_coordinator in repair_tests.py

2018-01-30 Thread marcuse
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 49b2dda4e -> 39e223fe8


Catch and ignore ToolError in test_dead_coordinator in repair_tests.py

Patch by marcuse; reviewed by Sam Tunnicliffe for CASSANDRA-14158

Closes #16


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/39e223fe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/39e223fe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/39e223fe

Branch: refs/heads/master
Commit: 39e223fe82709e7557f7b2a86763c3ba011bbeac
Parents: 49b2dda
Author: Marcus Eriksson 
Authored: Mon Jan 15 10:45:24 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Jan 30 08:58:57 2018 +0100

--
 repair_tests/repair_test.py | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/39e223fe/repair_tests/repair_test.py
--
diff --git a/repair_tests/repair_test.py b/repair_tests/repair_test.py
index 59910e0..53ab2af 100644
--- a/repair_tests/repair_test.py
+++ b/repair_tests/repair_test.py
@@ -1090,13 +1090,19 @@ class TestRepair(BaseRepairTest):
 cluster.populate(3).start(wait_for_binary_proto=True)
 node1, node2, node3 = cluster.nodelist()
 node1.stress(['write', 'n=100k', '-schema', 'replication(factor=3)', 
'-rate', 'threads=30'])
-if cluster.version() >= "2.2":
-t1 = threading.Thread(target=node1.repair)
-t1.start()
+def run_repair():
+try:
+if cluster.version() >= "2.2":
+node1.repair()
+else:
+node1.nodetool('repair keyspace1 standard1 -inc -par')
+except ToolError:
+debug("got expected exception during repair, ignoring")
+t1 = threading.Thread(target=run_repair)
+t1.start()
+if cluster.version() > "2.2":
 node2.watch_log_for('Validating ValidationRequest', 
filename='debug.log')
 else:
-t1 = threading.Thread(target=node1.nodetool, args=('repair 
keyspace1 standard1 -inc -par',))
-t1.start()
 node1.watch_log_for('requesting merkle trees', 
filename='system.log')
 time.sleep(2)
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org