[jira] [Commented] (CASSANDRA-14212) Back port CASSANDRA-13080 to 3.11.2 (Use new token allocation for non bootstrap case as well)

2018-02-01 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349912#comment-16349912
 ] 

mck commented on CASSANDRA-14212:
-

|| branch || testall || dtest ||
| [cassandra-3.11_13080|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_13080] | [testall|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_13080] | [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/266] |

> Back port CASSANDRA-13080 to 3.11.2 (Use new token allocation for non 
> bootstrap case as well)
> -
>
> Key: CASSANDRA-14212
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14212
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: mck
>Assignee: mck
>Priority: Major
>
> Backport CASSANDRA-13080 to 3.11.x
>  
> The patch applies without conflict to the {{cassandra-3.11}} branch and equally 
> concerns users of Cassandra-3.11.1
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-02-01 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349833#comment-16349833
 ] 

Paulo Motta commented on CASSANDRA-14092:
-

{quote}To clarify, CAP and CAP_NOWARN will cap the expiry but we'll have NO 
INTENTION of ever fixing it in an upgrade?
{quote}
We're not promising to restore the correct TTL in an upgrade. We could do it in 
some cases, but I prefer to leave that decision for later.
{quote}Or would they have to do a scrub to convert anything that got capped to 
its actual TTL?
{quote}
The scrub is just to fix SSTables of affected systems that overflowed from 
2018-01-19T03:14:06+00:00 until the upgrade and were backed up. As I said 
before, we're not making any promises yet about honoring the actual TTL when 
it's capped, but if we were to implement this it would probably be done during 
upgradesstables and not scrub.
{quote}I think it's worth pointing out that REJECT is a ticking time bomb.
{quote}
I agree, but I don't feel strongly about the default, because the policies will 
be clearly specified in big letters in NEWS.txt, which is the document everyone 
should read before upgrading. If you don't want applications in your 
organization to break, just change your policy to CAP.

Do you mind proof-reading the NEWS.txt and checking whether anything is unclear 
or can be improved? Thanks!

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.
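The arithmetic in the quoted description can be sketched as follows. This is a hypothetical helper, not Cassandra code; the stored {{localDeletionTime}} is a signed 32-bit count of seconds since the epoch:

```python
# Sketch of the overflow described above. localDeletionTime is a signed
# 32-bit number of seconds since the epoch, so now + ttl must stay within
# Integer.MAX_VALUE; with the maximum 20-year TTL this first fails in
# early 2018, roughly twenty years before the classic year-2038 rollover.
INT32_MAX = 2**31 - 1
MAX_TTL_SECONDS = 20 * 365 * 24 * 60 * 60  # 630720000 seconds (20 years)

def deletion_time_overflows(now_seconds: int, ttl_seconds: int) -> bool:
    """True when now + ttl cannot be represented as a 32-bit deletion time."""
    return now_seconds + ttl_seconds > INT32_MAX

# The first instant at which the maximum TTL overflows (early 2018):
first_overflow = INT32_MAX - MAX_TTL_SECONDS + 1
assert deletion_time_overflows(first_overflow, MAX_TTL_SECONDS)
assert not deletion_time_overflows(first_overflow - 1, MAX_TTL_SECONDS)
```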






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-02-01 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349824#comment-16349824
 ] 

Paulo Motta commented on CASSANDRA-14092:
-

Thanks for the quick turnaround [~beobal]! See follow-up below:
{quote}The wording of the NEWS.txt entry is good, I do wonder if we should 
maybe place it right at the top of the file rather than just in the 3.0.16 
section for extra emphasis. Any thoughts on that?
{quote}
Good idea. I did this and also updated the text to cover the possibility of 
data loss before this patch and how to fix it with scrub:
{noformat}
MAXIMUM TTL EXPIRATION DATE NOTICE
---

The maximum expiration timestamp that can be represented by the storage engine 
is 2038-01-19T03:14:06+00:00,
which means that inserts with TTL that expire after this date are not currently 
supported.

Prior to 3.0.16 in the 3.0.X series and 3.11.2 in the 3.11 series, there was no 
protection against INSERTS
with TTL expiring after the maximum supported date, causing the expiration time 
field to overflow and the
records to expire immediately. Expired records due to overflow may have been 
removed permanently after a
compaction. The 2.1.X and 2.2.X series are not subject to data loss due to this 
issue if assertions are enabled,
since an AssertionError is thrown during INSERT when the expiration time field 
overflows on these versions.

In practice this issue will affect only users that use very large TTLs, close 
to the maximum allowed value of
630720000 seconds (20 years), starting from 2018-01-19T03:14:06+00:00. As time 
progresses, the maximum supported
TTL will be gradually reduced as the maximum expiration date approaches. 
For instance, a user on an affected
version on 2028-01-19T03:14:06 with a TTL of 10 years will be affected by this 
bug, so we urge users of very
large TTLs to upgrade to a version where this issue is addressed as soon as 
possible.

Potentially affected users should inspect their SSTables and search for 
negative min local deletion times to
detect this issue. SSTables in this state must be backed up immediately, as 
they are subject to data loss
during auto-compactions, and may be recovered by running the sstablescrub tool 
from versions 3.0.16+ and/or 3.11.2+.

The Cassandra project plans to fix this limitation in newer versions, but while 
the fix is not available, operators
can decide which policy to apply when dealing with inserts with TTL exceeding 
the maximum supported expiration date:
  - REJECT: this is the default policy and will reject any requests with 
expiration date timestamp after 2038-01-19T03:14:06+00:00.
  - CAP: any insert with TTL expiring after 2038-01-19T03:14:06+00:00 will 
expire on 2038-01-19T03:14:06+00:00 and the client will receive a warning.
  - CAP_NOWARN: same as previous, except that the client warning will not be 
emitted.

These policies may be specified via the 
-Dcassandra.expiration_date_overflow_policy=POLICY startup option which can be 
set in the jvm.options file.

See CASSANDRA-14092 for more details about this issue.
{noformat}
Please let me know what you think of the updated text. We should also probably 
publish this text (or a subset of it) in the release announcement e-mail.
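For readers skimming the NEWS.txt excerpt, the three policies can be sketched like this. The function and constant names are hypothetical stand-ins, not the server implementation:

```python
# Illustrative behaviour of the three expiration-overflow policies quoted
# above (REJECT / CAP / CAP_NOWARN). Names are hypothetical, not Cassandra's.
MAX_DELETION_TIME = 2**31 - 2  # 2038-01-19T03:14:06+00:00 per the notice

def apply_overflow_policy(policy: str, now_seconds: int, ttl_seconds: int):
    """Return (local_deletion_time, client_warning), or raise for REJECT."""
    expiry = now_seconds + ttl_seconds
    if expiry <= MAX_DELETION_TIME:
        return expiry, None  # no overflow, nothing to do
    if policy == "REJECT":
        raise ValueError("TTL expires after 2038-01-19T03:14:06+00:00")
    warning = None
    if policy == "CAP":
        warning = "Request TTL capped to 2038-01-19T03:14:06+00:00"
    return MAX_DELETION_TIME, warning  # CAP and CAP_NOWARN both cap
```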

While writing the text above, I realized that there is also a remote 
possibility of data loss in 2.1/2.2 if assertions are disabled, but I didn't 
backport the scrub recovery since it was not a straightforward backport and I 
didn't think it was worth the effort right now. We can always do that later if 
necessary; the most important thing right now is to ship the policies. To 
reflect this I updated the 4th paragraph on 2.1 and 2.2 to:
{noformat}
2.1.X / 2.2.X users in the conditions above should not be subject to data loss 
unless assertions are disabled, in which
case the suspect SSTables must be backed up immediately and manually recovered, 
as they are subject to data loss
during auto-compaction.
{noformat}
 
{quote}I also have one piece of feedback on the policies; I don't see any 
benefit in being able to turn off logging of capped expirations (especially 
since we're using NoSpamLogger) but I do I think the client warning is useful.
{quote}
I agree and updated the patch with this suggestion, but at the same time I 
think advanced operators may want to control the periodicity of the logging, so 
I created a property 
{{cassandra.expiration_overflow_warning_interval_minutes=5}} to control this.
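The interval-based suppression that property controls can be sketched as follows. This is a toy sketch with hypothetical names, not NoSpamLogger itself:

```python
import time

# Toy sketch of interval-based warning suppression, in the spirit of the
# cassandra.expiration_overflow_warning_interval_minutes property described
# above: at most one warning is emitted per interval.
class IntervalWarner:
    def __init__(self, interval_minutes: float = 5, clock=time.monotonic):
        self.interval_seconds = interval_minutes * 60
        self.clock = clock            # injectable for testing
        self.last_emitted = None

    def maybe_warn(self, message: str) -> bool:
        """Emit the warning if the interval has elapsed; otherwise drop it."""
        now = self.clock()
        if self.last_emitted is None or now - self.last_emitted >= self.interval_seconds:
            self.last_emitted = now
            print("WARN:", message)
            return True
        return False
```

Injecting the clock makes the rate limit easy to test without sleeping.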
  
{quote}I also noticed that the logging of a parse error/invalid value for the 
policy sysprop is at DEBUG in the current patches, but it might be sensible to 
draw a bit more attention to that if it happens.
{quote}
Agreed, changed the logging to WARN.

I finished the cleanup of the patch and have provided a version for all 
branches. The 2.1 and 2.2 versions are pretty much the same, as are the 
3.0/3.11/trunk versions.

[jira] [Updated] (CASSANDRA-14212) Back port CASSANDRA-13080 to 3.11.2 (Use new token allocation for non bootstrap case as well)

2018-02-01 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14212:

Description: 
Backport CASSANDRA-13080 to 3.11.x

 

The patch applies without conflict to the {{cassandra-3.11}} branch and equally 
concerns users of Cassandra-3.11.1

 

  was:
Backport CASSANDRA-13080 to 3.11.x

 


> Back port CASSANDRA-13080 to 3.11.2 (Use new token allocation for non 
> bootstrap case as well)
> -
>
> Key: CASSANDRA-14212
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14212
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: mck
>Priority: Major
>
> Backport CASSANDRA-13080 to 3.11.x
>  
> The patch applies without conflict to the {{cassandra-3.11}} branch and equally 
> concerns users of Cassandra-3.11.1
>  






[jira] [Assigned] (CASSANDRA-14212) Back port CASSANDRA-13080 to 3.11.2 (Use new token allocation for non bootstrap case as well)

2018-02-01 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck reassigned CASSANDRA-14212:
---

Assignee: mck

> Back port CASSANDRA-13080 to 3.11.2 (Use new token allocation for non 
> bootstrap case as well)
> -
>
> Key: CASSANDRA-14212
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14212
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: mck
>Assignee: mck
>Priority: Major
>
> Backport CASSANDRA-13080 to 3.11.x
>  
> The patch applies without conflict to the {{cassandra-3.11}} branch and equally 
> concerns users of Cassandra-3.11.1
>  






[jira] [Created] (CASSANDRA-14212) Back port CASSANDRA-13080 to 3.11.2 (Use new token allocation for non bootstrap case as well)

2018-02-01 Thread mck (JIRA)
mck created CASSANDRA-14212:
---

 Summary: Back port CASSANDRA-13080 to 3.11.2 (Use new token 
allocation for non bootstrap case as well)
 Key: CASSANDRA-14212
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14212
 Project: Cassandra
  Issue Type: Improvement
Reporter: mck


Backport CASSANDRA-13080 to 3.11.x

 






[jira] [Commented] (CASSANDRA-13665) nodetool clientlist

2018-02-01 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349581#comment-16349581
 ] 

Chris Lohfink commented on CASSANDRA-13665:
---

The method was named incorrectly before: {{addCounter}} in {{ClientMetric}} 
actually created a Gauge, not a Counter. The JMX attribute name will remain 
"Value", so there is no difference to existing tooling.

> nodetool clientlist
> ---
>
> Key: CASSANDRA-13665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13665
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jon Haddad
>Assignee: Chris Lohfink
>Priority: Major
>
> There should exist a nodetool command that lists each client connection. 
> Ideally it would display the following:
>  * host
>  * protocol version
>  * user logged in as
>  * current keyspace
>  * total queries executed
>  * ssl connections






[jira] [Commented] (CASSANDRA-14173) JDK 8u161 breaks JMX integration

2018-02-01 Thread Yogeshkumar More (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349556#comment-16349556
 ] 

Yogeshkumar More commented on CASSANDRA-14173:
--

Hi Sam,

Thanks for the fix.

May I know when it is planned for release?

We are under some pressure to upgrade our Java version to u162.

Thanks,

Yogesh.

 

> JDK 8u161 breaks JMX integration
> 
>
> Key: CASSANDRA-14173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14173
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.11.2
>
>
> {{org.apache.cassandra.utils.JMXServerUtils}}, which is used to 
> programmatically configure the JMX server and RMI registry (CASSANDRA-2967, 
> CASSANDRA-10091), depends on some JDK internal classes/interfaces. A change to 
> one of these, introduced in Oracle JDK 1.8.0_162, is incompatible, which means 
> we cannot build using that JDK version. Upgrading the JVM on a node running 
> 3.6+ will result in Cassandra being unable to start.
> {noformat}
> ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception 
> encountered during startup
> java.lang.AbstractMethodError: 
> org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135)
>  ~[na:1.8.0_162]
>     at 
> javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405)
>  ~[na:1.8.0_162]
>     at 
> org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104)
>  ~[apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>  [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.11.2-SNAPSHOT.jar:3.11.2-SNAPSHOT]{noformat}
> This is also a problem for CASSANDRA-9608, as the internals are completely 
> re-organised in JDK9, so a more stable solution that can be applied to both 
> JDK8 & JDK9 is required.






[jira] [Commented] (CASSANDRA-13665) nodetool clientlist

2018-02-01 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349548#comment-16349548
 ] 

Jason Brown commented on CASSANDRA-13665:
-

Thanks for the patch. Overall it looks fine, but a few things:
 - in {{NativeTransportService}}, you've changed the metric type for 
{{connectedNativeClients}} from a counter to a gauge. Why? Will this break 
existing tooling?

nits:
 - NodeProbe - when you hit the {{default}} case, please log the metric name 
that was not found
 - ClientStats - missing apache license header
 - Server - replace the {{""+conn.getVersion().asInt()}} string conversion with 
{{String.valueOf()}}
 - {{CassandraDaemon}} - {{getNativeTransportService()}} doesn't seem to be 
used anywhere. Should it be deleted?

> nodetool clientlist
> ---
>
> Key: CASSANDRA-13665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13665
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jon Haddad
>Assignee: Chris Lohfink
>Priority: Major
>
> There should exist a nodetool command that lists each client connection. 
> Ideally it would display the following:
>  * host
>  * protocol version
>  * user logged in as
>  * current keyspace
>  * total queries executed
>  * ssl connections






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-02-01 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349529#comment-16349529
 ] 

Kurt Greaves commented on CASSANDRA-14092:
--

To clarify, CAP and CAP_NOWARN will cap the expiry but we'll have NO INTENTION 
of ever fixing it in an upgrade? Or would they have to do a scrub to convert 
anything that got capped to its actual TTL?

I think it's worth pointing out that REJECT is a ticking time bomb. The main 
concern is people who are still running anything <4.0 when their TTLs breach 
2038-01-19 (which could be literally at any time). If the default were CAP with 
a warning, fixed after upgrade, then at least we wouldn't be bound to break 
people's applications in the future, and we'd still have almost 20 years to get 
everyone off these versions without breaking their applications.

> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.






[jira] [Commented] (CASSANDRA-13933) Handle mutateRepaired failure in nodetool verify

2018-02-01 Thread Sumanth Pasupuleti (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349506#comment-16349506
 ] 

Sumanth Pasupuleti commented on CASSANDRA-13933:


[~krummas] Submitted the patch. Please let me know if you have any comments.

Couple of things:
 # Ended up going with an info log instead of a warn log (where I could 
potentially log the exception details). Curious to know your thoughts.
 # Three tests failed; I wasn't sure if they are related, but I'm digging 
through them:

||Class||Name||Status||Type||Time(s)||
|BatchlogTest|testSerialization|Failure|{{junit.framework.AssertionFailedError: expected:<10> but was:<0>}} at {{org.apache.cassandra.batchlog.BatchlogTest.testSerialization(BatchlogTest.java:95)}}|0.024|
|CommitLogDescriptorTest|testVersions|Failure|{{junit.framework.AssertionFailedError: expected:<11> but was:<12>}} at {{org.apache.cassandra.db.commitlog.CommitLogDescriptorTest.testVersions(CommitLogDescriptorTest.java:84)}}|0.284|
|ShadowRoundTest|testDelayedResponse|Failure|{{junit.framework.AssertionFailedError: expected: but was:}} at {{org.apache.cassandra.gms.ShadowRoundTest.testDelayedResponse(ShadowRoundTest.java:106)}}| |

> Handle mutateRepaired failure in nodetool verify
> 
>
> Key: CASSANDRA-13933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13933
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Sumanth Pasupuleti
>Priority: Major
>  Labels: lhf
> Attachments: CASSANDRA-13933-trunk.txt
>
>
> See comment here: 
> https://issues.apache.org/jira/browse/CASSANDRA-13922?focusedCommentId=16189875=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16189875






[jira] [Commented] (CASSANDRA-11163) Summaries are needlessly rebuilt when the BF FP ratio is changed

2018-02-01 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349501#comment-16349501
 ] 

Kurt Greaves commented on CASSANDRA-11163:
--

Patches for each branch below. So far I've only got a green test run for trunk. 
Failures for 3.0 and 3.11 seem flaky/unrelated, so I'll keep trying.
|[trunk|https://github.com/apache/cassandra/compare/trunk...kgreav:14166-trunk]|[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...kgreav:14166-3.11]|[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...kgreav:14166-3.0]|
|[utests|https://circleci.com/gh/kgreav/cassandra/66]|

This patch also solves CASSANDRA-14166. Basically I've completely stopped 
regeneration of Summaries on startup (if BFFP is changed), and also stopped the 
behaviour implemented in CASSANDRA-5015 which would also regenerate the 
bloomfilters on startup.
There's definitely no reason to regenerate Summaries in this case, and as 
previously mentioned it's not great regenerating the bloomfilter unless you're 
going to persist it. I have added persistence for the bloomfilter (when it is 
regenerated), however I think it's a bad idea to do this on startup as it will 
likely be more time consuming than regenerating the summaries.

If an operator chooses to, these can be regenerated through {{upgradesstables 
-a}}. Admittedly that's not super efficient if you're just updating the 
bloomfilter, but I think it's good enough for the moment; a potential 
follow-up ticket would be to add a nodetool command to regenerate the 
bloomfilter/summaries/index/etc.

The new behaviour would be to:
# Never recreate Summary or bloomfilter when using an offline tool. Note that 
if the summary doesn't exist the tool will still create it in memory, but it 
won't persist it to disk (solving CASSANDRA-14166)
# Only regenerate the summary when it can't be loaded, OR when we've said to 
recreate the bloomfilter. However we only save the summary if we've been 
explicitly told to (which should be always EXCEPT for offline tools).
# Only regenerate and persist the bloomfilter when it's missing - not when it 
has changed. This means we rely on compactions/upgradesstables to update the 
bloomfilter. 
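The three behaviours above can be condensed into a decision sketch. The function and parameter names are hypothetical, not the actual SSTableReader code:

```python
# Sketch of the proposed startup behaviour: when summaries and bloom filters
# are (re)built and when they are persisted to disk.
def open_behaviour(offline_tool: bool, summary_loadable: bool,
                   bloom_filter_present: bool, recreate_bf: bool):
    """Return (build_summary, persist_summary, build_bf, persist_bf)."""
    # A missing summary is always rebuilt in memory; recreate_bf also
    # triggers a rebuild, but only for online (non-tool) opens.
    build_summary = (not summary_loadable) or (recreate_bf and not offline_tool)
    persist_summary = build_summary and not offline_tool  # never from offline tools
    # The bloom filter is rebuilt only when missing, never merely changed,
    # and offline tools never recreate it at all.
    build_bf = (not bloom_filter_present) and not offline_tool
    persist_bf = build_bf and not offline_tool
    return build_summary, persist_summary, build_bf, persist_bf

# Offline tool with a missing summary: built in memory, never persisted.
assert open_behaviour(True, False, True, False) == (True, False, False, False)
```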

I've updated 
{{org.apache.cassandra.io.sstable.SSTableReaderTest#testOpeningSSTable}} to 
hopefully test all of these cases.

> Summaries are needlessly rebuilt when the BF FP ratio is changed
> 
>
> Key: CASSANDRA-11163
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11163
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Kurt Greaves
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> This is from trunk, but I also saw this happen on 2.0:
> Before:
> {noformat}
> root@bw-1:/srv/cassandra# ls -ltr 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/
> total 221460
> drwxr-xr-x 2 root root  4096 Feb 11 23:34 backups
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-6-big-TOC.txt
> -rw-r--r-- 1 root root 26518 Feb 11 23:50 ma-6-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-6-big-Statistics.db
> -rw-r--r-- 1 root root   2607705 Feb 11 23:50 ma-6-big-Index.db
> -rw-r--r-- 1 root root192440 Feb 11 23:50 ma-6-big-Filter.db
> -rw-r--r-- 1 root root10 Feb 11 23:50 ma-6-big-Digest.crc32
> -rw-r--r-- 1 root root  35212125 Feb 11 23:50 ma-6-big-Data.db
> -rw-r--r-- 1 root root  2156 Feb 11 23:50 ma-6-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-7-big-TOC.txt
> -rw-r--r-- 1 root root 26518 Feb 11 23:50 ma-7-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-7-big-Statistics.db
> -rw-r--r-- 1 root root   2607614 Feb 11 23:50 ma-7-big-Index.db
> -rw-r--r-- 1 root root192432 Feb 11 23:50 ma-7-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-7-big-Digest.crc32
> -rw-r--r-- 1 root root  35190400 Feb 11 23:50 ma-7-big-Data.db
> -rw-r--r-- 1 root root  2152 Feb 11 23:50 ma-7-big-CRC.db
> -rw-r--r-- 1 root root80 Feb 11 23:50 ma-5-big-TOC.txt
> -rw-r--r-- 1 root root104178 Feb 11 23:50 ma-5-big-Summary.db
> -rw-r--r-- 1 root root 10264 Feb 11 23:50 ma-5-big-Statistics.db
> -rw-r--r-- 1 root root  10289077 Feb 11 23:50 ma-5-big-Index.db
> -rw-r--r-- 1 root root757384 Feb 11 23:50 ma-5-big-Filter.db
> -rw-r--r-- 1 root root 9 Feb 11 23:50 ma-5-big-Digest.crc32
> -rw-r--r-- 1 root root 139201355 Feb 11 23:50 ma-5-big-Data.db
> -rw-r--r-- 1 root root  8508 Feb 11 23:50 ma-5-big-CRC.db
> root@bw-1:/srv/cassandra# md5sum 
> /var/lib/cassandra/data/keyspace1/standard1-071efdc0d11811e590c3413ee28a6c90/ma-5-big-Summary.db
> 5fca154fc790f7cfa37e8ad6d1c7552c
> {noformat}
> BF ratio changed, node 

[jira] [Updated] (CASSANDRA-14211) Revert ProtocolVersion changes from CASSANDRA-7544

2018-02-01 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14211:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed as 
[da58565ebc717b63fff4f4883559b5daf20cb6fa|https://github.com/apache/cassandra/commit/da58565ebc717b63fff4f4883559b5daf20cb6fa]

> Revert ProtocolVersion changes from CASSANDRA-7544
> --
>
> Key: CASSANDRA-14211
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14211
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>







cassandra git commit: Revert ProtocolVersion changes from CASSANDRA-7544

2018-02-01 Thread aweisberg
Repository: cassandra
Updated Branches:
  refs/heads/trunk 59a4624d5 -> da58565eb


Revert ProtocolVersion changes from CASSANDRA-7544

Patch by Ariel Weisberg; Reviewed by Jason Brown for CASSANDRA-14211


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/da58565e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/da58565e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/da58565e

Branch: refs/heads/trunk
Commit: da58565ebc717b63fff4f4883559b5daf20cb6fa
Parents: 59a4624
Author: Ariel Weisberg 
Authored: Thu Feb 1 12:19:37 2018 -0500
Committer: Ariel Weisberg 
Committed: Thu Feb 1 17:20:00 2018 -0500

--
 CHANGES.txt  |   1 +
 build.xml|   2 +-
 conf/cassandra.yaml  |   7 ---
 ...ssandra-driver-core-3.4.0-SNAPSHOT-shaded.jar | Bin 0 -> 2624086 bytes
 ...ssandra-driver-core-4.0.0-SNAPSHOT-shaded.jar | Bin 2621460 -> 0 bytes
 ...river-internal-only-3.12.0.post0-00f6f77e.zip | Bin 0 -> 265193 bytes
 ...river-internal-only-3.12.0.post0-9ee88ded.zip | Bin 265110 -> 0 bytes
 .../cql3/functions/ScriptBasedUDFunction.java|   4 +++-
 .../cassandra/transport/ProtocolVersion.java |   9 -
 .../cassandra/cql3/PreparedStatementsTest.java   |   1 +
 .../service/ProtocolBetaVersionTest.java |   2 +-
 .../cassandra/transport/ProtocolVersionTest.java |   3 +--
 12 files changed, 12 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/da58565e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 91b3ed8..38cf696 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Revert ProtocolVersion changes from CASSANDRA-7544 (CASSANDRA-14211)
  * Non-disruptive seed node list reload (CASSANDRA-14190)
  * Nodetool tablehistograms to print statics for all the tables 
(CASSANDRA-14185)
  * Migrate dtests to use pytest and python3 (CASSANDRA-14134)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da58565e/build.xml
--
diff --git a/build.xml b/build.xml
index 5796868..e04ce18 100644
--- a/build.xml
+++ b/build.xml
@@ -437,7 +437,7 @@
   
   
  

[jira] [Updated] (CASSANDRA-13933) Handle mutateRepaired failure in nodetool verify

2018-02-01 Thread Sumanth Pasupuleti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumanth Pasupuleti updated CASSANDRA-13933:
---
Status: Patch Available  (was: In Progress)

> Handle mutateRepaired failure in nodetool verify
> 
>
> Key: CASSANDRA-13933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13933
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Sumanth Pasupuleti
>Priority: Major
>  Labels: lhf
> Attachments: CASSANDRA-13933-trunk.txt
>
>
> See comment here: 
> https://issues.apache.org/jira/browse/CASSANDRA-13922?focusedCommentId=16189875=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16189875






[jira] [Updated] (CASSANDRA-13933) Handle mutateRepaired failure in nodetool verify

2018-02-01 Thread Sumanth Pasupuleti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumanth Pasupuleti updated CASSANDRA-13933:
---
Attachment: CASSANDRA-13933-trunk.txt

> Handle mutateRepaired failure in nodetool verify
> 
>
> Key: CASSANDRA-13933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13933
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Sumanth Pasupuleti
>Priority: Major
>  Labels: lhf
> Attachments: CASSANDRA-13933-trunk.txt
>
>
> See comment here: 
> https://issues.apache.org/jira/browse/CASSANDRA-13922?focusedCommentId=16189875=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16189875






[jira] [Commented] (CASSANDRA-14211) Revert ProtocolVersion changes from CASSANDRA-7544

2018-02-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349371#comment-16349371
 ] 

Ariel Weisberg commented on CASSANDRA-14211:


bq. is the new local variable for ThreadGroup in ThreadAwareSecurityManager?

Good catch. That is a mistake.

bq. don't commit your .circleci yaml changes

Thank you for the reminder!

> Revert ProtocolVersion changes from CASSANDRA-7544
> --
>
> Key: CASSANDRA-14211
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14211
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>







[jira] [Updated] (CASSANDRA-14211) Revert ProtocolVersion changes from CASSANDRA-7544

2018-02-01 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14211:

Reviewer: Jason Brown

> Revert ProtocolVersion changes from CASSANDRA-7544
> --
>
> Key: CASSANDRA-14211
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14211
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>







[jira] [Commented] (CASSANDRA-14211) Revert ProtocolVersion changes from CASSANDRA-7544

2018-02-01 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349363#comment-16349363
 ] 

Jason Brown commented on CASSANDRA-14211:
-

+1, with two minor nits to fix on commit:

- is the new local variable for {{ThreadGroup}} in 
[{{ThreadAwareSecurityManager}}|https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-14211?expand=1#diff-30a3dbf7d783cf329b5fb28a8b14332eR262]?
- don't commit your .circleci yaml changes


> Revert ProtocolVersion changes from CASSANDRA-7544
> --
>
> Key: CASSANDRA-14211
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14211
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>







[jira] [Updated] (CASSANDRA-14211) Revert ProtocolVersion changes from CASSANDRA-7544

2018-02-01 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14211:

Status: Ready to Commit  (was: Patch Available)

> Revert ProtocolVersion changes from CASSANDRA-7544
> --
>
> Key: CASSANDRA-14211
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14211
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>







[jira] [Commented] (CASSANDRA-13981) Enable Cassandra for Persistent Memory

2018-02-01 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349334#comment-16349334
 ] 

Jason Brown commented on CASSANDRA-13981:
-

[~pree] good. I'll try to start looking at this next week.

> Enable Cassandra for Persistent Memory 
> ---
>
> Key: CASSANDRA-13981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13981
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Preetika Tyagi
>Assignee: Preetika Tyagi
>Priority: Major
> Fix For: 4.0
>
> Attachments: in-mem-cassandra-1.0.patch, readme.txt
>
>
> Currently, Cassandra relies on disks for data storage and hence it needs data 
> serialization, compaction, bloom filters and partition summary/index for 
> speedy access of the data. However, with persistent memory, data can be 
> stored directly in the form of Java objects and collections, which can 
> greatly simplify the retrieval mechanism of the data. What we are proposing 
> is to make use of faster and scalable B+ tree-based data collections built 
> for persistent memory in Java (PCJ: https://github.com/pmem/pcj) and enable a 
> complete in-memory version of Cassandra, while still keeping the data 
> persistent.






[jira] [Commented] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-02-01 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349213#comment-16349213
 ] 

Adam Holmberg commented on CASSANDRA-7544:
--

{quote}The primary key changed. We don't support schema changes of the primary 
key so there has to be a new table.
{quote}
Gotcha. I was focused on the other attributes and looked right past that.

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kinds of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 
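The seed-address change in item (1) above can be pictured as a cassandra.yaml fragment. This is only a sketch of the idea under the assumption that seeds gain an optional port suffix; it is not the committed syntax, and the addresses are made up:

```yaml
# Hypothetical cassandra.yaml excerpt: seeds listed as IP:port pairs
# instead of bare IPs, so several nodes can share one network interface
# while listening on different storage ports.
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1:7000,10.0.0.1:7001,10.0.0.2:7000"
```

With per-seed ports, two of the three seeds above could run on the same host, which is the shared-hardware scenario the ticket describes.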






[jira] [Commented] (CASSANDRA-14205) ReservedKeywords class is missing some reserved CQL keywords

2018-02-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349207#comment-16349207
 ] 

Andrés de la Peña commented on CASSANDRA-14205:
---

Thanks for the review!

Committed to cassandra-3.11 as 
[6b00767427706124e016e4f471c2266899387163|https://github.com/apache/cassandra/commit/6b00767427706124e016e4f471c2266899387163]
 and merged to trunk.

> ReservedKeywords class is missing some reserved CQL keywords
> 
>
> Key: CASSANDRA-14205
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14205
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Major
> Fix For: 3.11.x, 4.x
>
>
> The CQL keywords {{DEFAULT}}, {{UNSET}}, {{MBEAN}} and {{MBEANS}} (introduced 
> by CASSANDRA-11424 and CASSANDRA-10091) are neither considered [unreserved 
> keywords|https://github.com/apache/cassandra/blob/trunk/src/antlr/Parser.g#L1788-L1846]
>  by the ANTLR parser, nor included in the 
> [{{ReservedKeywords}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/cql3/ReservedKeywords.java]
>  class.
> The current parser behaviour is considering them as reserved keywords, in the 
> sense that they can't be used as keyspace/table/column names, which seems 
> right:
> {code:java}
> cassandra@cqlsh> CREATE KEYSPACE unset WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': 1};
> SyntaxException: line 1:16 no viable alternative at input 'unset' (CREATE 
> KEYSPACE [unset]...)
> {code}
> I think we should keep considering these keywords as reserved and add them to 
> {{ReservedKeywords}} class.






[2/3] cassandra git commit: Add missed DEFAULT, UNSET, MBEAN and MBEANS keywords to `ReservedKeywords`

2018-02-01 Thread adelapena
Add missed DEFAULT, UNSET, MBEAN and MBEANS keywords to `ReservedKeywords`


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b007674
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b007674
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b007674

Branch: refs/heads/trunk
Commit: 6b00767427706124e016e4f471c2266899387163
Parents: b8c12fb
Author: Andrés de la Peña 
Authored: Wed Jan 31 13:53:18 2018 +
Committer: Andrés de la Peña 
Committed: Thu Feb 1 20:22:03 2018 +

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/cql3/ReservedKeywords.java | 6 +-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b007674/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 81b358d..2d7d8f7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.2
+ * Add DEFAULT, UNSET, MBEAN and MBEANS to `ReservedKeywords` (CASSANDRA-14205)
  * Add Unittest for schema migration fix (CASSANDRA-14140)
  * Print correct snitch info from nodetool describecluster (CASSANDRA-13528)
  * Close socket on error during connect on OutboundTcpConnection 
(CASSANDRA-9630)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b007674/src/java/org/apache/cassandra/cql3/ReservedKeywords.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ReservedKeywords.java 
b/src/java/org/apache/cassandra/cql3/ReservedKeywords.java
index ee052a7..30b1a6e 100644
--- a/src/java/org/apache/cassandra/cql3/ReservedKeywords.java
+++ b/src/java/org/apache/cassandra/cql3/ReservedKeywords.java
@@ -85,7 +85,11 @@ class ReservedKeywords
  "NAN",
  "INFINITY",
  "OR",
- "REPLACE" };
+ "REPLACE",
+ "DEFAULT",
+ "UNSET",
+ "MBEAN",
+ "MBEANS"};
 
 private static final Set<String> reservedSet = 
ImmutableSet.copyOf(reservedKeywords);
 





[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-02-01 Thread adelapena
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/59a4624d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/59a4624d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/59a4624d

Branch: refs/heads/trunk
Commit: 59a4624d5f9b2c414b200e65b45beed9c5f4db52
Parents: bfecdf5 6b00767
Author: Andrés de la Peña 
Authored: Thu Feb 1 20:29:41 2018 +
Committer: Andrés de la Peña 
Committed: Thu Feb 1 20:29:41 2018 +

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/cql3/ReservedKeywords.java | 6 +-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/59a4624d/CHANGES.txt
--
diff --cc CHANGES.txt
index a2e3654,2d7d8f7..91b3ed8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,195 -1,14 +1,196 @@@
 +4.0
 + * Non-disruptive seed node list reload (CASSANDRA-14190)
 + * Nodetool tablehistograms to print statics for all the tables 
(CASSANDRA-14185)
 + * Migrate dtests to use pytest and python3 (CASSANDRA-14134)
 + * Allow storage port to be configurable per node (CASSANDRA-7544)
 + * Make sub-range selection for non-frozen collections return null instead of 
empty (CASSANDRA-14182)
 + * BloomFilter serialization format should not change byte ordering 
(CASSANDRA-9067)
 + * Remove unused on-heap BloomFilter implementation (CASSANDRA-14152)
 + * Delete temp test files on exit (CASSANDRA-14153)
 + * Make PartitionUpdate and Mutation immutable (CASSANDRA-13867)
 + * Fix CommitLogReplayer exception for CDC data (CASSANDRA-14066)
 + * Fix cassandra-stress startup failure (CASSANDRA-14106)
 + * Remove initialDirectories from CFS (CASSANDRA-13928)
 + * Fix trivial log format error (CASSANDRA-14015)
 + * Allow sstabledump to do a json object per partition (CASSANDRA-13848)
 + * Add option to optimise merkle tree comparison across replicas 
(CASSANDRA-3200)
 + * Remove unused and deprecated methods from AbstractCompactionStrategy 
(CASSANDRA-14081)
 + * Fix Distribution.average in cassandra-stress (CASSANDRA-14090)
 + * Support a means of logging all queries as they were invoked 
(CASSANDRA-13983)
 + * Presize collections (CASSANDRA-13760)
 + * Add GroupCommitLogService (CASSANDRA-13530)
 + * Parallelize initial materialized view build (CASSANDRA-12245)
 + * Fix flaky SecondaryIndexManagerTest.assert[Not]MarkedAsBuilt 
(CASSANDRA-13965)
 + * Make LWTs send resultset metadata on every request (CASSANDRA-13992)
 + * Fix flaky indexWithFailedInitializationIsNotQueryableAfterPartialRebuild 
(CASSANDRA-13963)
 + * Introduce leaf-only iterator (CASSANDRA-9988)
 + * Upgrade Guava to 23.3 and Airline to 0.8 (CASSANDRA-13997)
 + * Allow only one concurrent call to StatusLogger (CASSANDRA-12182)
 + * Refactoring to specialised functional interfaces (CASSANDRA-13982)
 + * Speculative retry should allow more friendly params (CASSANDRA-13876)
 + * Throw exception if we send/receive repair messages to incompatible nodes 
(CASSANDRA-13944)
 + * Replace usages of MessageDigest with Guava's Hasher (CASSANDRA-13291)
 + * Add nodetool cmd to print hinted handoff window (CASSANDRA-13728)
 + * Fix some alerts raised by static analysis (CASSANDRA-13799)
 + * Checksum sstable metadata (CASSANDRA-13321, CASSANDRA-13593)
 + * Add result set metadata to prepared statement MD5 hash calculation 
(CASSANDRA-10786)
 + * Refactor GcCompactionTest to avoid boxing (CASSANDRA-13941)
 + * Expose recent histograms in JmxHistograms (CASSANDRA-13642)
 + * Fix buffer length comparison when decompressing in netty-based streaming 
(CASSANDRA-13899)
 + * Properly close StreamCompressionInputStream to release any ByteBuf 
(CASSANDRA-13906)
 + * Add SERIAL and LOCAL_SERIAL support for cassandra-stress (CASSANDRA-13925)
 + * LCS needlessly checks for L0 STCS candidates multiple times 
(CASSANDRA-12961)
 + * Correctly close netty channels when a stream session ends (CASSANDRA-13905)
 + * Update lz4 to 1.4.0 (CASSANDRA-13741)
 + * Optimize Paxos prepare and propose stage for local requests 
(CASSANDRA-13862)
 + * Throttle base partitions during MV repair streaming to prevent OOM 
(CASSANDRA-13299)
 + * Use compaction threshold for STCS in L0 (CASSANDRA-13861)
 + * Fix problem with min_compress_ratio: 1 and disallow ratio < 1 
(CASSANDRA-13703)
 + * Add extra information to SASI timeout exception (CASSANDRA-13677)
 + * Add incremental repair support for --hosts, --force, and subrange repair 
(CASSANDRA-13818)
 + * Rework CompactionStrategyManager.getScanners synchronization 
(CASSANDRA-13786)
 + * Add additional unit tests for batch behavior, TTLs, Timestamps 

[1/3] cassandra git commit: Add missed DEFAULT, UNSET, MBEAN and MBEANS keywords to `ReservedKeywords`

2018-02-01 Thread adelapena
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 b8c12fba0 -> 6b0076742
  refs/heads/trunk bfecdf520 -> 59a4624d5


Add missed DEFAULT, UNSET, MBEAN and MBEANS keywords to `ReservedKeywords`


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b007674
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b007674
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b007674

Branch: refs/heads/cassandra-3.11
Commit: 6b00767427706124e016e4f471c2266899387163
Parents: b8c12fb
Author: Andrés de la Peña 
Authored: Wed Jan 31 13:53:18 2018 +
Committer: Andrés de la Peña 
Committed: Thu Feb 1 20:22:03 2018 +

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/cql3/ReservedKeywords.java | 6 +-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b007674/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 81b358d..2d7d8f7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.2
+ * Add DEFAULT, UNSET, MBEAN and MBEANS to `ReservedKeywords` (CASSANDRA-14205)
  * Add Unittest for schema migration fix (CASSANDRA-14140)
  * Print correct snitch info from nodetool describecluster (CASSANDRA-13528)
  * Close socket on error during connect on OutboundTcpConnection 
(CASSANDRA-9630)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b007674/src/java/org/apache/cassandra/cql3/ReservedKeywords.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ReservedKeywords.java 
b/src/java/org/apache/cassandra/cql3/ReservedKeywords.java
index ee052a7..30b1a6e 100644
--- a/src/java/org/apache/cassandra/cql3/ReservedKeywords.java
+++ b/src/java/org/apache/cassandra/cql3/ReservedKeywords.java
@@ -85,7 +85,11 @@ class ReservedKeywords
  "NAN",
  "INFINITY",
  "OR",
- "REPLACE" };
+ "REPLACE",
+ "DEFAULT",
+ "UNSET",
+ "MBEAN",
+ "MBEANS"};
 
 private static final Set<String> reservedSet = 
ImmutableSet.copyOf(reservedKeywords);
 





[jira] [Commented] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-02-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349197#comment-16349197
 ] 

Ariel Weisberg commented on CASSANDRA-7544:
---

bq. I don't think this is accurate. Protocol does not dictate which tables are 
present, server version does.
Driver support isn't merged yet, so if we want to use the server version to do 
that, we can.

bq. On another note, is there a reason this was implemented as a new system 
table (system.peers_v2), rather than new columns in the existing system.peers? I
The primary key changed. We don't support schema changes of the primary key so 
there has to be a new table.

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kinds of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 






[jira] [Updated] (CASSANDRA-14055) Index redistribution breaks SASI index

2018-02-01 Thread Jordan West (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-14055:

Reviewer: Jordan West  (was: Alex Petrov)

> Index redistribution breaks SASI index
> --
>
> Key: CASSANDRA-14055
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14055
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Ludovic Boutros
>Assignee: Ludovic Boutros
>Priority: Major
>  Labels: patch
> Fix For: 3.11.2
>
> Attachments: CASSANDRA-14055.patch, CASSANDRA-14055.patch, 
> CASSANDRA-14055.patch
>
>
> During the index redistribution process, a new view is created.
> During this creation, old indexes should be released.
> But new indexes are "attached" to the same SSTable as the old indexes.
> This leads to the deletion of the last SASI index file and breaks the index.
> The issue is in this function : 
> [https://github.com/apache/cassandra/blob/9ee44db49b13d4b4c91c9d6332ce06a6e2abf944/src/java/org/apache/cassandra/index/sasi/conf/view/View.java#L62]






[jira] [Commented] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-02-01 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349193#comment-16349193
 ] 

Adam Holmberg commented on CASSANDRA-7544:
--

Can we also discuss how the node metadata is implemented? 
{quote}The clients use the protocol version to select the correct system tables 
when querying metadata.
{quote}
I don't think this is accurate. Protocol does not dictate which tables are 
present, server version does.

On another note, is there a reason this was implemented as a new system table 
({{system.peers_v2}}), rather than new columns in the existing 
{{system.peers}}? I think it might be cleaner from the client perspective if 
the existing table is expanded, rather than having to probe or conditionally 
query different tables.

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kinds of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 






[jira] [Updated] (CASSANDRA-14211) Revert ProtocolVersion changes from CASSANDRA-7544

2018-02-01 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14211:
---
Status: Patch Available  (was: Open)

https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-14211?expand=1
https://circleci.com/gh/aweisberg/cassandra/tree/cassandra-14211

> Revert ProtocolVersion changes from CASSANDRA-7544
> --
>
> Key: CASSANDRA-14211
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14211
> Project: Cassandra
>  Issue Type: Task
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>







[jira] [Commented] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-02-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349143#comment-16349143
 ] 

Ariel Weisberg commented on CASSANDRA-7544:
---

Working on reverting the protocol version changes now. CASSANDRA-14211.

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kinds of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 






[jira] [Created] (CASSANDRA-14211) Revert ProtocolVersion changes from CASSANDRA-7544

2018-02-01 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-14211:
--

 Summary: Revert ProtocolVersion changes from CASSANDRA-7544
 Key: CASSANDRA-14211
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14211
 Project: Cassandra
  Issue Type: Task
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 4.0









[jira] [Commented] (CASSANDRA-14206) Python 3 LooseVersion breaks compatibility with Python 2.7 LooseVersion

2018-02-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349081#comment-16349081
 ] 

Ariel Weisberg commented on CASSANDRA-14206:


There are a whole bunch of issues: 
https://github.com/aweisberg/cassandra-dtest/tree/cassandra-14206

And I needed to tweak the loose comparison further for different-length version 
strings. We'll have to merge up at some point. Right now I am busy with the 7544 
revert of protocol version changes.

> Python 3 LooseVersion breaks compatibility with Python 2.7 LooseVersion
> ---
>
> Key: CASSANDRA-14206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14206
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Attachments: loose_version.diff
>
>
> In 2.7 it uses the cmp built-in to compare the list of version components, 
> which accepts comparisons of strings and integers. In Python 3 it manually 
> compares each using <, ==, and >, which can fail if the types don't match.
> Switch to using our own comparison function that preserves the old behavior.
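The cmp()-style comparison described above can be sketched as follows. The function name and the tie-break rule for int-vs-str components (ints sort first, mirroring CPython 2's "numbers compare smaller than other types" ordering) are assumptions for illustration, not the exact dtest patch:

```python
def cmp_version_components(a, b):
    """Return -1, 0 or 1 comparing two lists of version components
    (ints and strs), element-wise, tolerating mixed types the way
    Python 2's cmp() did."""
    for x, y in zip(a, b):
        if type(x) is type(y):
            if x != y:
                return -1 if x < y else 1
        else:
            # Mixed int/str pair: Python 3's <, == and > would raise
            # TypeError here; instead order ints before strings
            # (assumed rule, matching CPython 2's cross-type ordering).
            return -1 if isinstance(x, int) else 1
    # All shared components equal: the longer component list wins.
    return (len(a) > len(b)) - (len(a) < len(b))
```

For example, comparing `[3, 11, 'a']` with `[3, 11, 2]` no longer raises; the string component is simply ordered after the integer one.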






[jira] [Commented] (CASSANDRA-14206) Python 3 LooseVersion breaks compatibility with Python 2.7 LooseVersion

2018-02-01 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349035#comment-16349035
 ] 

Sam Tunnicliffe commented on CASSANDRA-14206:
-

Running the new dtests against 3.11 results in a bunch of failures at the 
moment. Some are legit failures and some are problems with the tests. I've got 
a branch where I'm pushing fixes to the test plumbing (including the attached 
diff): https://github.com/beobal/cassandra-dtest/tree/14206

> Python 3 LooseVersion breaks compatibility with Python 2.7 LooseVersion
> ---
>
> Key: CASSANDRA-14206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14206
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Attachments: loose_version.diff
>
>
> In 2.7 it uses the cmp built-in to compare the list of version components, 
> which accepts comparisons of strings and integers. In Python 3 it manually 
> compares each using <, ==, and >, which can fail if the types don't match.
> Switch to using our own comparison function that preserves the old behavior.






[jira] [Issue Comment Deleted] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-02-01 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-7544:
--
Comment: was deleted

(was: bq. ISTM that a discussion on the dev list is warranted.  That's a pretty 
big "side effect" of this patch.

It's not clear that it is? It sounds like you know something? I'd be happy to 
accept a patch )

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kind of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 
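Change (1) above, configuring seeds as IP:port, could look roughly like the following sketch. The function and default-port names are illustrative, not from the actual patch, and IPv6 literals are ignored for simplicity:

```python
# Hypothetical sketch of change (1): parsing a seed list where each entry may
# carry an explicit storage port, falling back to the cluster-wide default.
DEFAULT_STORAGE_PORT = 7000  # Cassandra's default storage_port

def parse_seed(entry, default_port=DEFAULT_STORAGE_PORT):
    """Split 'host[:port]' into (host, port)."""
    host, sep, port = entry.rpartition(':')
    if sep and port.isdigit():
        return host, int(port)
    return entry, default_port

seeds = [parse_seed(s) for s in ['10.0.0.1', '10.0.0.2:7001']]
assert seeds == [('10.0.0.1', 7000), ('10.0.0.2', 7001)]
```

Change (2) would then gossip the parsed port in each node's ApplicationState so peers can dial the right endpoint.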






[jira] [Commented] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-02-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348921#comment-16348921
 ] 

Ariel Weisberg commented on CASSANDRA-7544:
---

bq. ISTM that a discussion on the dev list is warranted.  That's a pretty big 
"side effect" of this patch.

It's not clear that it is? It sounds like you know something? I'd be happy to 
accept a patch 

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kind of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 






[jira] [Assigned] (CASSANDRA-14209) group by select queries query results differ when using select * vs select fields

2018-02-01 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-14209:
--

Assignee: Benjamin Lerer

> group by select queries query results differ when using select * vs select 
> fields
> -
>
> Key: CASSANDRA-14209
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14209
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Digant Modha
>Assignee: Benjamin Lerer
>Priority: Minor
> Attachments: Re group by select queries.txt
>
>
> {{I get two different outputs with these 2 queries.  The only difference between 
> the 2 queries is that one does ‘select *’ and other does ‘select specific 
> fields’ without any aggregate functions.}}
> {{I am using Apache Cassandra 3.10.}}
> {{Consistency level set to LOCAL_QUORUM.}}
> {{cassandra@cqlsh> select * from wp.position where account_id = 'user_1';}}
> {{ account_id | security_id | counter | avg_exec_price | pending_quantity | quantity | transaction_id | update_time}}
> {{------------+-------------+---------+----------------+------------------+----------+----------------+-------------}}
> {{ user_1 | AMZN | 2 | 1239.2 | 0 | 1011 | null | 2018-01-25 17:18:07.158000+}}
> {{ user_1 | AMZN | 1 | 1239.2 | 0 | 1010 | null | 2018-01-25 17:18:07.158000+}}
> {{(2 rows)}}
> {{cassandra@cqlsh> select * from wp.position where account_id = 'user_1' group by security_id;}}
> {{ account_id | security_id | counter | avg_exec_price | pending_quantity | quantity | transaction_id | update_time}}
> {{------------+-------------+---------+----------------+------------------+----------+----------------+-------------}}
> {{ user_1 | AMZN | 1 | 1239.2 | 0 | 1010 | null | 2018-01-25 17:18:07.158000+}}
> {{(1 rows)}}
> {{cassandra@cqlsh> select account_id, security_id, counter, avg_exec_price, quantity, update_time from wp.position where account_id = 'user_1' group by security_id;}}
> {{ account_id | security_id | counter | avg_exec_price | quantity | update_time}}
> {{------------+-------------+---------+----------------+----------+-------------}}
> {{ user_1 | AMZN | 2 | 1239.2 | 1011 | 2018-01-25 17:18:07.158000+}}
> {{(1 rows)}}
> {{Table Description:}}
> {{CREATE TABLE wp.position (}}
> {{ account_id text,}}
> {{ security_id text,}}
> {{ counter bigint,}}
> {{ avg_exec_price double,}}
> {{ pending_quantity double,}}
> {{ quantity double,}}
> {{ transaction_id uuid,}}
> {{ update_time timestamp,}}
> {{ PRIMARY KEY (account_id, security_id, counter)}}
> {{) WITH CLUSTERING ORDER BY (security_id ASC, counter DESC)}}






[jira] [Commented] (CASSANDRA-7544) Allow storage port to be configurable per node

2018-02-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348816#comment-16348816
 ] 

Jonathan Ellis commented on CASSANDRA-7544:
---

{quote}I'm not sure how we intended that to work. We don't have trunk releases 
so what is the expectation there from the perspective of clients?
{quote}
ISTM that a discussion on the dev list is warranted.  That's a pretty big "side 
effect" of this patch.

> Allow storage port to be configurable per node
> --
>
> Key: CASSANDRA-7544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7544
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Overton
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> Currently storage_port must be configured identically on all nodes in a 
> cluster and it is assumed that this is the case when connecting to a remote 
> node.
> This prevents running in any environment that requires multiple nodes to be 
> able to bind to the same network interface, such as with many automatic 
> provisioning/deployment frameworks.
> The current solutions seem to be
> * use a separate network interface for each node deployed to the same box. 
> This puts a big requirement on IP allocation at large scale.
> * allow multiple clusters to be provisioned from the same resource pool, but 
> restrict allocation to a maximum of one node per host from each cluster, 
> assuming each cluster is running on a different storage port.
> It would make operations much simpler in these kind of environments if the 
> environment provisioning the resources could assign the ports to be used when 
> bringing up a new node on shared hardware.
> The changes required would be at least the following:
> 1. configure seeds as IP:port instead of just IP
> 2. gossip the storage port as part of a node's ApplicationState
> 3. refer internally to nodes by hostID instead of IP, since there will be 
> multiple nodes with the same IP
> (1) & (2) are mostly trivial and I already have a patch for these. The bulk 
> of the work to enable this is (3), and I would structure this as a separate 
> pre-requisite patch. 






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-02-01 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348723#comment-16348723
 ] 

Norman Maurer commented on CASSANDRA-13929:
---

[~tsteinmaurer] what about a heap dump? Is this something you could provide?

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
>     if (recycleHandle != null)
>     {
>         this.cleanup();
>         builderRecycler.recycle(this, recycleHandle);
>         recycleHandle = null; // ADDED
>     }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.
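The effect of the added line can be illustrated with a minimal pool analogy in Python. This is illustrative only; `PooledBuilder` and its members are hypothetical stand-ins for the BTree builder and its recycle handle:

```python
# A pooled object that returns itself to the pool exactly once, then drops its
# handle so a second recycle() cannot push a stale reference onto the pool.
class PooledBuilder:
    def __init__(self, pool):
        self._pool = pool          # plays the role of the Recycler stack
        self._handle = object()    # plays the role of recycleHandle

    def recycle(self):
        if self._handle is not None:
            self._cleanup()
            self._pool.append(self)
            self._handle = None    # the ADDED line: makes recycle idempotent

    def _cleanup(self):
        pass                       # clear builder state before reuse

pool = []
b = PooledBuilder(pool)
b.recycle()
b.recycle()                        # second call is now a no-op
assert len(pool) == 1              # without the guard, b would be pooled twice
```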






[jira] [Created] (CASSANDRA-14210) Optimize SSTables upgrade task scheduling

2018-02-01 Thread Oleksandr Shulgin (JIRA)
Oleksandr Shulgin created CASSANDRA-14210:
-

 Summary: Optimize SSTables upgrade task scheduling
 Key: CASSANDRA-14210
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14210
 Project: Cassandra
  Issue Type: Improvement
  Components: Compaction
Reporter: Oleksandr Shulgin


When starting the SSTable-rewrite process by running {{nodetool upgradesstables 
--jobs N}}, with N > 1, not all of the provided N slots are used.

For example, we were testing with {{concurrent_compactors=5}} and {{N=4}}.  
What we observed, both for version 2.2 and 3.0, is that initially all 4 provided 
slots are used for "Upgrade sstables" compactions, but later, when some of the 4 
tasks have finished, no new tasks are scheduled immediately.  It takes the last 
of the 4 tasks to finish before 4 new tasks are scheduled.  This happens 
on every node we've observed.

This doesn't utilize the available resources to the full extent allowed by the 
--jobs N parameter.  In the field, on a cluster of 12 nodes with 4-5 TiB of data 
each, we've seen the whole process take more than 7 days, instead of an 
estimated 1.5-2 days (assuming close-to-full utilization of the N slots).

Instead, new tasks should be scheduled as soon as there is a free compaction 
slot.
Additionally, starting from the biggest SSTables could further reduce the total 
time required for the whole process to finish on any given node.
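The difference between the current batch-style scheduling and the proposed refill-on-free scheduling can be simulated with task durations. This is an illustrative sketch, not Cassandra code:

```python
import heapq

def batch_schedule(durations, slots):
    """Observed behavior: run tasks in batches of `slots`, waiting for the
    whole batch to finish before starting the next one."""
    total = 0
    for i in range(0, len(durations), slots):
        total += max(durations[i:i + slots])
    return total

def pool_schedule(durations, slots):
    """Proposed behavior: start a new task as soon as any slot frees up."""
    finish = [0] * slots           # min-heap of per-slot finish times
    for d in durations:
        t = heapq.heappop(finish)  # earliest-free slot
        heapq.heappush(finish, t + d)
    return max(finish)

durations = [4, 1, 1, 1, 4, 1, 1, 1]
assert batch_schedule(durations, 4) == 8  # each batch gated by its slowest task
assert pool_schedule(durations, 4) == 5   # short tasks backfill freed slots
```

Sorting `durations` in descending order before scheduling models the "start with the biggest SSTables" suggestion, a classic longest-processing-time heuristic.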






[jira] [Created] (CASSANDRA-14209) group by select queries query results differ when using select * vs select fields

2018-02-01 Thread Digant Modha (JIRA)
Digant Modha created CASSANDRA-14209:


 Summary: group by select queries query results differ when using 
select * vs select fields
 Key: CASSANDRA-14209
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14209
 Project: Cassandra
  Issue Type: Bug
Reporter: Digant Modha
 Attachments: Re group by select queries.txt

{{I get two different outputs with these 2 queries.  The only difference between 
the 2 queries is that one does ‘select *’ and other does ‘select specific 
fields’ without any aggregate functions.}}

{{I am using Apache Cassandra 3.10.}}


{{Consistency level set to LOCAL_QUORUM.}}
{{cassandra@cqlsh> select * from wp.position where account_id = 'user_1';}}

{{ account_id | security_id | counter | avg_exec_price | pending_quantity | quantity | transaction_id | update_time}}
{{------------+-------------+---------+----------------+------------------+----------+----------------+-------------}}
{{ user_1 | AMZN | 2 | 1239.2 | 0 | 1011 | null | 2018-01-25 17:18:07.158000+}}
{{ user_1 | AMZN | 1 | 1239.2 | 0 | 1010 | null | 2018-01-25 17:18:07.158000+}}

{{(2 rows)}}
{{cassandra@cqlsh> select * from wp.position where account_id = 'user_1' group by security_id;}}

{{ account_id | security_id | counter | avg_exec_price | pending_quantity | quantity | transaction_id | update_time}}
{{------------+-------------+---------+----------------+------------------+----------+----------------+-------------}}
{{ user_1 | AMZN | 1 | 1239.2 | 0 | 1010 | null | 2018-01-25 17:18:07.158000+}}

{{(1 rows)}}
{{cassandra@cqlsh> select account_id, security_id, counter, avg_exec_price, quantity, update_time from wp.position where account_id = 'user_1' group by security_id;}}

{{ account_id | security_id | counter | avg_exec_price | quantity | update_time}}
{{------------+-------------+---------+----------------+----------+-------------}}
{{ user_1 | AMZN | 2 | 1239.2 | 1011 | 2018-01-25 17:18:07.158000+}}

{{(1 rows)}}


{{Table Description:}}
{{CREATE TABLE wp.position (}}
{{ account_id text,}}
{{ security_id text,}}
{{ counter bigint,}}
{{ avg_exec_price double,}}
{{ pending_quantity double,}}
{{ quantity double,}}
{{ transaction_id uuid,}}
{{ update_time timestamp,}}
{{ PRIMARY KEY (account_id, security_id, counter)}}
{{) WITH CLUSTERING ORDER BY (security_id ASC, counter DESC)}}






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-02-01 Thread Thomas Steinmaurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348576#comment-16348576
 ] 

Thomas Steinmaurer commented on CASSANDRA-13929:


[~jay.zhuang], will let the node (1 out of 9), previously manually upgraded 
to Netty 4.0.55, run with the Netty JVM option you mentioned over the 
weekend. Thanks.

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
>     if (recycleHandle != null)
>     {
>         this.cleanup();
>         builderRecycler.recycle(this, recycleHandle);
>         recycleHandle = null; // ADDED
>     }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.






[jira] [Commented] (CASSANDRA-13933) Handle mutateRepaired failure in nodetool verify

2018-02-01 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348504#comment-16348504
 ] 

Marcus Eriksson commented on CASSANDRA-13933:
-

no it is fine, I can rebase on top of your patch

> Handle mutateRepaired failure in nodetool verify
> 
>
> Key: CASSANDRA-13933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13933
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Sumanth Pasupuleti
>Priority: Major
>  Labels: lhf
>
> See comment here: 
> https://issues.apache.org/jira/browse/CASSANDRA-13922?focusedCommentId=16189875=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16189875






[jira] [Commented] (CASSANDRA-14092) Max ttl of 20 years will overflow localDeletionTime

2018-02-01 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348437#comment-16348437
 ] 

Sam Tunnicliffe commented on CASSANDRA-14092:
-

Sounds good to me. Reject and optionally cap at input time, plus an offline 
route to full recovery (if possible & desired) via scrub sounds like the best 
way forward IMO. I'll do a full review when you have the cleaned up patches 
ready, but the WIP branches generally LGTM at first glance. The wording of the 
NEWS.txt entry is good; I do wonder if we should maybe place it right at the 
top of the file rather than just in the 3.0.16 section for extra emphasis. Any 
thoughts on that? 

I also have one piece of feedback on the policies; I don't see any benefit in 
being able to turn off logging of capped expirations (especially since we're 
using NoSpamLogger) but I do think the client warning is useful. So I would 
change the policies to:
{noformat}
- REJECT (default and as you've defined it)
- CAP (cap, log and issue client warning)
- CAP_NOWARN (cap and log)
{noformat}
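A rough sketch of how the three policies above could behave at write time. The enum and function names are hypothetical illustrations, not the actual patch:

```python
import enum

INT_MAX = 2**31 - 1  # maximum representable localDeletionTime, in seconds

class ExpirationPolicy(enum.Enum):
    REJECT = 'reject the write'                 # default
    CAP = 'cap, log and issue client warning'
    CAP_NOWARN = 'cap and log'

def apply_policy(expiry, policy):
    """Return the (possibly capped) expiry plus the notifications to emit."""
    if expiry <= INT_MAX:
        return expiry, []
    if policy is ExpirationPolicy.REJECT:
        raise ValueError('expiry %d exceeds maximum supported date' % expiry)
    notices = ['log'] if policy is ExpirationPolicy.CAP_NOWARN else ['log', 'client warning']
    return INT_MAX, notices

capped, notices = apply_policy(INT_MAX + 100, ExpirationPolicy.CAP)
assert capped == INT_MAX and notices == ['log', 'client warning']
```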

I also noticed that the logging of a parse error/invalid value for the policy 
sysprop is at DEBUG in the current patches, but it might be sensible to draw a 
bit more attention to that if it happens.


> Max ttl of 20 years will overflow localDeletionTime
> ---
>
> Key: CASSANDRA-14092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Blocker
> Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
>
>
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.
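The overflow arithmetic quoted above works out as follows, in seconds, using a fixed 2018-02-01 timestamp for reproducibility:

```python
# localDeletionTime is stored as seconds since the epoch in a signed 32-bit int.
INT_MAX = 2**31 - 1                   # 2147483647, i.e. 2038-01-19
MAX_TTL = 20 * 365 * 24 * 60 * 60     # 20-year cap in seconds: 630720000

now = 1517443200                      # 2018-02-01 00:00:00 UTC
local_deletion_time = now + MAX_TTL
assert local_deletion_time > INT_MAX  # 2148163200 > 2147483647: overflow
```

So from early 2018 onward, writes using the full 20-year TTL already push `localDeletionTime` past the 2038 limit.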






[jira] [Commented] (CASSANDRA-14055) Index redistribution breaks SASI index

2018-02-01 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348404#comment-16348404
 ] 

Ludovic Boutros commented on CASSANDRA-14055:
-

Hi @Jordan,

first, thank you for reviewing this patch.
I will try to give answers to your questions.

Your global analysis is correct. The idea of this patch was to change as few 
things as possible.
I do not see any other failure scenarios currently.
We are using this patch in production with success since the end of November.

Regarding the {{keepFile}} change, with my last patch, I can reproduce the file 
deletion with the {{forceFlush}} boolean set to {{true}}.

You can just add a conditional breakpoint with {{keepFile && (obsolete.get() || 
sstableRef.globalCount() == 0)}} in the {{release}} function.
It will stop on each attempt of index redistribution with {{forceFlush}} active 
(second part of the test).

With my limited knowledge of the global code, I did not see any issue in the 
reference counting process with my patch.
But again, I'm quite new with this code :).

> Index redistribution breaks SASI index
> --
>
> Key: CASSANDRA-14055
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14055
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Ludovic Boutros
>Assignee: Ludovic Boutros
>Priority: Major
>  Labels: patch
> Fix For: 3.11.2
>
> Attachments: CASSANDRA-14055.patch, CASSANDRA-14055.patch, 
> CASSANDRA-14055.patch
>
>
> During the index redistribution process, a new view is created.
> During this creation, old indexes should be released.
> But, new indexes are "attached" to the same SSTable as the old indexes.
> This leads to the deletion of the last SASI index file and breaks the index.
> The issue is in this function : 
> [https://github.com/apache/cassandra/blob/9ee44db49b13d4b4c91c9d6332ce06a6e2abf944/src/java/org/apache/cassandra/index/sasi/conf/view/View.java#L62]






[jira] [Comment Edited] (CASSANDRA-14055) Index redistribution breaks SASI index

2018-02-01 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348404#comment-16348404
 ] 

Ludovic Boutros edited comment on CASSANDRA-14055 at 2/1/18 11:04 AM:
--

Hi [~jrwest],

first, thank you for reviewing this patch.
 I will try to give answers to your questions.

Your global analysis is correct. The idea of this patch was to change as few 
things as possible.
 I do not see any other failure scenarios currently.
We are using this patch in production with success since the end of November.

Regarding the {{keepFile}} change, with my last patch, I can reproduce the file 
deletion with the {{forceFlush}} boolean set to {{true}}.

You can just add a conditional breakpoint with {{keepFile && (obsolete.get() || 
sstableRef.globalCount() == 0)}} in the {{release}} function.
 It will stop on each attempt of index redistribution with {{forceFlush}} 
active (second part of the test).

With my limited knowledge of the global code, I did not see any issue in the 
reference counting process with my patch.
 But again, I'm quite new with this code :).


was (Author: lboutros):
Hi @Jordan,

first, thank you for reviewing this patch.
I will try to give answers to your questions.

Your global analysis is correct. The idea of this patch was to change as few 
things as possible.
I do not see any other failure scenarios currently.
We are using this patch in production with success since the end of november.

Regarding the {{keepFile}} change, with my last patch, I can reproduce the file 
deletion with the {{forceFlush}} boolean set to {{true}}.

You can just add a conditional breakpoint with {{keepFile && (obsolete.get() || 
sstableRef.globalCount() == 0)}} in the {{release}} function.
It will stop on each attempt of index redistribution with {{forceFlush}} active 
(second part of the test).

With my limited knowledge of the global code, I did not see any issue in the 
reference counting process with my patch.
But again, I'm quite new with this code :).

> Index redistribution breaks SASI index
> --
>
> Key: CASSANDRA-14055
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14055
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Ludovic Boutros
>Assignee: Ludovic Boutros
>Priority: Major
>  Labels: patch
> Fix For: 3.11.2
>
> Attachments: CASSANDRA-14055.patch, CASSANDRA-14055.patch, 
> CASSANDRA-14055.patch
>
>
> During the index redistribution process, a new view is created.
> During this creation, old indexes should be released.
> But, new indexes are "attached" to the same SSTable as the old indexes.
> This leads to the deletion of the last SASI index file and breaks the index.
> The issue is in this function : 
> [https://github.com/apache/cassandra/blob/9ee44db49b13d4b4c91c9d6332ce06a6e2abf944/src/java/org/apache/cassandra/index/sasi/conf/view/View.java#L62]






[jira] [Commented] (CASSANDRA-14056) Many dtests fail with ConfigurationException: offheap_objects are not available in 3.0 when OFFHEAP_MEMTABLES="true"

2018-02-01 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348393#comment-16348393
 ] 

Jason Brown commented on CASSANDRA-14056:
-

[~alourie] Thanks for the patch. Is this still a problem after CASSANDRA-14134? 
At a minimum, [~mkjellman] killed off the environment variables, and the literal 
string "OFFHEAP_MEMTABLES" doesn't exist in {{master}} anymore.

> Many dtests fail with ConfigurationException: offheap_objects are not 
> available in 3.0 when OFFHEAP_MEMTABLES="true"
> 
>
> Key: CASSANDRA-14056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14056
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Alex Lourie
>Priority: Major
>
> Tons of dtests are running when they shouldn't as it looks like the path is 
> no longer supported. We need to add a bunch of logic that's missing to fully 
> support running dtests with off-heap memtables enabled (via the 
> OFFHEAP_MEMTABLES="true" environment variable)
> {code}[node2 ERROR] java.lang.ExceptionInInitializerError
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:394)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:361)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:577)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:554)
>   at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
>   at org.apache.cassandra.db.Keyspace.(Keyspace.java:305)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
>   at 
> org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:887)
>   at 
> org.apache.cassandra.service.StartupChecks$9.execute(StartupChecks.java:354)
>   at 
> org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:110)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:179)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: 
> offheap_objects are not available in 3.0. They will be re-introduced in a 
> future release, see https://issues.apache.org/jira/browse/CASSANDRA-9472 for 
> details
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.getMemtableAllocatorPool(DatabaseDescriptor.java:1907)
>   at org.apache.cassandra.db.Memtable.(Memtable.java:65)
>   ... 14 more
> {code}






[jira] [Resolved] (CASSANDRA-14208) space is 100 percent full on one node and other nodes we have free space

2018-02-01 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown resolved CASSANDRA-14208.
-
Resolution: Invalid

Please send an email to u...@cassandra.apache.org for help with this. JIRA 
is for bugs or features.

> space is 100 percent full on one node and other nodes we have free space 
> -
>
> Key: CASSANDRA-14208
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14208
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: chinta kiran
>Priority: Critical
> Fix For: 3.0.x
>
>
> We have a 3-node cluster. One node is 100 percent full; on the other 
> 2 nodes we have a little bit of space. How do we reclaim space on node3, 
> which has only 1 GB free out of 300 GB?






[jira] [Updated] (CASSANDRA-13665) nodetool clientlist

2018-02-01 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-13665:

Reviewer: Jason Brown

> nodetool clientlist
> ---
>
> Key: CASSANDRA-13665
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13665
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jon Haddad
>Assignee: Chris Lohfink
>Priority: Major
>
> There should exist a nodetool command that lists each client connection. 
> Ideally it would display the following:
>  * host
>  * protocol version
>  * user logged in as
>  * current keyspace
>  * total queries executed
>  * ssl connections






[jira] [Updated] (CASSANDRA-14205) ReservedKeywords class is missing some reserved CQL keywords

2018-02-01 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-14205:
---
Status: Ready to Commit  (was: Patch Available)

> ReservedKeywords class is missing some reserved CQL keywords
> 
>
> Key: CASSANDRA-14205
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14205
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Major
> Fix For: 3.11.x, 4.x
>
>
> The CQL keywords {{DEFAULT}}, {{UNSET}}, {{MBEAN}} and {{MBEANS}} (introduced 
> by CASSANDRA-11424 and CASSANDRA-10091) are neither considered [unreserved 
> keywords|https://github.com/apache/cassandra/blob/trunk/src/antlr/Parser.g#L1788-L1846]
>  by the ANTLR parser, nor included in the 
> [{{ReservedKeywords}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/cql3/ReservedKeywords.java]
>  class.
> The current parser behaviour is to treat them as reserved keywords, in the 
> sense that they can't be used as keyspace/table/column names, which seems 
> right:
> {code:java}
> cassandra@cqlsh> CREATE KEYSPACE unset WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': 1};
> SyntaxException: line 1:16 no viable alternative at input 'unset' (CREATE 
> KEYSPACE [unset]...)
> {code}
> I think we should keep considering these keywords as reserved and add them to 
> the {{ReservedKeywords}} class.
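For illustration only, the pattern the ticket proposes, listing the missing words in a static reserved-word set that identifier validation consults, can be sketched as a hypothetical standalone class (this is not Cassandra's actual ReservedKeywords implementation):

```java
import java.util.Set;

// Hypothetical sketch: identifiers are rejected when they appear in a static
// set of reserved CQL keywords, so DEFAULT/UNSET/MBEAN/MBEANS must be listed
// there to match the ANTLR parser's behaviour. The word list here is a small
// illustrative subset, not the full CQL reserved-keyword list.
public class ReservedWordsDemo {
    private static final Set<String> RESERVED =
            Set.of("SELECT", "FROM", "WHERE", "DEFAULT", "UNSET", "MBEAN", "MBEANS");

    static boolean isReserved(String identifier) {
        // CQL keywords are case-insensitive, so normalise before the lookup.
        return RESERVED.contains(identifier.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(isReserved("unset"));   // true
        System.out.println(isReserved("mytable")); // false
    }
}
```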






[jira] [Commented] (CASSANDRA-14205) ReservedKeywords class is missing some reserved CQL keywords

2018-02-01 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348282#comment-16348282
 ] 

Benjamin Lerer commented on CASSANDRA-14205:


Thanks for the patch. +1.

> ReservedKeywords class is missing some reserved CQL keywords
> 
>
> Key: CASSANDRA-14205
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14205
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Major
> Fix For: 3.11.x, 4.x
>
>






[jira] [Updated] (CASSANDRA-14205) ReservedKeywords class is missing some reserved CQL keywords

2018-02-01 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-14205:
---
Reviewer: Benjamin Lerer

> ReservedKeywords class is missing some reserved CQL keywords
> 
>
> Key: CASSANDRA-14205
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14205
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Major
> Fix For: 3.11.x, 4.x
>
>






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-02-01 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348181#comment-16348181
 ] 

Norman Maurer commented on CASSANDRA-13929:
---

yeah, 4.0.55.Final should have the "fix" as well:

[https://github.com/netty/netty/commit/b386ee3eaf35abd5072992d626de6ae2ccadc6d9#diff-23eafd00fcd66829f8cce343b26c236a]

That said, maybe there are other issues. Would it be possible to share a 
heap dump?

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
>     if (recycleHandle != null)
>     {
>         this.cleanup();
>         builderRecycler.recycle(this, recycleHandle);
>         recycleHandle = null; // ADDED
>     }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects, but I doubt it.
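A minimal, self-contained sketch of the leak pattern described above, using a simplified pool in place of Netty's Recycler (names and structure are illustrative, not Cassandra's actual BTree.Builder):

```java
import java.util.ArrayDeque;

// Why the patch nulls the recycle handle: if the pooled object keeps a strong
// reference to its handle after being recycled, the handle (and anything it
// pins) stays reachable from the pool entry and cannot be collected.
public class RecycleHandleDemo {
    static final class Handle {
        final Object payload; // stands in for the large state a handle may pin
        Handle(Object payload) { this.payload = payload; }
    }

    static final class Builder {
        private Handle recycleHandle;        // deliberately non-final, as in the fix
        private final ArrayDeque<Builder> pool;

        Builder(Handle handle, ArrayDeque<Builder> pool) {
            this.recycleHandle = handle;
            this.pool = pool;
        }

        void recycle() {
            if (recycleHandle != null) {
                pool.push(this);
                recycleHandle = null; // the one-line fix: drop the reference
            }
        }

        boolean handleCleared() { return recycleHandle == null; }
    }

    public static void main(String[] args) {
        ArrayDeque<Builder> pool = new ArrayDeque<>();
        Builder b = new Builder(new Handle(new byte[1024]), pool);
        b.recycle();
        System.out.println(b.handleCleared()); // true: handle no longer pinned
        System.out.println(pool.size());       // 1: builder returned to the pool
    }
}
```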






[jira] [Commented] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2018-02-01 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348178#comment-16348178
 ] 

Alex Lourie commented on CASSANDRA-13010:
-

[~rustyrazorblade] Would you be able to have a look at the patch? Thanks!

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Tools
>Reporter: Jon Haddad
>Assignee: Alex Lourie
>Priority: Major
>  Labels: lhf
> Attachments: 13010.patch, cleanup.png, multiple operations.png
>
>







[jira] [Comment Edited] (CASSANDRA-14080) Handling 0 size hint files during start

2018-02-01 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348176#comment-16348176
 ] 

Alex Lourie edited comment on CASSANDRA-14080 at 2/1/18 8:19 AM:
-

[~iamaleksey] it would be great if you could give feedback on the previous 
comment. Thanks!


was (Author: alourie):
[~iamaleksey] it would be great if you could give a feedback on previous 
comment. Thanks!

> Handling 0 size hint files during start
> ---
>
> Key: CASSANDRA-14080
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14080
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Aleksandr Ivanov
>Assignee: Alex Lourie
>Priority: Major
>
> Continuation of CASSANDRA-12728 bug.
> Problem: Cassandra didn't start due to 0-size hint files.
> Log from v3.0.14:
> {code:java}
> INFO  [main] 2017-11-28 19:10:13,554 StorageService.java:575 - Cassandra 
> version: 3.0.14
> INFO  [main] 2017-11-28 19:10:13,555 StorageService.java:576 - Thrift API 
> version: 20.1.0
> INFO  [main] 2017-11-28 19:10:13,555 StorageService.java:577 - CQL supported 
> versions: 3.4.0 (default: 3.4.0)
> ERROR [main] 2017-11-28 19:10:13,592 CassandraDaemon.java:710 - Exception 
> encountered during startup
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsDescriptor.readFromFile(HintsDescriptor.java:142)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) 
> ~[na:1.8.0_141]
> at java.util.Iterator.forEachRemaining(Iterator.java:116) 
> ~[na:1.8.0_141]
> at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>  ~[na:1.8.0_141]
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) 
> ~[na:1.8.0_141]
> at org.apache.cassandra.hints.HintsCatalog.load(HintsCatalog.java:65) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.hints.HintsService.<init>(HintsService.java:88) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.hints.HintsService.<clinit>(HintsService.java:63) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.StorageProxy.<clinit>(StorageProxy.java:121) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at java.lang.Class.forName0(Native Method) ~[na:1.8.0_141]
> at java.lang.Class.forName(Class.java:264) ~[na:1.8.0_141]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:585)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:570)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:346) 
> [apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.14.jar:3.0.14]
> Caused by: java.io.EOFException: null
> at java.io.RandomAccessFile.readInt(RandomAccessFile.java:803) 
> ~[na:1.8.0_141]
> at 
> org.apache.cassandra.hints.HintsDescriptor.deserialize(HintsDescriptor.java:237)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.hints.HintsDescriptor.readFromFile(HintsDescriptor.java:138)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> ... 20 common frames omitted
> {code}
> After deleting several 0-size hint files, Cassandra started successfully.
> Jeff Jirsa added a comment - Yesterday
> Aleksandr Ivanov, can you open a new JIRA and link it back to this one? It's 
> possible that the original patch didn't consider 0-byte files (I don't have 
> time to go back and look at the commit, and it was long enough ago that I've 
> forgotten) - were all of your files 0 bytes?
> Not all; 8 to 10 hint files were 0 bytes.
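The defensive check this ticket asks for can be sketched as below: skip (rather than crash on) zero-length hint files, since deserializing an empty file throws the EOFException seen in the log at startup. The method name is hypothetical, not Cassandra's actual API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: a hints descriptor needs at least a few bytes, so an
// empty file can never hold one. Treat 0-byte files as ignorable instead of
// letting the descriptor deserialization fail the whole startup.
public class HintFileScan {
    static boolean isReadableHintFile(Path p) throws IOException {
        return Files.isRegularFile(p) && Files.size(p) > 0;
    }

    public static void main(String[] args) throws IOException {
        Path empty = Files.createTempFile("hint-empty", ".hints");
        Path nonEmpty = Files.createTempFile("hint-data", ".hints");
        Files.write(nonEmpty, new byte[] {1, 2, 3, 4});
        System.out.println(isReadableHintFile(empty));    // false: would be skipped
        System.out.println(isReadableHintFile(nonEmpty)); // true: safe to parse
    }
}
```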




[jira] [Commented] (CASSANDRA-14080) Handling 0 size hint files during start

2018-02-01 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348176#comment-16348176
 ] 

Alex Lourie commented on CASSANDRA-14080:
-

[~iamaleksey] it would be great if you could give feedback on the previous 
comment. Thanks!

> Handling 0 size hint files during start
> ---
>
> Key: CASSANDRA-14080
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14080
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Aleksandr Ivanov
>Assignee: Alex Lourie
>Priority: Major
>






[jira] [Commented] (CASSANDRA-14054) testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is flaky: expected <2> but got <1>

2018-02-01 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348175#comment-16348175
 ] 

Alex Lourie commented on CASSANDRA-14054:
-

[~mkjellman] bumping this one too.

> testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is 
> flaky: expected <2> but got <1>
> -
>
> Key: CASSANDRA-14054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14054
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Alex Lourie
>Priority: Major
>
> testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is 
> flaky: expected <2> but got <1>
> Fails about 25% of the time. It is currently our only flaky unit test on 
> trunk so it would be great to get this one fixed up so we can be confident in 
> unit test failures going forward.
> junit.framework.AssertionFailedError: Invalid value for row 0 column 0 (c of 
> type int), expected <2> but got <1>
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:973)
>   at 
> org.apache.cassandra.cql3.ViewTest.testRegularColumnTimestampUpdates(ViewTest.java:380)






[jira] [Commented] (CASSANDRA-14056) Many dtests fail with ConfigurationException: offheap_objects are not available in 3.0 when OFFHEAP_MEMTABLES="true"

2018-02-01 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348166#comment-16348166
 ] 

Alex Lourie commented on CASSANDRA-14056:
-

[~mkjellman] poke. I hope you find some time to have a look at the patch.

> Many dtests fail with ConfigurationException: offheap_objects are not 
> available in 3.0 when OFFHEAP_MEMTABLES="true"
> 
>
> Key: CASSANDRA-14056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14056
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Alex Lourie
>Priority: Major
>
> Tons of dtests are running when they shouldn't, as it looks like the path is 
> no longer supported. We need to add a bunch of logic that's missing to fully 
> support running dtests with off-heap memtables enabled (via the 
> OFFHEAP_MEMTABLES="true" environment variable).
> {code}[node2 ERROR] java.lang.ExceptionInInitializerError
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:394)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:361)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:577)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:554)
>   at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
>   at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
>   at 
> org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:887)
>   at 
> org.apache.cassandra.service.StartupChecks$9.execute(StartupChecks.java:354)
>   at 
> org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:110)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:179)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: 
> offheap_objects are not available in 3.0. They will be re-introduced in a 
> future release, see https://issues.apache.org/jira/browse/CASSANDRA-9472 for 
> details
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.getMemtableAllocatorPool(DatabaseDescriptor.java:1907)
>   at org.apache.cassandra.db.Memtable.<clinit>(Memtable.java:65)
>   ... 14 more
> {code}


