[jira] [Comment Edited] (CASSANDRA-14655) Upgrade C* to use latest guava (27.0)

2019-09-25 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936427#comment-16936427
 ] 

mck edited comment on CASSANDRA-14655 at 9/25/19 3:01 PM:
--

[~sumanth.pasupuleti],
 - unit tests look good,
 - I think the {{"cassandra-driver-core"}} lines in build.xml can now be 
updated and uncommented, see the {{"UPDATE AND UNCOMMENT"}} sections,
 - for {{LoaderOptions}} and {{CqlConfigHelper}}, are there docs we want to 
update (i.e. to use `host:port`)?
 - is the {{nativePort}} parameter still needed in 
{{NativeSSTableLoaderClient}}'s constructor? the caller can put that into the 
{{hosts}} parameter, and then in the {{init(..)}} method (line 73) the cluster 
builder can instead be called with {{addContactPointsWithPorts(hosts)}} (see 
the sketch below),
 - I am still looking into the dtests…
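
A minimal sketch of that last suggestion, assuming {{hosts}} becomes a 
collection of {{InetSocketAddress}} with per-host ports already resolved (the 
class and method names here are illustrative, not the actual 
{{NativeSSTableLoaderClient}} code):

{code:java}
import java.net.InetSocketAddress;
import java.util.Collection;

import com.datastax.driver.core.Cluster;

final class ClusterFactory
{
    // With ports resolved into each InetSocketAddress up front, the separate
    // nativePort constructor parameter (and Cluster.Builder.withPort(..))
    // becomes unnecessary.
    static Cluster build(Collection<InetSocketAddress> hosts)
    {
        return Cluster.builder()
                      .addContactPointsWithPorts(hosts)
                      .build();
    }
}
{code}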




bq. … the caller can put that into the {{hosts}} parameter, …

For example, it looks like {{CqlBulkRecordWriter}} already does this (in the 
call to {{resolveHostAddresses}}), while {{BulkLoader}}+{{LoaderOptions}} could 
do the same if {{LoaderOptions.Builder.build()}} looped through its {{hosts}} 
and filled in {{nativePort}} where undefined.
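
A sketch of that loop, assuming a port of 0 marks "undefined" (the marker 
value and the helper itself are illustrative, not the actual {{LoaderOptions}} 
code):

{code:java}
import java.net.InetSocketAddress;
import java.util.HashSet;
import java.util.Set;

final class HostPorts
{
    // Fill in the configured native port for any host that was given without
    // an explicit port (modelled here as port 0).
    static Set<InetSocketAddress> withDefaultPort(Set<InetSocketAddress> hosts, int nativePort)
    {
        Set<InetSocketAddress> resolved = new HashSet<>();
        for (InetSocketAddress host : hosts)
            resolved.add(host.getPort() == 0
                         ? new InetSocketAddress(host.getHostString(), nativePort)
                         : host);
        return resolved;
    }
}
{code}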



> Upgrade C* to use latest guava (27.0)
> -
>
> Key: CASSANDRA-14655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14655
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Dependencies
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Low
>  Labels: 4.0-feature-freeze-review-requested
> Fix For: 4.x
>
>
> C* currently uses guava 23.3. This JIRA is about changing C* to use latest 
> guava (26.0). Originated from a discussion in the mailing list.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14655) Upgrade C* to use latest guava (27.0)

2019-09-24 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936427#comment-16936427
 ] 

mck edited comment on CASSANDRA-14655 at 9/24/19 2:35 PM:
--

[~sumanth.pasupuleti],
 - unit tests look good,
 - I think the {{"cassandra-driver-core"}} lines in build.xml can now be 
updated and uncommented, see the {{"UPDATE AND UNCOMMENT"}} sections,
 - for {{LoaderOptions}} and {{CqlConfigHelper}}, are there docs we want to 
update (i.e. to use `host:port`)?
 - is the {{nativePort}} parameter still needed in 
{{NativeSSTableLoaderClient}}'s constructor? the caller can put that into the 
{{hosts}} parameter, and then in the {{init(..)}} method (line 73) the cluster 
builder can instead be called with {{addContactPointsWithPorts(hosts)}},
 - I am still looking into the dtests…


bq. … the caller can put that into the {{hosts}} parameter, …

For example, it looks like {{CqlBulkRecordWriter}} already does this (in the 
call to {{resolveHostAddresses}}), while {{BulkLoader}}+{{LoaderOptions}} could 
do the same if {{LoaderOptions.Builder.build()}} looped through its {{hosts}} 
and filled in {{nativePort}} where undefined.










[jira] [Commented] (CASSANDRA-14655) Upgrade C* to use latest guava (27.0)

2019-09-23 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936427#comment-16936427
 ] 

mck commented on CASSANDRA-14655:
-

[~sumanth.pasupuleti],
 - unit tests look good,
 - I think the {{"cassandra-driver-core"}} lines in build.xml can now be 
updated and uncommented, see the {{"UPDATE AND UNCOMMENT"}} sections,
 - for {{LoaderOptions}} and {{CqlConfigHelper}}, are there docs we want to 
update (i.e. to use `host:port`)?
 - is the {{nativePort}} parameter still needed in 
{{NativeSSTableLoaderClient}}'s constructor? the caller can put that into the 
{{hosts}} parameter, and then in the {{init(..)}} method (line 73) the cluster 
builder can instead be called with {{addContactPointsWithPorts(hosts)}},
 - I am still looking into the dtests…







[jira] [Commented] (CASSANDRA-14655) Upgrade C* to use latest guava (27.0)

2019-09-23 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936398#comment-16936398
 ] 

mck commented on CASSANDRA-14655:
-

bq.  thanks for bringing this back into light, would be really nice to get this 
into 4.0

it breaks a number of coding practices to cut releases with such a binary 
included, imo.







[jira] [Commented] (CASSANDRA-14655) Upgrade C* to use latest guava (27.0)

2019-09-23 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935930#comment-16935930
 ] 

mck commented on CASSANDRA-14655:
-

||branch||circleci||asf jenkins testall||asf jenkins dtests||
|[trunk|https://github.com/apache/cassandra/compare/trunk...sumanth-pasupuleti:guava_27_trunk]|[circleci|https://circleci.com/gh/sumanth-pasupuleti/workflows/cassandra/tree/guava_27_trunk]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/47//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/47/]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/680//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/680]|









[jira] [Updated] (CASSANDRA-15334) Restore java-driver back to upstream code, using new implementation for dynamic port discovery

2019-09-23 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15334:

Resolution: Duplicate
Status: Resolved  (was: Open)

[~sumanth.pasupuleti] has already done the work for this ticket, ready for 
review, in CASSANDRA-14655.

Specifically, regarding the concern raised in this issue, [~andrew.tolbert] 
describes the problem 
[here|https://issues.apache.org/jira/browse/CASSANDRA-14655?focusedCommentId=16678661&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16678661].

> Restore java-driver back to upstream code, using new implementation for 
> dynamic port discovery
> --
>
> Key: CASSANDRA-15334
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15334
> Project: Cassandra
>  Issue Type: Task
>  Components: Dependencies
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.0-alpha
>
>
>  In Cassandra multiple ports per node was implemented in 
> [CASSANDRA-7544|https://issues.apache.org/jira/browse/CASSANDRA-7544] and in 
> the java-driver implemented under 
> [JAVA-1388|https://datastax-oss.atlassian.net/browse/JAVA-1388]. What's 
> currently included in {{lib/cassandra-driver-core-3.4.0-shaded.jar}} is a 
> custom build of code that is not found in any of the github repo's code 
> (branches or tags). It was built off a [forked 
> branch|https://github.com/datastax/java-driver/pull/931] that was never 
> accepted into the driver. It was implemented instead by the java-driver team 
> in a different [way|https://github.com/datastax/java-driver/pull/1065].






[jira] [Updated] (CASSANDRA-14655) Upgrade C* to use latest guava (27.0)

2019-09-23 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14655:

Reviewers: mck, mck  (was: mck)
   Status: Review In Progress  (was: Patch Available)







[jira] [Updated] (CASSANDRA-15321) Cassandra 4.0-alpha1 released with SNAPSHOT dependencies

2019-09-22 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15321:

  Since Version: 4.0-alpha
Source Control Link: 
https://github.com/apache/cassandra/commit/bc5fc8bc2dc517e2749edd73f6f28be3ce2fdb95
 
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed as bc5fc8bc2dc517e2749edd73f6f28be3ce2fdb95

> Cassandra 4.0-alpha1 released with SNAPSHOT dependencies
> 
>
> Key: CASSANDRA-15321
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15321
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Marvin Froeder
>Assignee: Marvin Froeder
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0-alpha
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> I just noticed that for cassandra 4.0-alpha1, {{cassandra-all}} has a 
> dependency on {{chronicle-core}} version {{1.16.3-SNAPSHOT}} and 
> {{cassandra-driver 3.4.0-SNAPSHOT}}:
> [http://repo1.maven.org/maven2/org/apache/cassandra/cassandra-all/4.0-alpha1/cassandra-all-4.0-alpha1.pom]
> These snapshot dependencies are not available on Maven Central, meaning 
> {{cassandra-all}} can't be used as a dependency for maven projects as is.
>  
> Also, noticed that {{carrotsearch}} was missing from the dependency list.
>  
> PR available on github
> https://github.com/apache/cassandra/pull/358
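
To illustrate the failure mode described above, a hypothetical downstream 
{{pom.xml}} fragment that fails to resolve, because the published 
{{cassandra-all}} pom references the SNAPSHOT artifacts:

{code:xml}
<!-- Hypothetical consumer of cassandra-all 4.0-alpha1. Resolution fails
     because the published pom references chronicle-core 1.16.3-SNAPSHOT and
     cassandra-driver 3.4.0-SNAPSHOT, neither of which is on Maven Central. -->
<dependency>
    <groupId>org.apache.cassandra</groupId>
    <artifactId>cassandra-all</artifactId>
    <version>4.0-alpha1</version>
</dependency>
{code}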






[jira] [Updated] (CASSANDRA-15321) Cassandra 4.0-alpha1 released with SNAPSHOT dependencies

2019-09-22 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15321:

Status: Ready to Commit  (was: Review In Progress)







[jira] [Updated] (CASSANDRA-15333) The release process does not incremental the version, nor document the need to

2019-09-22 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15333:

  Fix Version/s: 4.0-alpha
  Since Version: 4.0-alpha
Source Control Link: 
https://github.com/apache/cassandra/commit/b0f9d72840ec13030ad97ad77bf7478a079c2f6f
 
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed as b0f9d72840ec13030ad97ad77bf7478a079c2f6f

> The release process does not incremental the version, nor document the need to
> --
>
> Key: CASSANDRA-15333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15333
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Website
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> Incrementing the {{`base.version`}} in {{build.xml}} has remained a manual, 
> and easily forgotten, part of the release process.
> This patch adds how and when to perform that step to the existing 
> release process documentation: 
>  
> https://github.com/apache/cassandra/compare/trunk...thelastpickle:mck/trunk_15333
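
For reference, the manual step in question is bumping this {{build.xml}} 
property; a sketch, with an illustrative value:

{code:xml}
<!-- build.xml: base.version must currently be incremented by hand after each
     release; the value shown here is only an example. -->
<property name="base.version" value="4.0-alpha2"/>
{code}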






[jira] [Comment Edited] (CASSANDRA-15321) Cassandra 4.0-alpha1 released with SNAPSHOT dependencies

2019-09-22 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935259#comment-16935259
 ] 

mck edited comment on CASSANDRA-15321 at 9/22/19 8:47 AM:
--

[~velobr] has taken the cassandra-driver-core update out of the patch. The 
names of the jars under {{lib/}} are not related to the fault described, 
although consistency between the declared dependency versions and the bundled 
jar files is still desirable.

The cassandra-driver-core involves a more complicated issue. In Cassandra, 
multiple ports per node were implemented in 
[CASSANDRA-7544|https://issues.apache.org/jira/browse/CASSANDRA-7544], and in 
the java-driver under 
[JAVA-1388|https://datastax-oss.atlassian.net/browse/JAVA-1388]. What's 
currently included in {{lib/cassandra-driver-core-3.4.0-shaded.jar}} is a 
custom build of code that is not found in the github repo (branches or tags). 
It was built off a [forked 
branch|https://github.com/datastax/java-driver/pull/931] that was never 
accepted into the driver; the java-driver team implemented it instead in a 
different [way|https://github.com/datastax/java-driver/pull/1065]. Restoring 
the version of the java-driver used has been filed as 
[CASSANDRA-15334|https://issues.apache.org/jira/browse/CASSANDRA-15334].









[jira] [Updated] (CASSANDRA-15334) Restore java-driver back to upstream code, using new implementation for dynamic port discovery

2019-09-22 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15334:

Change Category: Operability
 Complexity: Normal
  Fix Version/s: 4.0-alpha
 Status: Open  (was: Triage Needed)







[jira] [Created] (CASSANDRA-15334) Restore java-driver back to upstream code, using new implementation for dynamic port discovery

2019-09-22 Thread mck (Jira)
mck created CASSANDRA-15334:
---

 Summary: Restore java-driver back to upstream code, using new 
implementation for dynamic port discovery
 Key: CASSANDRA-15334
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15334
 Project: Cassandra
  Issue Type: Task
  Components: Dependencies
Reporter: mck
Assignee: mck



 In Cassandra multiple ports per node was implemented in 
[CASSANDRA-7544|https://issues.apache.org/jira/browse/CASSANDRA-7544] and in 
the java-driver implemented under 
[JAVA-1388|https://datastax-oss.atlassian.net/browse/JAVA-1388]. What's 
currently included in {{lib/cassandra-driver-core-3.4.0-shaded.jar}} is a 
custom build of code that is not found in any of the github repo's code 
(branches or tags). It was built off a [forked 
branch|https://github.com/datastax/java-driver/pull/931] that was never 
accepted into the driver. It was implemented instead by the java-driver team 
in a different [way|https://github.com/datastax/java-driver/pull/1065].






[jira] [Commented] (CASSANDRA-15321) Cassandra 4.0-alpha1 released with SNAPSHOT dependencies

2019-09-22 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935259#comment-16935259
 ] 

mck commented on CASSANDRA-15321:
-

[~velobr] has taken the cassandra-driver-core update out of the patch. The 
names of the jars under {{lib/}} are not related to the fault described, 
although consistency between the declared dependency versions and the bundled 
jar files is still desirable.

The cassandra-driver-core involves a more complicated issue. In Cassandra, 
multiple ports per node were implemented in 
[CASSANDRA-7544|https://issues.apache.org/jira/browse/CASSANDRA-7544], and in 
the java-driver under 
[JAVA-1388|https://datastax-oss.atlassian.net/browse/JAVA-1388]. What's 
currently included in {{lib/cassandra-driver-core-3.4.0-shaded.jar}} is a 
custom build of code that is not found in the github repo (branches or tags). 
It was built off a [forked 
branch|https://github.com/datastax/java-driver/pull/931] that was never 
accepted into the driver; the java-driver team implemented it instead in a 
different [way|https://github.com/datastax/java-driver/pull/1065].







[jira] [Updated] (CASSANDRA-15333) The release process does not incremental the version, nor document the need to

2019-09-21 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15333:

Impacts:   (was: None)
Test and Documentation Plan: is a fix to documentation
 Status: Patch Available  (was: Open)







[jira] [Updated] (CASSANDRA-15333) The release process does not incremental the version, nor document the need to

2019-09-21 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15333:

 Bug Category: Parent values: Correctness(12982)
   Complexity: Low Hanging Fruit
Discovered By: User Report
 Severity: Low
   Status: Open  (was: Triage Needed)

[~mshuler], have you had an opportunity to review this? It is only a few lines 
of docs.







[jira] [Assigned] (CASSANDRA-15333) The release process does not incremental the version, nor document the need to

2019-09-21 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck reassigned CASSANDRA-15333:
---

Assignee: mck







[jira] [Updated] (CASSANDRA-15333) The release process does not incremental the version, nor document the need to

2019-09-21 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15333:

Description: 
Incrementing the {{`base.version`}} in {{build.xml}} has remained a manual, and 
easily forgotten, part of the release process.

This patch adds how and when to perform that step to the existing release 
process documentation: 
 
https://github.com/apache/cassandra/compare/trunk...thelastpickle:mck/trunk_15333









[jira] [Created] (CASSANDRA-15333) The release process does not incremental the version, nor document the need to

2019-09-21 Thread mck (Jira)
mck created CASSANDRA-15333:
---

 Summary: The release process does not incremental the version, nor 
document the need to
 Key: CASSANDRA-15333
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15333
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation/Website
Reporter: mck


Incrementing the {{`base.version`}} in {{build.xml}} has remained a manual, and 
easily forgotten, part of the release process.

This PR adds how and when to perform that step to the existing release 
process documentation.






[jira] [Updated] (CASSANDRA-15321) Cassandra 4.0-alpha1 released with SNAPSHOT dependencies

2019-09-09 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15321:

Test and Documentation Plan: maven build using cassandra-all passing
 Status: Patch Available  (was: Open)







[jira] [Updated] (CASSANDRA-15321) Cassandra 4.0-alpha1 released with SNAPSHOT dependencies

2019-09-09 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15321:

Reviewers: mck, mck  (was: mck)
   Status: Review In Progress  (was: Patch Available)







[jira] [Updated] (CASSANDRA-15321) Cassandra 4.0-alpha1 released with SNAPSHOT dependencies

2019-09-09 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15321:

 Bug Category: Parent values: Code(13163)
   Complexity: Low Hanging Fruit
Discovered By: User Report
Fix Version/s: 4.0-alpha
Reviewers: mck
 Severity: Normal
 Assignee: Marvin Froeder
   Status: Open  (was: Triage Needed)







[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-09-08 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Source Control Link: 
https://github.com/apache/cassandra/commit/068d2d37c6fbdb60546821c4d408a84161fd1cb6
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed as 068d2d37c6fbdb60546821c4d408a84161fd1cb6

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace can not be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. New datacenters 
> (or lift-and-shifting a cluster via datacenter migration) therefore has to be 
> done using a dummy keyspace that duplicates the replication strategy+factor 
> of the real keyspace. This gets even more difficult come version 4.0, as the 
> replica factor can not even be defined in new datacenters before those 
> datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replication strategy is per datacenter, i.e. NTS. This can be 
> done with an {{allocate_tokens_for_dc_rf}} option.
> It may also be worth considering whether {{allocate_tokens_for_dc_rf=3}} 
> should become the default, as this is the replication factor for the vast 
> majority of datacenters in production. I suspect this would be a good 
> improvement over the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]
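
As a sketch, the proposed option would be set in {{cassandra.yaml}} like so 
(the value is illustrative):

{code:yaml}
# Proposed option (sketch): allocate tokens presuming NTS-style replication
# with this replication factor in the node's local datacenter, instead of
# looking up a keyspace via allocate_tokens_for_keyspace.
allocate_tokens_for_dc_rf: 3
{code}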






[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-09-08 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Reviewers: Branimir Lambov  (was: Branimir Lambov, mck)







[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-09-08 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Fix Version/s: (was: 4.x)
   4.0-alpha







[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-09-08 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Status: Ready to Commit  (was: Review In Progress)







[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-09-08 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Reviewers: Branimir Lambov, mck  (was: Branimir Lambov)
   Status: Review In Progress  (was: Patch Available)







[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-30 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Impacts: Docs
Test and Documentation Plan: unit test, manual testing
 Status: Patch Available  (was: In Progress)

I have added a unit test in BootStrapperTest. It does not do that much, as 
SummaryStatistics is not available when using 
`allocate_tokens_for_local_replication_factor`.









[jira] [Comment Edited] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-30 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905336#comment-16905336
 ] 

mck edited comment on CASSANDRA-15260 at 8/30/19 10:48 AM:
---

Thanks [~blambov]. The rename is done.


||branch||circleci||asf jenkins testall||
|[CASSANDRA-15260|https://github.com/apache/cassandra/compare/trunk...thelastpickle:mck/trunk__allocate_tokens_for_dc_rf]|[circleci|https://circleci.com/gh/thelastpickle/workflows/cassandra/tree/mck%2Ftrunk__allocate_tokens_for_dc_rf]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/45//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/45/]|

I've opened the ticket, and will 'Submit Patch' it after I get some unit tests 
in.


was (Author: michaelsembwever):
Thanks [~blambov]. The rename is done.


||branch||circleci||asf jenkins testall||
|[CASSANDRA-15260|https://github.com/apache/cassandra/compare/trunk...thelastpickle:mck/trunk__allocate_tokens_for_dc_rf]|[circleci|https://circleci.com/gh/thelastpickle/workflows/cassandra/tree/mck%2Ftrunk__allocate_tokens_for_dc_rf]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43/]|

I've opened the ticket, and will 'Submit Patch' it after I get some unit tests 
in.

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replication factor specified in the 
> current datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters because, in practice, the new datacenter's nodes 
> all need to be up and running before the keyspace can replicate data into 
> it. Adding new datacenters (or lift-and-shifting a cluster via datacenter 
> migration) therefore has to be done using a dummy keyspace that duplicates 
> the replication strategy and factor of the real keyspace. This gets even 
> more difficult come version 4.0, as the replication factor cannot even be 
> defined for new datacenters before those datacenters are up and running.
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replication strategy is per datacenter, i.e. NTS. This can be 
> done with an {{allocate_tokens_for_dc_rf}} option.
> It may also be worth considering whether {{allocate_tokens_for_dc_rf=3}} 
> should become the default, as this is the replication factor for the vast 
> majority of production datacenters. I suspect this would be a good 
> improvement over the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]
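
(For illustration, a minimal cassandra.yaml sketch of how the proposed option 
would be set. The option name here follows the ticket title; per the comments 
above it was later renamed, appearing as 
{{allocate_tokens_for_local_replication_factor}}. Values are examples only.)

{noformat}
# cassandra.yaml sketch, illustrative values only
num_tokens: 16

# Proposed: allocate this node's tokens assuming NetworkTopologyStrategy with
# the given replication factor in the local datacenter, instead of requiring
# an existing keyspace as allocate_tokens_for_keyspace does.
allocate_tokens_for_dc_rf: 3
{noformat}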



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-27 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916921#comment-16916921
 ] 

mck commented on CASSANDRA-15172:
-

[~Sagges],

 this bug comes from existing thrift (legacy) tables where range tombstones 
were used.

The NPE [~ferozshaik...@gmail.com] reported is a separate bug, even though it 
also comes from legacy thrift tables with range tombstones. Unfortunately, the 
fix for this ticket will not solve the NPE bug. 
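
(To make the trigger concrete, an illustrative example with a hypothetical 
schema: the thrift-era pattern that writes range tombstones is a 
{{COMPACT STORAGE}} table deleted by a clustering-key prefix.)

{noformat}
-- hypothetical schema, for illustration only
CREATE TABLE ks.legacy (k int, c1 int, c2 int, v int,
                        PRIMARY KEY (k, c1, c2)) WITH COMPACT STORAGE;

-- Deleting by a prefix of the clustering key is stored as a range tombstone
-- covering all (c1 = 1, c2 = *) rows. During a mixed-version upgrade, read
-- responses for pre-3.0 replicas are re-encoded through LegacyLayout (the
-- serializedSizeAsLegacyPartition frames in the quoted trace below), which
-- is where the exception is thrown.
DELETE FROM ks.legacy WHERE k = 0 AND c1 = 1;
{noformat}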

> LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException
> 
>
> Key: CASSANDRA-15172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Shalom
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19, 3.11.5
>
>
> Hi All,
> This is the first time I've opened an issue, so apologies if I'm not 
> following the rules properly.
>  
> After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
> lot of AbstractLocalAwareExecutorService exceptions. This happened right 
> after the node successfully started up with the new 3.11.4 binaries. 
> {noformat}
> INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
> proceeding
> INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
> using native Epoll event loop
> INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
> [netty-buffer=netty-buffer-4.0.44.Final.452812a, 
> netty-codec=netty-codec-4.0.44.Final.452812a, 
> netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
> netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
> netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
> netty-common=netty-common-4.0.44.Final.452812a, 
> netty-handler=netty-handler-4.0.44.Final.452812a, 
> netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
> netty-transport=netty-transport-4.0.44.Final.452812a, 
> netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a,
>  netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
> netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
> netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
> INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
> CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 
> AuthCache.java:161 - (Re)initializing PermissionsCache (validity 
> period/update interval/max entries) (2000/2000/1000)
> INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 
> - Converting legacy permissions data
> INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
> INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
> INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
> OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6
> WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 1
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
>     at 
> org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
>     at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
>     at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
>     at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
>     at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:953)
>     at 
> org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:929)
>     at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:62)
>     at 
> 

[jira] [Updated] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-22 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15172:

  Since Version: 3.0 alpha 1
Source Control Link: 
https://github.com/apache/cassandra/commit/2b10a5f2b5e62f2900119a37e91637916e8b23df
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed as 2b10a5f2b5e62f2900119a37e91637916e8b23df

> LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException
> 
>
> Key: CASSANDRA-15172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Shalom
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19, 3.11.5
>
>
> Hi All,
> This is the first time I've opened an issue, so apologies if I'm not 
> following the rules properly.
>  
> After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
> lot of AbstractLocalAwareExecutorService exceptions. This happened right 
> after the node successfully started up with the new 3.11.4 binaries. 
> {noformat}
> INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
> proceeding
> INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
> using native Epoll event loop
> INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
> [netty-buffer=netty-buffer-4.0.44.Final.452812a, 
> netty-codec=netty-codec-4.0.44.Final.452812a, 
> netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
> netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
> netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
> netty-common=netty-common-4.0.44.Final.452812a, 
> netty-handler=netty-handler-4.0.44.Final.452812a, 
> netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
> netty-transport=netty-transport-4.0.44.Final.452812a, 
> netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a,
>  netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
> netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
> netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
> INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
> CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 
> AuthCache.java:161 - (Re)initializing PermissionsCache (validity 
> period/update interval/max entries) (2000/2000/1000)
> INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 
> - Converting legacy permissions data
> INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
> INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
> INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
> OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6
> WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 1
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
>     at 
> org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
>     at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
>     at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
>     at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
>     at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:953)
>     at 
> org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:929)
>     at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:62)
>     at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
>     at 
> 

[jira] [Updated] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-22 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15172:

Status: Ready to Commit  (was: Review In Progress)

> LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException
> 
>
> Key: CASSANDRA-15172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Shalom
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19, 3.11.5
>
>
> Hi All,
> This is the first time I've opened an issue, so apologies if I'm not 
> following the rules properly.
>  
> After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
> lot of AbstractLocalAwareExecutorService exceptions. This happened right 
> after the node successfully started up with the new 3.11.4 binaries. 
> {noformat}
> INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
> proceeding
> INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
> using native Epoll event loop
> INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
> [netty-buffer=netty-buffer-4.0.44.Final.452812a, 
> netty-codec=netty-codec-4.0.44.Final.452812a, 
> netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
> netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
> netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
> netty-common=netty-common-4.0.44.Final.452812a, 
> netty-handler=netty-handler-4.0.44.Final.452812a, 
> netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
> netty-transport=netty-transport-4.0.44.Final.452812a, 
> netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a,
>  netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
> netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
> netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
> INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
> CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 
> AuthCache.java:161 - (Re)initializing PermissionsCache (validity 
> period/update interval/max entries) (2000/2000/1000)
> INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 
> - Converting legacy permissions data
> INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
> INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
> INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
> OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6
> WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 1
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
>     at 
> org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
>     at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
>     at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
>     at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
>     at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:953)
>     at 
> org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:929)
>     at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:62)
>     at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>     at 
> 

[jira] [Updated] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-22 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15172:

Fix Version/s: (was: 4.0)

> LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException
> 
>
> Key: CASSANDRA-15172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Shalom
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19, 3.11.5
>
>
> Hi All,
> This is the first time I've opened an issue, so apologies if I'm not 
> following the rules properly.
>  
> After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
> lot of AbstractLocalAwareExecutorService exceptions. This happened right 
> after the node successfully started up with the new 3.11.4 binaries. 
> {noformat}
> INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
> proceeding
> INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
> using native Epoll event loop
> INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
> [netty-buffer=netty-buffer-4.0.44.Final.452812a, 
> netty-codec=netty-codec-4.0.44.Final.452812a, 
> netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
> netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
> netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
> netty-common=netty-common-4.0.44.Final.452812a, 
> netty-handler=netty-handler-4.0.44.Final.452812a, 
> netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
> netty-transport=netty-transport-4.0.44.Final.452812a, 
> netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a,
>  netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
> netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
> netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
> INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
> CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 
> AuthCache.java:161 - (Re)initializing PermissionsCache (validity 
> period/update interval/max entries) (2000/2000/1000)
> INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 
> - Converting legacy permissions data
> INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
> INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
> INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
> OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6
> WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 1
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
>     at 
> org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
>     at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
>     at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
>     at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
>     at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:953)
>     at 
> org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:929)
>     at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:62)
>     at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>     at 
> 

[jira] [Updated] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-22 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15172:

Fix Version/s: 4.0
   3.11.5
   3.0.19

> LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException
> 
>
> Key: CASSANDRA-15172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Shalom
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19, 3.11.5, 4.0
>
>
> Hi All,
> This is the first time I've opened an issue, so apologies if I'm not 
> following the rules properly.
>  
> After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
> lot of AbstractLocalAwareExecutorService exceptions. This happened right 
> after the node successfully started up with the new 3.11.4 binaries. 
> {noformat}
> INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
> proceeding
> INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
> using native Epoll event loop
> INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
> [netty-buffer=netty-buffer-4.0.44.Final.452812a, 
> netty-codec=netty-codec-4.0.44.Final.452812a, 
> netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
> netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
> netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
> netty-common=netty-common-4.0.44.Final.452812a, 
> netty-handler=netty-handler-4.0.44.Final.452812a, 
> netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
> netty-transport=netty-transport-4.0.44.Final.452812a, 
> netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a,
>  netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
> netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
> netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
> INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
> CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 
> AuthCache.java:161 - (Re)initializing PermissionsCache (validity 
> period/update interval/max entries) (2000/2000/1000)
> INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 
> - Converting legacy permissions data
> INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
> INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
> INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
> OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6
> WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 1
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
>     at 
> org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
>     at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
>     at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
>     at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
>     at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:953)
>     at 
> org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:929)
>     at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:62)
>     at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>     

[jira] [Commented] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-21 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912704#comment-16912704
 ] 

mck commented on CASSANDRA-15172:
-


||branch||circleci||asf jenkins testall||asf jenkins dtests||
|[15172-3.0|https://github.com/apache/cassandra/compare/trunk...belliottsmith:15172-3.0]|[circleci|https://circleci.com/gh/belliottsmith/workflows/cassandra/tree/15172-3.0]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/44//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/44/]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/679//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/679]|



> LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException
> 
>
> Key: CASSANDRA-15172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Shalom
>Assignee: Benedict
>Priority: Normal
>
> Hi All,
> This is the first time I've opened an issue, so apologies if I'm not 
> following the rules properly.
>  
> After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
> lot of AbstractLocalAwareExecutorService exceptions. This happened right 
> after the node successfully started up with the new 3.11.4 binaries. 
> {noformat}
> INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
> proceeding
> INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
> using native Epoll event loop
> INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
> [netty-buffer=netty-buffer-4.0.44.Final.452812a, 
> netty-codec=netty-codec-4.0.44.Final.452812a, 
> netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
> netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
> netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
> netty-common=netty-common-4.0.44.Final.452812a, 
> netty-handler=netty-handler-4.0.44.Final.452812a, 
> netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
> netty-transport=netty-transport-4.0.44.Final.452812a, 
> netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a,
>  netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
> netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
> netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
> INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
> CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 
> AuthCache.java:161 - (Re)initializing PermissionsCache (validity 
> period/update interval/max entries) (2000/2000/1000)
> INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 
> - Converting legacy permissions data
> INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
> INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
> INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
> OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6
> WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 1
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
>     at 
> org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
>     at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
>     at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
>     at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
>     at 
> 

[jira] [Updated] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-21 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15172:

Reviewers: mck  (was: mck)
   Status: Review In Progress  (was: Patch Available)

> LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException
> 
>
> Key: CASSANDRA-15172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Shalom
>Assignee: Benedict
>Priority: Normal
>
> Hi All,
> This is the first time I've opened an issue, so apologies if I'm not 
> following the rules properly.
>  
> After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
> lot of AbstractLocalAwareExecutorService exceptions. This happened right 
> after the node successfully started up with the new 3.11.4 binaries. 
> {noformat}
> INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
> proceeding
> INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
> using native Epoll event loop
> INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
> [netty-buffer=netty-buffer-4.0.44.Final.452812a, 
> netty-codec=netty-codec-4.0.44.Final.452812a, 
> netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
> netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
> netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
> netty-common=netty-common-4.0.44.Final.452812a, 
> netty-handler=netty-handler-4.0.44.Final.452812a, 
> netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
> netty-transport=netty-transport-4.0.44.Final.452812a, 
> netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a,
>  netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
> netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
> netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
> INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
> CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 
> AuthCache.java:161 - (Re)initializing PermissionsCache (validity 
> period/update interval/max entries) (2000/2000/1000)
> INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 
> - Converting legacy permissions data
> INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
> INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
> INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
> OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6
> WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 1
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
>     at 
> org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
>     at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
>     at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
>     at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
>     at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:953)
>     at 
> org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:929)
>     at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:62)
>     at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>     

[jira] [Commented] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-21 Thread mck (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912357#comment-16912357
 ] 

mck commented on CASSANDRA-15172:
-

[~benedict], we've seen this in the wild as well, with an upgrade from 2.2.14 
to 3.11.4.
 I am jumping in to test and review it.

> LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException
> 
>
> Key: CASSANDRA-15172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15172
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Shalom
>Assignee: Benedict
>Priority: Normal
>
> Hi All,
> This is the first time I've opened an issue, so apologies if I'm not 
> following the rules properly.
>  
> After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
> lot of AbstractLocalAwareExecutorService exceptions. This happened right 
> after the node successfully started up with the new 3.11.4 binaries. 
> {noformat}
> INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
> proceeding
> INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
> using native Epoll event loop
> INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
> [netty-buffer=netty-buffer-4.0.44.Final.452812a, 
> netty-codec=netty-codec-4.0.44.Final.452812a, 
> netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
> netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
> netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
> netty-common=netty-common-4.0.44.Final.452812a, 
> netty-handler=netty-handler-4.0.44.Final.452812a, 
> netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
> netty-transport=netty-transport-4.0.44.Final.452812a, 
> netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a,
>  netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
> netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
> netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
> INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
> CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it
> INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 
> AuthCache.java:161 - (Re)initializing PermissionsCache (validity 
> period/update interval/max entries) (2000/2000/1000)
> INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 
> - Converting legacy permissions data
> INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
> INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
> OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
> INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
> OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6
> WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: 1
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
>     at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
>     at 
> org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
>     at 
> org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
>     at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
>     at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
>     at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
>     at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:953)
>     at 
> org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:929)
>     at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:62)
>     at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at 
> 

[jira] [Updated] (CASSANDRA-15172) LegacyLayout RangeTombstoneList throws IndexOutOfBoundsException

2019-08-21 Thread mck (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15172:

Description: 
Hi All,

This is the first time I've opened an issue, so apologies if I'm not following 
the rules properly.

 

After upgrading a node from version 2.1.21 to 3.11.4, we've started seeing a 
lot of AbstractLocalAwareExecutorService exceptions. This happened right after 
the node successfully started up with the new 3.11.4 binaries. 
{noformat}
INFO  [main] 2019-06-05 04:41:37,730 Gossiper.java:1715 - No gossip backlog; 
proceeding
INFO  [main] 2019-06-05 04:41:38,036 NativeTransportService.java:70 - Netty 
using native Epoll event loop
INFO  [main] 2019-06-05 04:41:38,117 Server.java:155 - Using Netty Version: 
[netty-buffer=netty-buffer-4.0.44.Final.452812a, 
netty-codec=netty-codec-4.0.44.Final.452812a, 
netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, 
netty-codec-http=netty-codec-http-4.0.44.Final.452812a, 
netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, 
netty-common=netty-common-4.0.44.Final.452812a, 
netty-handler=netty-handler-4.0.44.Final.452812a, 
netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, 
netty-transport=netty-transport-4.0.44.Final.452812a, 
netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, 
netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, 
netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, 
netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
INFO  [main] 2019-06-05 04:41:38,118 Server.java:156 - Starting listening for 
CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO  [main] 2019-06-05 04:41:38,179 CassandraDaemon.java:556 - Not starting 
RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool 
(enablethrift) to start it
INFO  [Native-Transport-Requests-21] 2019-06-05 04:41:39,145 AuthCache.java:161 
- (Re)initializing PermissionsCache (validity period/update interval/max 
entries) (2000/2000/1000)
INFO  [OptionalTasks:1] 2019-06-05 04:41:39,729 CassandraAuthorizer.java:409 - 
Converting legacy permissions data
INFO  [HANDSHAKE-/10.10.10.8] 2019-06-05 04:41:39,808 
OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.8
INFO  [HANDSHAKE-/10.10.10.9] 2019-06-05 04:41:39,808 
OutboundTcpConnection.java:561 - Handshaking version with /10.10.10.9
INFO  [HANDSHAKE-dc1_02/10.10.10.6] 2019-06-05 04:41:39,809 
OutboundTcpConnection.java:561 - Handshaking version with dc1_02/10.10.10.6

WARN  [ReadStage-2] 2019-06-05 04:41:39,857 
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
Thread[ReadStage-2,5,main]: {}
java.lang.ArrayIndexOutOfBoundsException: 1
    at 
org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
    at 
org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSizeCompound(LegacyLayout.java:2545)
    at 
org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.serializedSize(LegacyLayout.java:2522)
    at 
org.apache.cassandra.db.LegacyLayout.serializedSizeAsLegacyPartition(LegacyLayout.java:565)
    at 
org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:446)
    at 
org.apache.cassandra.db.ReadResponse$Serializer.serializedSize(ReadResponse.java:352)
    at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:171)
    at 
org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:77)
    at 
org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:802)
    at 
org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:953)
    at 
org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:929)
    at 
org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:62)
    at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
    at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:114)
    at java.lang.Thread.run(Thread.java:745)
 {noformat}

 

After several of the above warnings, the following warning appeared as well:

 {noformat}
WARN  [ReadStage-9] 2019-06-05 04:42:04,369 
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
Thread[ReadStage-9,5,main]: {}
java.lang.ArrayIndexOutOfBoundsException: null
WARN  [ReadStage-11] 2019-06-05 04:42:04,381 
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 

[jira] [Comment Edited] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-13 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905336#comment-16905336
 ] 

mck edited comment on CASSANDRA-15260 at 8/13/19 2:52 PM:
--

Thanks [~blambov]. The rename is done.


||branch||circleci||asf jenkins testall||
|[CASSANDRA-15260|https://github.com/apache/cassandra/compare/trunk...thelastpickle:mck/trunk__allocate_tokens_for_dc_rf]|[circleci|https://circleci.com/gh/thelastpickle/workflows/cassandra/tree/mck%2Ftrunk__allocate_tokens_for_dc_rf]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43/]|

I've opened the ticket, and will 'Submit Patch' it after I get some unit tests 
in.


was (Author: michaelsembwever):
Thanks [~blambov]. The rename is done.


||branch||circleci||asf jenkins testall||
|[CASSANDRA-15260|https://github.com/thelastpickle/cassandra/commit/4513af58a532b91ab4449161a79e70f78b7ebcfc]|[circleci|https://circleci.com/gh/thelastpickle/workflows/cassandra/tree/mck%2Ftrunk__allocate_tokens_for_dc_rf]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43/]|

I've opened the ticket, and will 'Submit Patch' it after I get some unit tests 
in.

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replication factor specified in the 
> current datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters because, in practice, the new datacenter's nodes 
> all need to be up and running before the keyspace can replicate data into 
> it. Adding new datacenters (or lift-and-shifting a cluster via datacenter 
> migration) therefore has to be done using a dummy keyspace that duplicates 
> the replication strategy and factor of the real keyspace. This gets even 
> more difficult come version 4.0, as the replication factor cannot even be 
> defined for new datacenters before those datacenters are up and running.
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replication strategy is per datacenter, i.e. NTS. This can be 
> done with an {{allocate_tokens_for_dc_rf}} option.
> It may also be worth considering whether {{allocate_tokens_for_dc_rf=3}} 
> should become the default, as this is the replication factor for the vast 
> majority of production datacenters. I suspect this would be a good 
> improvement over the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-12 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905336#comment-16905336
 ] 

mck commented on CASSANDRA-15260:
-

Thanks [~blambov]. The rename is done.


||branch||circleci||asf jenkins testall||
|[CASSANDRA-15260|https://github.com/thelastpickle/cassandra/commit/4513af58a532b91ab4449161a79e70f78b7ebcfc]|[circleci|https://circleci.com/gh/thelastpickle/workflows/cassandra/tree/mck%2Ftrunk__allocate_tokens_for_dc_rf]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43/]|

I've opened the ticket, and will transition it to 'Submit Patch' after I get 
some unit tests in.

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replication factor specified in the 
> current datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters because, in practice, the new datacenter's nodes 
> all need to be up and running before the keyspace can replicate data into 
> it. Adding new datacenters (or lift-and-shifting a cluster via datacenter 
> migration) therefore has to be done using a dummy keyspace that duplicates 
> the replication strategy and factor of the real keyspace. This gets even 
> more difficult come version 4.0, as the replication factor cannot even be 
> defined for new datacenters before those datacenters are up and running.
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replication strategy is per datacenter, i.e. NTS. This can be 
> done with an {{allocate_tokens_for_dc_rf}} option.
> It may also be worth considering whether {{allocate_tokens_for_dc_rf=3}} 
> should become the default, as this is the replication factor for the vast 
> majority of production datacenters. I suspect this would be a good 
> improvement over the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-12 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905336#comment-16905336
 ] 

mck edited comment on CASSANDRA-15260 at 8/12/19 4:05 PM:
--

Thanks [~blambov]. The rename is done.


||branch||circleci||asf jenkins testall||
|[CASSANDRA-15260|https://github.com/thelastpickle/cassandra/commit/4513af58a532b91ab4449161a79e70f78b7ebcfc]|[circleci|https://circleci.com/gh/thelastpickle/workflows/cassandra/tree/mck%2Ftrunk__allocate_tokens_for_dc_rf]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43/]|

I've opened the ticket, and will 'Submit Patch' it after I get some unit tests 
in.


was (Author: michaelsembwever):
Thanks [~blambov]. The rename is done.


||branch||circleci||asf jenkins testall||
|[CASSANDRA-15260|https://github.com/thelastpickle/cassandra/commit/4513af58a532b91ab4449161a79e70f78b7ebcfc]|[circleci|https://circleci.com/gh/thelastpickle/workflows/cassandra/tree/mck%2Ftrunk__allocate_tokens_for_dc_rf]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/43/]|

I've opened the ticket, and will transition it to 'Submit Patch' after I get 
some unit tests in.

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, ie NTS. This can be done 
> with the use of an {{allocate_tokens_for_dc_rf}} option.
> It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
> becomes the default, as this is the replication factor for the vast majority 
> of datacenters in production. I suspect this would be a good improvement over 
> the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-12 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

 Complexity: Low Hanging Fruit
Change Category: Operability
 Status: Open  (was: Triage Needed)

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, ie NTS. This can be done 
> with the use of an {{allocate_tokens_for_dc_rf}} option.
> It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
> becomes the default, as this is the replication factor for the vast majority 
> of datacenters in production. I suspect this would be a good improvement over 
> the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-09 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904095#comment-16904095
 ] 

mck commented on CASSANDRA-15260:
-

[~blambov], in the context of this thread 
https://lists.apache.org/thread.html/56435ee11852ea842443d462500277eebe76743e6657e0cfdd7d67df@%3Cdev.cassandra.apache.org%3E
would you agree with the default of `allocate_tokens_for_local_rf=3` if 
`num_tokens=16` also became the default?
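
For illustration only, a sketch of what those combined defaults could look like 
in cassandra.yaml (the option name here follows the DSE-style naming under 
discussion and is not final):

{code}
# Hypothetical cassandra.yaml defaults under discussion (illustrative only)
num_tokens: 16                    # fewer, deliberately-placed vnodes
allocate_tokens_for_local_rf: 3   # assume NTS with RF=3 in the local DC,
                                  # no keyspace definition or lookup needed
{code}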

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, ie NTS. This can be done 
> with the use of an {{allocate_tokens_for_dc_rf}} option.
> It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
> becomes the default, as this is the replication factor for the vast majority 
> of datacenters in production. I suspect this would be a good improvement over 
> the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-09 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14952:

  Fix Version/s: (was: 3.0.x)
 4.0
 3.11.5
 3.0.19
Source Control Link: 
https://github.com/apache/cassandra/commit/2374a74eba6a4df84f9bda3fd311916c820e9cd6
  Since Version: 3.0 alpha 1
 Status: Resolved  (was: Ready to Commit)
 Resolution: Fixed

Committed as 2374a74eba6a4df84f9bda3fd311916c820e9cd6

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.19, 3.11.5, 4.0
>
>
> Received the following NPE while bootstrapping the very first node in the new 
> datacenter with the {{allocate_tokens_for_keyspace}} yaml option:
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property with 
> {{NetworkTopologyStrategy}}, say {NetworkTopologyStrategy, 'dc1': 1, 'dc2': 1}
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878],
>  which results in an NPE.
> Fix:
>  Since this is applicable only to the very first node in a new DC, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>  final int replicas = rs.getReplicationFactor(dc);
>  
>  Topology topology = tokenMetadata.getTopology();
> -int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +int racks = 1;
> +if (topology.getDatacenterRacks().get(dc) != null)
> +{
> +racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +}
>  
>  if (racks >= replicas)
>  {
> {code}
> Let me know your comments.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: 

[jira] [Updated] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-09 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14952:

Authors: Jaydeepkumar Chovatia, mck  (was: Jaydeepkumar Chovatia)

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received the following NPE while bootstrapping the very first node in the new 
> datacenter with the {{allocate_tokens_for_keyspace}} yaml option:
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property with 
> {{NetworkTopologyStrategy}}, say {NetworkTopologyStrategy, 'dc1': 1, 'dc2': 1}
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878],
>  which results in an NPE.
> Fix:
>  Since this is applicable only to the very first node in a new DC, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>  final int replicas = rs.getReplicationFactor(dc);
>  
>  Topology topology = tokenMetadata.getTopology();
> -int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +int racks = 1;
> +if (topology.getDatacenterRacks().get(dc) != null)
> +{
> +racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +}
>  
>  if (racks >= replicas)
>  {
> {code}
> Let me know your comments.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-09 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14952:

Status: Ready to Commit  (was: Review In Progress)

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received the following NPE while bootstrapping the very first node in the new 
> datacenter with the {{allocate_tokens_for_keyspace}} yaml option:
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property with 
> {{NetworkTopologyStrategy}}, say {NetworkTopologyStrategy, 'dc1': 1, 'dc2': 1}
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878],
>  which results in an NPE.
> Fix:
>  Since this is applicable only to the very first node in a new DC, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>  final int replicas = rs.getReplicationFactor(dc);
>  
>  Topology topology = tokenMetadata.getTopology();
> -int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +int racks = 1;
> +if (topology.getDatacenterRacks().get(dc) != null)
> +{
> +racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +}
>  
>  if (racks >= replicas)
>  {
> {code}
> Let me know your comments.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-09 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14952:

Status: Review In Progress  (was: Patch Available)

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received the following NPE while bootstrapping the very first node in the new 
> datacenter with the {{allocate_tokens_for_keyspace}} yaml option:
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property with 
> {{NetworkTopologyStrategy}}, say {NetworkTopologyStrategy, 'dc1': 1, 'dc2': 1}
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878],
>  which results in an NPE.
> Fix:
>  Since this is applicable only to the very first node in a new DC, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>  final int replicas = rs.getReplicationFactor(dc);
>  
>  Topology topology = tokenMetadata.getTopology();
> -int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +int racks = 1;
> +if (topology.getDatacenterRacks().get(dc) != null)
> +{
> +racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +}
>  
>  if (racks >= replicas)
>  {
> {code}
> Let me know your comments.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-08 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899686#comment-16899686
 ] 

mck edited comment on CASSANDRA-14952 at 8/8/19 3:43 PM:
-

> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as its own unique unit (as it's the 
first in any eventual unit group), although seeds (non-autobootstrapping) 
and non-existent DC names (CASSANDRA-12681) can also prevent that from 
happening.

A slightly modified version of your fix [~chovatia.jayd...@gmail.com]
||branch||circleci||asf jenkins testall||asf jenkins dtests||
|[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]|[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/41//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/41/]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/678//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/678/]|
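
For reference, a minimal self-contained sketch of the guard being discussed (my 
illustration, not the committed patch; see the linked commit for the actual 
change), assuming 3.0's {{TokenMetadata.Topology}} API:

{code:java}
import java.net.InetAddress;
import com.google.common.collect.Multimap;
import org.apache.cassandra.locator.TokenMetadata.Topology;

final class RackCountSketch
{
    // Topology#getDatacenterRacks() has no entry for a DC until a node in it
    // has gossiped its rack, so the very first node of a new DC finds null
    // here. Fall back to rack = 1, i.e. treat that node as its own unit.
    static int racksInDatacenter(Topology topology, String dc)
    {
        Multimap<String, InetAddress> racks = topology.getDatacenterRacks().get(dc);
        return racks == null ? 1 : racks.asMap().size();
    }
}
{code}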


was (Author: michaelsembwever):
> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as its own unique unit (as it's the 
first in any eventual unit group), although seeds (non-autobootstrapping) 
and non-existent DC names (CASSANDRA-12681) can also prevent that from 
happening.


A slightly modified version of your fix [~chovatia.jayd...@gmail.com]

|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | 

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received the following NPE while bootstrapping the very first node in the new 
> datacenter with the {{allocate_tokens_for_keyspace}} yaml option:
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property with 
> {{NetworkTopologyStrategy}}, say {NetworkTopologyStrategy, 'dc1': 1, 'dc2': 1}
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> 

[jira] [Comment Edited] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-07 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902243#comment-16902243
 ] 

mck edited comment on CASSANDRA-15260 at 8/7/19 4:54 PM:
-

{quote} Meanwhile, for consistency's sake I would change the name of the option 
to match DSE's as it is doing exactly the same thing. {quote}

No objection. I will fix it. The naming is a bit clumsy either way imho, but 
nothing better comes to mind, and indeed it makes sense to re-use DSE's 
terminology for an identical feature.

 

{quote}We may be doing more damage than good over e.g. 256-vnode random choice, 
…{quote}

Makes sense and is fine by me. It helps to just have these concerns, and the 
trade-off, stated somewhere. 

 


was (Author: michaelsembwever):
{quote} Meanwhile, for consistency's sake I would change the name of the option 
to match DSE's as it is doing exactly the same thing. \{quote}

No objection. I will fix it. The naming is a bit clumsy either way imho, but 
nothing better comes to mind, and indeed it makes sense to re-use DSE's 
terminology for an identical feature.

 

{quote}We may be doing more damage than good over e.g. 256-vnode random choice, 
…\{quote}

Makes sense and is fine by me. It helps to just have these concerns, and the 
trade-off, stated somewhere. 

 

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, ie NTS. This can be done 
> with the use of an {{allocate_tokens_for_dc_rf}} option.
> It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
> becomes the default, as this is the replication factor for the vast majority 
> of datacenters in production. I suspect this would be a good improvement over 
> the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-07 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902243#comment-16902243
 ] 

mck commented on CASSANDRA-15260:
-

{quote} Meanwhile, for consistency's sake I would change the name of the option 
to match DSE's as it is doing exactly the same thing. \{quote}

No objection. I will fix it. The naming is a bit clumsy either way imho, but 
nothing better comes to mind, and indeed it makes sense to re-use DSE's 
terminology for an identical feature.

 

{quote}We may be doing more damage than good over e.g. 256-vnode random choice, 
…\{quote}

Makes sense and is fine by me. It helps to just have these concerns, and the 
trade-off, stated somewhere. 

 

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, ie NTS. This can be done 
> with the use of an {{allocate_tokens_for_dc_rf}} option.
> It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
> becomes the default, as this is the replication factor for the vast majority 
> of datacenters in production. I suspect this would be a good improvement over 
> the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as that provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Description: 
Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. The real keyspace cannot be used when 
adding new datacenters as, in practice, all its nodes need to be up and running 
before it has the capacity to replicate data into it. Adding new datacenters 
(or lift-and-shifting a cluster via datacenter migration) therefore has to be 
done using a dummy keyspace that duplicates the replication strategy+factor of 
the real keyspace. This gets even more difficult come version 4.0, as the 
replica factor cannot even be defined in new datacenters before those 
datacenters are up and running. 

These issues are removed by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS. This can be done with 
the use of an {{allocate_tokens_for_dc_rf}} option.

It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
becomes the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
[https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]

The patch does not remove the existing {{allocate_tokens_for_keyspace}} option, 
as that provides the codebase for handling different replication strategies.

 

fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
[~alexchueshev]

  was:
Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. The real keyspace cannot be used when 
adding new datacenters as, in practice, all its nodes need to be up and running 
before it has the capacity to replicate data into it. Adding new datacenters 
(or lift-and-shifting a cluster via datacenter migration) therefore has to be 
done using a dummy keyspace that duplicates the replication strategy+factor of 
the real keyspace. This gets even more difficult come version 4.0, as the 
replica factor cannot even be defined in new datacenters before those 
datacenters are up and running. 

These issues are removed by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS, with the introduction 
of an {{allocate_tokens_for_dc_rf}} option.

It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
becomes the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
[https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]

The patch does not remove the existing {{allocate_tokens_for_keyspace}} option, 
as that provides the codebase for handling different replication strategies.

 

fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
[~alexchueshev]


> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> 

[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Impacts:   (was: None)

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, ie NTS, with the 
> introduction of an {{allocate_tokens_for_dc_rf}} option.
> It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
> becomes the default, as this is the replication factor for the vast majority 
> of datacenters in production. I suspect this would be a good improvement over 
> the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as it still provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Description: 
Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. The real keyspace cannot be used when 
adding new datacenters as, in practice, all its nodes need to be up and running 
before it has the capacity to replicate data into it. Adding new datacenters 
(or lift-and-shifting a cluster via datacenter migration) therefore has to be 
done using a dummy keyspace that duplicates the replication strategy+factor of 
the real keyspace. This gets even more difficult come version 4.0, as the 
replica factor cannot even be defined in new datacenters before those 
datacenters are up and running. 

These issues are removed by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS, with the introduction 
of an {{allocate_tokens_for_dc_rf}} option.

It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
becomes the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
[https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]

The patch does not remove the existing {{allocate_tokens_for_keyspace}} option, 
as that provides the codebase for handling different replication strategies.

 

fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
[~alexchueshev]

  was:
Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. The real keyspace cannot be used when 
adding new datacenters as, in practice, all its nodes need to be up and running 
before it has the capacity to replicate data into it. Adding new datacenters 
(or lift-and-shifting a cluster via datacenter migration) therefore has to be 
done using a dummy keyspace that duplicates the replication strategy+factor of 
the real keyspace. This gets even more difficult come version 4.0, as the 
replica factor cannot even be defined in new datacenters before those 
datacenters are up and running. 

These issues are removed by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS, with the introduction 
of an {{allocate_tokens_for_dc_rf}} option.

It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
becomes the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
[https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]

The patch does not remove the existing {{allocate_tokens_for_keyspace}} option, 
as it still provides the codebase for handling different replication strategies.

 

fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
[~alexchueshev]


> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These 

[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Description: 
Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. The real keyspace cannot be used when 
adding new datacenters as, in practice, all its nodes need to be up and running 
before it has the capacity to replicate data into it. Adding new datacenters 
(or lift-and-shifting a cluster via datacenter migration) therefore has to be 
done using a dummy keyspace that duplicates the replication strategy+factor of 
the real keyspace. This gets even more difficult come version 4.0, as the 
replica factor cannot even be defined in new datacenters before those 
datacenters are up and running. 

These issues are removed by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS, with the introduction 
of an {{allocate_tokens_for_dc_rf}} option.

It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
becomes the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
[https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]

The patch does not remove the existing {{allocate_tokens_for_keyspace}} option, 
as it still provides the codebase for handling different replication strategies.

 

fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
[~alexchueshev]

  was:
Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. The real keyspace cannot be used when 
adding new datacenters as, in practice, all its nodes need to be up and running 
before it has the capacity to replicate data into it. Adding new datacenters 
(or lift-and-shifting a cluster via datacenter migration) therefore has to be 
done using a dummy keyspace that duplicates the replication strategy+factor of 
the real keyspace. This gets even more difficult come version 4.0, as the 
replica factor cannot even be defined in new datacenters before those 
datacenters are up and running. 

These issues are removed by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS, with the introduction 
of an {{allocate_tokens_for_dc_rf}} option.

It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
becomes the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
[https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]

The patch does not remove the existing {{allocate_tokens_for_keyspace}} option, 
as it still provides the codebase for handling different replication strategies.


> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, 

[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Fix Version/s: 4.x

> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
> Fix For: 4.x
>
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace cannot be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. Adding new 
> datacenters (or lift-and-shifting a cluster via datacenter migration) 
> therefore has to be done using a dummy keyspace that duplicates the 
> replication strategy+factor of the real keyspace. This gets even more 
> difficult come version 4.0, as the replica factor cannot even be defined in 
> new datacenters before those datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, ie NTS, with the 
> introduction of an {{allocate_tokens_for_dc_rf}} option.
> It may also be of value considering whether {{allocate_tokens_for_dc_rf=3}} 
> becomes the default, as this is the replication factor for the vast majority 
> of datacenters in production. I suspect this would be a good improvement over 
> the existing randomly generated tokens algorithm.
> Initial patch is available in 
> [https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]
> The patch does not remove the existing {{allocate_tokens_for_keyspace}} 
> option, as it still provides the codebase for handling different replication 
> strategies.
>  
> fyi [~blambov] [~jay.zhuang] [~chovatia.jayd...@gmail.com] [~alokamvenki] 
> [~alexchueshev]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15260:

Description: 
Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. The real keyspace cannot be used when 
adding new datacenters as, in practice, all its nodes need to be up and running 
before it has the capacity to replicate data into it. Adding new datacenters 
(or lift-and-shifting a cluster via datacenter migration) therefore has to be 
done using a dummy keyspace that duplicates the replication strategy+factor of 
the real keyspace. This gets even more difficult come version 4.0, as the 
replica factor cannot even be defined in new datacenters before those 
datacenters are up and running. 

These issues are removed by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS, with the introduction 
of an {{allocate_tokens_for_dc_rf}} option.

It may also be worth considering whether {{allocate_tokens_for_dc_rf=3}} 
becomes the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
[https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97]

The patch does not remove the existing {{allocate_tokens_for_keyspace}} option, 
as it still provides the codebase for handling different replication strategies.

  was:
Similar to the DSE option `allocate_tokens_for_local_replication_factor`

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. Come version 4.0 the replica factor 
can not be defined in new datacenters before those datacenters are up and 
running. Previously even real keyspaces could not be used as a new datacenter 
has to, in practice, have all its nodes up and running before it has the 
capacity to replicate data into it. New datacenters, or lift-and-shifting a 
cluster via datacenter migration, can be done using a dummy keyspace that 
duplicates the replication strategy and factor of the real keyspace.

These issues are reduced by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS, with the introduction 
of an `allocate_tokens_for_dc_rf` option.

It may also be worth considering whether `allocate_tokens_for_dc_rf=3` is 
the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97


> Add `allocate_tokens_for_dc_rf` yaml option for token allocation
> 
>
> Key: CASSANDRA-15260
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: mck
>Assignee: mck
>Priority: Normal
>
> Similar to DSE's option: {{allocate_tokens_for_local_replication_factor}}
> Currently the 
> [ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
>  requires a defined keyspace and a replica factor specified in the current 
> datacenter.
> This is problematic in a number of ways. The real keyspace can not be used 
> when adding new datacenters as, in practice, all its nodes need to be up and 
> running before it has the capacity to replicate data into it. New datacenters 
> (or lift-and-shifting a cluster via datacenter migration) therefore has to be 
> done using a dummy keyspace that duplicates the replication strategy+factor 
> of the real keyspace. This gets even more difficult come version 4.0, as the 
> replica factor can not even be defined in new datacenters before those 
> datacenters are up and running. 
> These issues are removed by avoiding the keyspace definition and lookup, and 
> presuming the replica strategy is by datacenter, ie NTS, with the 
> introduction of an {{allocate_tokens_for_dc_rf}} option.
> It may also be worth considering whether {{allocate_tokens_for_dc_rf=3}} 
> becomes the default, as this is the replication factor for the vast majority 
> of datacenters in production. I suspect this would be a good improvement over 

[jira] [Created] (CASSANDRA-15260) Add `allocate_tokens_for_dc_rf` yaml option for token allocation

2019-08-05 Thread mck (JIRA)
mck created CASSANDRA-15260:
---

 Summary: Add `allocate_tokens_for_dc_rf` yaml option for token 
allocation
 Key: CASSANDRA-15260
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15260
 Project: Cassandra
  Issue Type: Improvement
  Components: Local/Config
Reporter: mck
Assignee: mck


Similar to the DSE option `allocate_tokens_for_local_replication_factor`

Currently the 
[ReplicationAwareTokenAllocator|https://www.datastax.com/dev/blog/token-allocation-algorithm]
 requires a defined keyspace and a replica factor specified in the current 
datacenter.

This is problematic in a number of ways. Come version 4.0 the replica factor 
can not be defined in new datacenters before those datacenters are up and 
running. Previously even real keyspaces could not be used as a new datacenter 
has to, in practice, have all its nodes up and running before it has the 
capacity to replicate data into it. New datacenters, or lift-and-shifting a 
cluster via datacenter migration, can be done using a dummy keyspace that 
duplicates the replication strategy and factor of the real keyspace.

These issues are reduced by avoiding the keyspace definition and lookup, and 
presuming the replica strategy is by datacenter, ie NTS, with the introduction 
of an `allocate_tokens_for_dc_rf` option.

It may also be worth considering whether `allocate_tokens_for_dc_rf=3` is 
the default, as this is the replication factor for the vast majority of 
datacenters in production. I suspect this would be a good improvement over the 
existing randomly generated tokens algorithm.

Initial patch is available in 
https://github.com/thelastpickle/cassandra/commit/fc4865b0399570e58f11215565ba17dc4a53da97
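
As a rough illustration of the proposal (not part of the patch above; the option 
name is the proposal itself, and all values here are hypothetical), a node joining 
a new datacenter would need nothing more than:

{code}
# cassandra.yaml on a node bootstrapping into the new datacenter.
# No keyspace needs to exist yet: tokens are allocated presuming the
# datacenter will be replicated with NetworkTopologyStrategy at RF 3.
num_tokens: 16
allocate_tokens_for_dc_rf: 3
{code}

whereas `allocate_tokens_for_keyspace` first requires a (dummy or real) keyspace 
that already defines the new datacenter in its replication settings.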



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-05 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899686#comment-16899686
 ] 

mck edited comment on CASSANDRA-14952 at 8/5/19 2:34 PM:
-

> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as its own unique unit (as it's the 
first in any eventuating unit group). Although seeds (non-autobootstrapping) 
and non-existent dc names (CASSANDRA-12681) can also prevent that from 
happening.


A slightly modified version of your fix [~chovatia.jayd...@gmail.com]

|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | 


was (Author: michaelsembwever):
> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as its own unique unit (as it's the 
first in any eventuating unit group). Although seeds (non-autobootstrapping) 
and non-existent dc names (CASSANDRA-12681) can also prevent that from 
happening.


A slightly modified version of your fix [~chovatia.jayd...@gmail.com]

|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | |

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this 

[jira] [Comment Edited] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-04 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899686#comment-16899686
 ] 

mck edited comment on CASSANDRA-14952 at 8/4/19 9:51 PM:
-

> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as its own unique unit (as it's the 
first in any eventuating unit group). Although seeds (non-autobootstrapping) 
and non-existent dc names (CASSANDRA-12681) can also prevent that from 
happening.


A slightly modified version of your fix [~chovatia.jayd...@gmail.com]

|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | |


was (Author: michaelsembwever):
> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as a unit. Although seeds 
(non-autobootstrapping) and non-existent dc names (CASSANDRA-12681) can also 
prevent that from happening.


A slightly modified version of your fix [~chovatia.jayd...@gmail.com]

|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | |

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> 

[jira] [Comment Edited] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-04 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899686#comment-16899686
 ] 

mck edited comment on CASSANDRA-14952 at 8/4/19 9:50 PM:
-

> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as a unit. Although seeds 
(non-autobootstrapping) and non-existent dc names (CASSANDRA-12681) can also 
prevent that from happening.


A slightly modified version of your fix [~chovatia.jayd...@gmail.com]

|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | |


was (Author: michaelsembwever):
> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as a unit. Although seeds 
(non-bootstrapping) and non-existent dc names (CASSANDRA-12681) can also 
prevent that from happening.


A slightly modified version of your fix [~chovatia.jayd...@gmail.com]

|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | |

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> 

[jira] [Updated] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-04 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14952:

Test and Documentation Plan: .
 Status: Patch Available  (was: Open)

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878]
>  which results in NPE
> Fix:
>  Since this is applicable only to the very first node of a new dc, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>          final int replicas = rs.getReplicationFactor(dc);
> 
>          Topology topology = tokenMetadata.getTopology();
> -        int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        int racks = 1;
> +        if (topology.getDatacenterRacks().get(dc) != null)
> +        {
> +            racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        }
> 
>          if (racks >= replicas)
>          {
> {code}
> Let me know your comments.
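
For context, a minimal sketch of the node configuration that hits the NPE above 
(the keyspace name and token count are hypothetical):

{code}
# cassandra.yaml on the first node bootstrapping into dc2. The keyspace was
# created while only dc1 existed, with NetworkTopologyStrategy and
# 'dc1' : 1, 'dc2' : 1.
auto_bootstrap: true
num_tokens: 256
allocate_tokens_for_keyspace: my_keyspace
{code}

Because no endpoint in dc2 is in the token metadata yet, 
{{topology.getDatacenterRacks().get(dc)}} returns {{null}} for the new dc, hence 
the {{null}} check in the fix.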



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-04 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899686#comment-16899686
 ] 

mck edited comment on CASSANDRA-14952 at 8/4/19 9:47 PM:
-

> Do we want to treat the first node added in a new datacenter as a unique 
> unit, which is what we get with rack = 1?

It seems to make sense to treat such a node as a unit. Although seeds 
(non-bootstrapping) and non-existent dc names (CASSANDRA-12681) can also 
prevent that from happening.


A slightly modified version of your fix [~chovatia.jayd...@gmail.com]

|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | |


was (Author: michaelsembwever):
A slightly modified version of your fix [~chovatia.jayd...@gmail.com]


|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | |

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  

[jira] [Commented] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-04 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899686#comment-16899686
 ] 

mck commented on CASSANDRA-14952:
-

A slightly modified version of your fix [~chovatia.jayd...@gmail.com]


|| branch || circleci || asf jenkins testall || asf jenkins dtests ||
| 
[CASSANDRA-14952|https://github.com/thelastpickle/cassandra/commit/3a72a51f9cb06ac85a4c78f3719a598a3a754909]
  | 
[circleci|https://circleci.com/workflow-run/b1f8b919-f889-47c5-9019-22a3468a428d]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/40/]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675//badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/675/]
 | |

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878]
>  which results in NPE
> Fix:
>  Since this is applicable only to the very first node of a new dc, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>          final int replicas = rs.getReplicationFactor(dc);
> 
>          Topology topology = tokenMetadata.getTopology();
> -        int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        int racks = 1;
> +        if (topology.getDatacenterRacks().get(dc) != null)
> +        {
> +            racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        }
> 
>          if (racks >= replicas)
>          {
> {code}

[jira] [Assigned] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-04 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck reassigned CASSANDRA-14952:
---

Assignee: Jaydeepkumar Chovatia

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878]
>  which results in NPE
> Fix:
>  Since this is applicable only to the very first node of a new dc, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>          final int replicas = rs.getReplicationFactor(dc);
> 
>          Topology topology = tokenMetadata.getTopology();
> -        int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        int racks = 1;
> +        if (topology.getDatacenterRacks().get(dc) != null)
> +        {
> +            racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        }
> 
>          if (racks >= replicas)
>          {
> {code}
> Let me know your comments.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-02 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898359#comment-16898359
 ] 

mck edited comment on CASSANDRA-14952 at 8/2/19 4:32 PM:
-

There are a few peculiarities in how {{allocate_tokens_for_keyspace}} bootstraps 
in new datacenters.

For example, subsequent nodes in a new datacenter will also fail, unless RF=2, 
until there is at least one node in each rack, up to RF number of racks. That 
failure is a {{ConfigurationException}} with the message {code}"Token 
allocation failed: the number of racks %d in datacenter %s is lower than its 
replication factor %d."{code}

It is an undocumented requirement that one node in each rack, up to RF 
number of racks, is bootstrapped with manually calculated tokens when adding 
a new datacenter and using {{allocate_tokens_for_keyspace}}.
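
(To sketch that workaround concretely, with purely illustrative token values: the 
first node in each of the first RF racks would be started with something like the 
below, and only later nodes would set {{allocate_tokens_for_keyspace}}.)

{code}
# cassandra.yaml for the very first node in the new datacenter (rack1).
# initial_token values are calculated by hand, e.g. evenly spaced across
# the Murmur3 range; these numbers are examples only.
num_tokens: 4
initial_token: -9223372036854775808,-4611686018427387904,0,4611686018427387904
{code}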

Do we want to treat the first node added in a new datacenter as a unique unit, 
which is what we get with {{rack = 1}}?
[~chovatia.jayd...@gmail.com], unless anyone speaks up, let me do some testing 
on it and get back to you…


was (Author: michaelsembwever):
There are a few peculiarities in how {{allocate_tokens_for_keyspace}} bootstraps 
in new datacenters.

For example, subsequent nodes in a new datacenter will also fail, unless RF=2, 
until there is at least one node in each rack, up to RF number of racks. That 
failure is a {{ConfigurationException}} with the message {code}"Token 
allocation failed: the number of racks %d in datacenter %s is lower than its 
replication factor %d."{code}

It is an undocumented requirement that one node in each rack, up to RF 
number of racks, is bootstrapped with manually calculated tokens when adding 
a new datacenter and using {{allocate_tokens_for_keyspace}}.

Do we want to treat the first node added in a new datacenter as a unique unit, 
which is what we get with {{rack = 1}}?
[~chovatia.jayd...@gmail.com], unless anyone speaks up, let me do some testing on 
it and get back to you…

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> 

[jira] [Updated] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-01 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14952:

Reviewers: mck

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878]
>  which results in NPE
> Fix:
>  Since this is applicable only to the very first node of a new dc, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>          final int replicas = rs.getReplicationFactor(dc);
> 
>          Topology topology = tokenMetadata.getTopology();
> -        int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        int racks = 1;
> +        if (topology.getDatacenterRacks().get(dc) != null)
> +        {
> +            racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        }
> 
>          if (racks >= replicas)
>          {
> {code}
> Let me know your comments.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14952) NPE when using allocate_tokens_for_keyspace and add new DC

2019-08-01 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898359#comment-16898359
 ] 

mck commented on CASSANDRA-14952:
-

There are a few peculiarities in how {{allocate_tokens_for_keyspace}} bootstraps 
in new datacenters.

For example, subsequent nodes in a new datacenter will also fail, unless RF=2, 
until there is at least one node in each rack, up to RF number of racks. That 
failure is a {{ConfigurationException}} with the message {code}"Token 
allocation failed: the number of racks %d in datacenter %s is lower than its 
replication factor %d."{code}

It is an undocumented requirement that one node in each rack, up to RF 
number of racks, is bootstrapped with manually calculated tokens when adding 
a new datacenter and using {{allocate_tokens_for_keyspace}}.

Do we want to treat the first node added in a new datacenter as a unique unit, 
which is what we get with {{rack = 1}}?
[~chovatia.jayd...@gmail.com], unless anyone speaks up, let me do some testing on 
it and get back to you…

> NPE when using allocate_tokens_for_keyspace and add new DC
> --
>
> Key: CASSANDRA-14952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 3.0.x
>
>
> Received following NPE while bootstrapping very first node in the new 
> datacenter with {{allocate_tokens_for_keyspace}} yaml option
> {code:java}
> INFO  21:44:13 JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:208)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:170)
>   at 
> org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:55)
>   at 
> org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:206)
>   at 
> org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:854)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:666)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:579)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:351)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:586)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
> {code}
> Please find reproducible steps here:
>  1. Set the {{allocate_tokens_for_keyspace}} property, with the keyspace using 
> {{NetworkTopologyStrategy}}, say 'dc1' : 1, 'dc2' : 1
>  2. Start the first node in {{dc1}}
>  3. Now bootstrap the second node in {{dc2}}; it will throw the above exception.
> RCA:
>  
> [doAddEndpoint|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1325]
>  is invoked from the 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1254]
>  and at this time [local node's rack 
> information|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/locator/TokenMetadata.java#L1276]
>  is available
> However, with the {{allocate_tokens_for_keyspace}} option, the daemon tries to 
> access rack information even before calling 
> [bootstrap|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L1241]
>  function, at [this 
> place|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L878]
>  which results in NPE
> Fix:
>  Since this is applicable only to the very first node of a new dc, we can check 
> for {{null}} as:
> {code:java}
> diff --git 
> a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
> b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> index 8d8a6ffeca..e162757d95 100644
> --- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> +++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
> @@ -205,7 +205,11 @@ public class TokenAllocation
>          final int replicas = rs.getReplicationFactor(dc);
> 
>          Topology topology = tokenMetadata.getTopology();
> -        int racks = topology.getDatacenterRacks().get(dc).asMap().size();
> +        int racks = 1;
> +        if (topology.getDatacenterRacks().get(dc) != null)
> +

[jira] [Commented] (CASSANDRA-14954) Website documentation for stable and latest, with stable as default linked

2019-07-31 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896863#comment-16896863
 ] 

mck commented on CASSANDRA-14954:
-

{quote}  Closest thing I can find is the SVN repo 
https://svn.apache.org/repos/asf/cassandra/site/ {quote}

That's correct. Are you reading the instructions in 
https://svn.apache.org/repos/asf/cassandra/site/README ?

This patch is intended only to be applied to SVN, and to one branch. On the 
website it is still relevant to the code docs for 3.11 and 4.0, hence those fix 
versions. Sorry about the confusion that may have created.



> Website documentation for stable and latest, with stable as default linked
> --
>
> Key: CASSANDRA-14954
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14954
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Website
>Reporter: mck
>Assignee: Rahul Singh
>Priority: Low
> Fix For: 3.11.x, 4.x
>
> Attachments: make-add-stable-doc.patch
>
>
> The website should link Documentation to the docs generated for our most 
> recent stable release.
> By providing directory listing (using {{`htaccess Indexes`}}) under /doc/, 
> and having two symlinks {{latest}} and {{stable}}, we can by default link to 
> {{stable}}.
> The following patch
>  - adds to the website Makefile the task {{add-stable-doc}}
>  - changes the default documentation link to {{/doc/stable/}}
>  - removes the html redirecting from {{doc/ --> doc/latest/}}
>  - adds directory listing to {{/doc/}} for a simple view of versioned docs 
> available



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14954) Website documentation for stable and latest, with stable as default linked

2019-07-30 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14954:

Authors: mck, Rahul Singh  (was: Rahul Singh)

> Website documentation for stable and latest, with stable as default linked
> --
>
> Key: CASSANDRA-14954
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14954
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Website
>Reporter: mck
>Assignee: Rahul Singh
>Priority: Low
> Fix For: 3.11.x, 4.x
>
> Attachments: make-add-stable-doc.patch
>
>
> The website should link Documentation to the docs generated for our most 
> recent stable release.
> By providing directory listing (using {{`htaccess Indexes`}}) under /doc/, 
> and having two symlinks {{latest}} and {{stable}}, we can by default link to 
> {{stable}}.
> The following patch
>  - adds to the website Makefile the task {{add-stable-doc}}
>  - changes the default documentation link to {{/doc/stable/}}
>  - removes the html redirecting from {{doc/ --> doc/latest/}}
>  - adds directory listing to {{/doc/}} for a simple view of versioned docs 
> available



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15249) Add documentation on release lifecycle

2019-07-28 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894736#comment-16894736
 ] 

mck commented on CASSANDRA-15249:
-

[~sumanth.pasupuleti], would this contribution be better off as a separate page?

The existing `release_process.html` page is intended as a runbook, ie the "how" 
to the process of cutting a release. 

The contribution is broader in both context and aim, eg it discusses 
quality assurance and testing that leads into releases, and snapshots, which are 
not releases and have nothing to do with the process of cutting releases. I fear 
that its value will be lost by putting it into the existing page, as well as 
muddling the purpose of that existing page.

> Add documentation on release lifecycle
> --
>
> Key: CASSANDRA-15249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15249
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation/Website
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
> Fix For: 4.0
>
> Attachments: release_lifecycle.patch
>
>
> Relevant dev list mail thread: 
> https://lists.apache.org/thread.html/1a768d057d1af5a0f373c4c399a23e65cb04c61bbfff612634b9437c@%3Cdev.cassandra.apache.org%3E
> Google doc with community collaboration on documenting release lifecycle 
> https://docs.google.com/document/d/1bS6sr-HSrHFjZb0welife6Qx7u3ZDgRiAoENMLYlfz8/edit



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-07-02 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876844#comment-16876844
 ] 

mck commented on CASSANDRA-14812:
-

Committed as 97eae441dab742f0eaffcedc360991350232cfd6

> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, 
> small_repro_script.py, small_repro_script_cql.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 
> [refactoring of iterator transformation 
> hierarchy|https://github.com/apache/cassandra/commit/609497471441273367013c09a1e0e1c990726ec7]
>  related to CASSANDRA-9975 triggers incorrect behaviour.
> When the concatenated iterator is returned from 
> [StorageProxy.fetchRows(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/service/StorageProxy.java#L1770],
>  Cassandra starts to consume this combined iterator. Because of the 
> {{DigestMismatch}} exception, some elements of this combined iterator contain 
> an additional {{ThriftCounter}} that was added during 
> [DataResolver.resolve(...)|https://github.com/apache/cassandra/blob/ee9e06b5a75c0be954694b191ea4170456015b98/src/java/org/apache/cassandra/service/reads/DataResolver.java#L120]
>  execution. While consuming the iterator for many partitions, Cassandra calls the 
> [BaseIterator.tryGetMoreContents(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/db/transform/BaseIterator.java#L115]
>  method, which must switch from one partition iterator to another when the 
> former is exhausted. In this case all Transformations contained in the 
> next iterator are applied to the combined BaseIterator that enumerates the 
> partition sequence, which is wrong. This behaviour causes BaseIterator to 
> stop enumerating after it fully consumes the partition with the 
> {{DigestMismatch}} error, because that partition's iterator has the 
> additional {{ThriftCounter}} data limit.
> The attachment contains the python2 script [^small_repro_script.py] that 
> reproduces this bug within a 3-node ccmlib-controlled cluster. Also, there is 
> an extended version of this script - [^repro_script.py] - that contains more 
> logging information and provides the ability to test behavior for many 
> Cassandra versions (to run all test cases from repro_script.py you can call 
> {{python -m unittest2 -v repro_script.ThriftMultigetTestCase}}). All the 
> necessary dependencies are contained in [^requirements.txt].
>  
> This bug is critical in our production environment because we can't permit 
> any data to be skipped.
> Any ideas about a patch for this issue?
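
For readers following the mechanics above, a deliberately simplified, 
self-contained java sketch of the described mix-up: a row limit that belongs 
to one partition gets applied to the concatenated stream, so the remaining 
partitions are silently dropped. The names are hypothetical stand-ins, not 
Cassandra's actual {{ThriftCounter}}/{{BaseIterator}} code.

{noformat}
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class LimitLeakDemo
{
    // Hypothetical stand-in for a per-partition row limit (the role
    // ThriftCounter plays in the description above): it is only meant
    // to bound rows within a single partition.
    static class RowLimit
    {
        private int remaining;
        RowLimit(int limit) { this.remaining = limit; }
        boolean admit() { return remaining-- > 0; }
    }

    // Concatenate two iterators, like the combined iterator returned
    // over several partitions.
    static <T> Iterator<T> concat(Iterator<T> a, Iterator<T> b)
    {
        return new Iterator<T>()
        {
            public boolean hasNext() { return a.hasNext() || b.hasNext(); }
            public T next() { return a.hasNext() ? a.next() : b.next(); }
        };
    }

    public static void main(String[] args)
    {
        List<String> p1 = Arrays.asList("p1/row1", "p1/row2");
        List<String> p2 = Arrays.asList("p2/row1", "p2/row2");

        // The bug in miniature: the limit that belongs to p1 alone is
        // applied to the whole concatenated stream, so iteration stops
        // once p1 is fully consumed and p2's rows are silently dropped.
        RowLimit leakedLimit = new RowLimit(2);
        Iterator<String> combined = concat(p1.iterator(), p2.iterator());
        while (combined.hasNext())
        {
            String row = combined.next();
            if (!leakedLimit.admit())
                break;                  // premature termination
            System.out.println(row);    // prints only p1's rows
        }
    }
}
{noformat}

Run as-is it prints only p1's rows; p2 is lost, mirroring the premature 
termination of the response stream described above.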



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-07-02 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14812:

  Fix Version/s: (was: 3.11.x)
 (was: 3.0.x)
 4.0
 3.11.5
 3.0.19
Source Control Link: 
https://github.com/apache/cassandra/commit/97eae441dab742f0eaffcedc360991350232cfd6
 Status: Resolved  (was: Ready to Commit)
 Resolution: Fixed

> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.19, 3.11.5, 4.0
>
> Attachments: repro_script.py, requirements.txt, 
> small_repro_script.py, small_repro_script_cql.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 
> [refactoring of iterator transformation 
> hierarchy|https://github.com/apache/cassandra/commit/609497471441273367013c09a1e0e1c990726ec7]
>  related to CASSANDRA-9975 triggers incorrect behaviour.
> When the concatenated iterator is returned from 
> [StorageProxy.fetchRows(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/service/StorageProxy.java#L1770],
>  Cassandra starts to consume this combined iterator. Because of the 
> {{DigestMismatch}} exception, some elements of this combined iterator contain 
> an additional {{ThriftCounter}} that was added during 
> [DataResolver.resolve(...)|https://github.com/apache/cassandra/blob/ee9e06b5a75c0be954694b191ea4170456015b98/src/java/org/apache/cassandra/service/reads/DataResolver.java#L120]
>  execution. While consuming the iterator for many partitions, Cassandra calls the 
> [BaseIterator.tryGetMoreContents(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/db/transform/BaseIterator.java#L115]
>  method, which must switch from one partition iterator to another when the 
> former is exhausted. In this case all Transformations contained in the 
> next iterator are applied to the combined BaseIterator that enumerates the 
> partition sequence, which is wrong. This behaviour causes BaseIterator to 
> stop enumerating after it fully consumes the partition with the 
> {{DigestMismatch}} error, because that partition's iterator has the 
> additional {{ThriftCounter}} data limit.
> The attachment contains the python2 script [^small_repro_script.py] that 
> reproduces this bug within a 3-node ccmlib-controlled cluster. Also, there is 
> an extended version of this script - [^repro_script.py] - that contains more 
> logging information and provides the ability to test behavior for many 
> Cassandra versions (to run all test cases from repro_script.py you can call 
> {{python -m unittest2 -v repro_script.ThriftMultigetTestCase}}). All the 
> necessary dependencies are contained in [^requirements.txt].
>  
> This bug is critical in our production environment because we can't permit 
> any data to be skipped.
> Any ideas about a patch for this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-07-02 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14812:

Status: Ready to Commit  (was: Review In Progress)

> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, 
> small_repro_script.py, small_repro_script_cql.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 
> [refactoring of iterator transformation 
> hierarchy|https://github.com/apache/cassandra/commit/609497471441273367013c09a1e0e1c990726ec7]
>  related to CASSANDRA-9975 triggers incorrect behaviour.
> When the concatenated iterator is returned from 
> [StorageProxy.fetchRows(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/service/StorageProxy.java#L1770],
>  Cassandra starts to consume this combined iterator. Because of the 
> {{DigestMismatch}} exception, some elements of this combined iterator contain 
> an additional {{ThriftCounter}} that was added during 
> [DataResolver.resolve(...)|https://github.com/apache/cassandra/blob/ee9e06b5a75c0be954694b191ea4170456015b98/src/java/org/apache/cassandra/service/reads/DataResolver.java#L120]
>  execution. While consuming the iterator for many partitions, Cassandra calls the 
> [BaseIterator.tryGetMoreContents(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/db/transform/BaseIterator.java#L115]
>  method, which must switch from one partition iterator to another when the 
> former is exhausted. In this case all Transformations contained in the 
> next iterator are applied to the combined BaseIterator that enumerates the 
> partition sequence, which is wrong. This behaviour causes BaseIterator to 
> stop enumerating after it fully consumes the partition with the 
> {{DigestMismatch}} error, because that partition's iterator has the 
> additional {{ThriftCounter}} data limit.
> The attachment contains the python2 script [^small_repro_script.py] that 
> reproduces this bug within a 3-node ccmlib-controlled cluster. Also, there is 
> an extended version of this script - [^repro_script.py] - that contains more 
> logging information and provides the ability to test behavior for many 
> Cassandra versions (to run all test cases from repro_script.py you can call 
> {{python -m unittest2 -v repro_script.ThriftMultigetTestCase}}). All the 
> necessary dependencies are contained in [^requirements.txt].
>  
> This bug is critical in our production environment because we can't permit 
> any data to be skipped.
> Any ideas about a patch for this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-07-02 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14812:

Reviewers: mck
   Status: Review In Progress  (was: Patch Available)

> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, 
> small_repro_script.py, small_repro_script_cql.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 
> [refactoring of iterator transformation 
> hierarchy|https://github.com/apache/cassandra/commit/609497471441273367013c09a1e0e1c990726ec7]
>  related to CASSANDRA-9975 triggers incorrect behaviour.
> When the concatenated iterator is returned from 
> [StorageProxy.fetchRows(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/service/StorageProxy.java#L1770],
>  Cassandra starts to consume this combined iterator. Because of the 
> {{DigestMismatch}} exception, some elements of this combined iterator contain 
> an additional {{ThriftCounter}} that was added during 
> [DataResolver.resolve(...)|https://github.com/apache/cassandra/blob/ee9e06b5a75c0be954694b191ea4170456015b98/src/java/org/apache/cassandra/service/reads/DataResolver.java#L120]
>  execution. While consuming the iterator for many partitions, Cassandra calls the 
> [BaseIterator.tryGetMoreContents(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/db/transform/BaseIterator.java#L115]
>  method, which must switch from one partition iterator to another when the 
> former is exhausted. In this case all Transformations contained in the 
> next iterator are applied to the combined BaseIterator that enumerates the 
> partition sequence, which is wrong. This behaviour causes BaseIterator to 
> stop enumerating after it fully consumes the partition with the 
> {{DigestMismatch}} error, because that partition's iterator has the 
> additional {{ThriftCounter}} data limit.
> The attachment contains the python2 script [^small_repro_script.py] that 
> reproduces this bug within a 3-node ccmlib-controlled cluster. Also, there is 
> an extended version of this script - [^repro_script.py] - that contains more 
> logging information and provides the ability to test behavior for many 
> Cassandra versions (to run all test cases from repro_script.py you can call 
> {{python -m unittest2 -v repro_script.ThriftMultigetTestCase}}). All the 
> necessary dependencies are contained in [^requirements.txt].
>  
> This bug is critical in our production environment because we can't permit 
> any data to be skipped.
> Any ideas about a patch for this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-07-02 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876826#comment-16876826
 ] 

mck commented on CASSANDRA-14812:
-

Have tested and finished review (see 
https://github.com/thelastpickle/cassandra/commit/1d3fa25fefa96580fee3dd469f2c9cef860e6ea3#r34153353).
 
LGTM.

> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, 
> small_repro_script.py, small_repro_script_cql.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 
> [refactoring of iterator transformation 
> hierarchy|https://github.com/apache/cassandra/commit/609497471441273367013c09a1e0e1c990726ec7]
>  related to CASSANDRA-9975 triggers incorrect behaviour.
> When the concatenated iterator is returned from 
> [StorageProxy.fetchRows(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/service/StorageProxy.java#L1770],
>  Cassandra starts to consume this combined iterator. Because of the 
> {{DigestMismatch}} exception, some elements of this combined iterator contain 
> an additional {{ThriftCounter}} that was added during 
> [DataResolver.resolve(...)|https://github.com/apache/cassandra/blob/ee9e06b5a75c0be954694b191ea4170456015b98/src/java/org/apache/cassandra/service/reads/DataResolver.java#L120]
>  execution. While consuming the iterator for many partitions, Cassandra calls the 
> [BaseIterator.tryGetMoreContents(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/db/transform/BaseIterator.java#L115]
>  method, which must switch from one partition iterator to another when the 
> former is exhausted. In this case all Transformations contained in the 
> next iterator are applied to the combined BaseIterator that enumerates the 
> partition sequence, which is wrong. This behaviour causes BaseIterator to 
> stop enumerating after it fully consumes the partition with the 
> {{DigestMismatch}} error, because that partition's iterator has the 
> additional {{ThriftCounter}} data limit.
> The attachment contains the python2 script [^small_repro_script.py] that 
> reproduces this bug within a 3-node ccmlib-controlled cluster. Also, there is 
> an extended version of this script - [^repro_script.py] - that contains more 
> logging information and provides the ability to test behavior for many 
> Cassandra versions (to run all test cases from repro_script.py you can call 
> {{python -m unittest2 -v repro_script.ThriftMultigetTestCase}}). All the 
> necessary dependencies are contained in [^requirements.txt].
>  
> This bug is critical in our production environment because we can't permit 
> any data to be skipped.
> Any ideas about a patch for this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14757) GCInspector "Error accessing field of java.nio.Bits" under java11

2019-06-27 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14757:

Source Control Link: 
https://github.com/apache/cassandra/commit/2ed2b87b634c1b9d9ec9b3ba3f580f1be753972a
  Since Version: 4.0
 Status: Resolved  (was: Ready to Commit)
 Resolution: Fixed

Committed with 2ed2b87b634c1b9d9ec9b3ba3f580f1be753972a

> GCInspector "Error accessing field of java.nio.Bits" under java11
> -
>
> Key: CASSANDRA-14757
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14757
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Jason Brown
>Assignee: Robert Stupp
>Priority: Low
>  Labels: Java11, pull-request-available
> Fix For: 4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Running under java11, {{GCInspector}} throws the following exception:
> {noformat}
> DEBUG [main] 2018-09-18 05:18:25,905 GCInspector.java:78 - Error accessing 
> field of java.nio.Bits
> java.lang.NoSuchFieldException: totalCapacity
> at java.base/java.lang.Class.getDeclaredField(Class.java:2412)
> at 
> org.apache.cassandra.service.GCInspector.<init>(GCInspector.java:72)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:308)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:590)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
> {noformat}
> This is because {{GCInspector}} uses reflection to read the {{totalCapacity}} 
> from {{java.nio.Bits}}. This field was renamed to {{TOTAL_CAPACITY}} 
> somewhere between java8 and java11.
> Note: this is a rather harmless error, as we only look at 
> {{Bits.totalCapacity}} for metrics collection on how much direct memory is 
> being used by {{ByteBuffer}}s. If we fail to read the field, we simply return 
> -1 for the metric value.
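
As an illustration of the lookup and fallback described above, a minimal 
standalone java sketch (not the committed GCInspector code) that tries both 
field names and returns -1 when the field cannot be read:

{noformat}
import java.lang.reflect.Field;

public class DirectMemoryProbe
{
    // Returns java.nio.Bits' direct-memory total capacity, or -1 when the
    // field cannot be read. The field is "totalCapacity" (a long) on java8
    // and "TOTAL_CAPACITY" (an AtomicLong) on java11, so both names are
    // tried; AtomicLong and Long are both Numbers, so one cast covers both.
    static long totalDirectCapacity()
    {
        for (String name : new String[]{ "totalCapacity", "TOTAL_CAPACITY" })
        {
            try
            {
                Field f = Class.forName("java.nio.Bits").getDeclaredField(name);
                f.setAccessible(true); // on java11 this may additionally need
                                       // --add-opens java.base/java.nio=ALL-UNNAMED
                return ((Number) f.get(null)).longValue();
            }
            catch (ReflectiveOperationException | RuntimeException e)
            {
                // fall through and try the next candidate name
            }
        }
        return -1; // the harmless fallback described above
    }

    public static void main(String[] args)
    {
        System.out.println("direct memory total capacity: " + totalDirectCapacity());
    }
}
{noformat}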



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14757) GCInspector "Error accessing field of java.nio.Bits" under java11

2019-06-19 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14757:

  Authors: mck  (was: Robert Stupp)
Reviewers: Robert Stupp

> GCInspector "Error accessing field of java.nio.Bits" under java11
> -
>
> Key: CASSANDRA-14757
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14757
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Jason Brown
>Assignee: Robert Stupp
>Priority: Low
>  Labels: Java11, pull-request-available
> Fix For: 4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Running under java11, {{GCInspector}} throws the following exception:
> {noformat}
> DEBUG [main] 2018-09-18 05:18:25,905 GCInspector.java:78 - Error accessing 
> field of java.nio.Bits
> java.lang.NoSuchFieldException: totalCapacity
> at java.base/java.lang.Class.getDeclaredField(Class.java:2412)
> at 
> org.apache.cassandra.service.GCInspector.<init>(GCInspector.java:72)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:308)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:590)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
> {noformat}
> This is because {{GCInspector}} uses reflection to read the {{totalCapacity}} 
> from {{java.nio.Bits}}. This field was renamed to {{TOTAL_CAPACITY}} 
> somewhere between java8 and java11.
> Note: this is a rather harmless error, as we only look at 
> {{Bits.totalCapacity}} for metrics collection on how much direct memory is 
> being used by {{ByteBuffer}}s. If we fail to read the field, we simply return 
> -1 for the metric value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14757) GCInspector "Error accessing field of java.nio.Bits" under java11

2019-06-19 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868210#comment-16868210
 ] 

mck commented on CASSANDRA-14757:
-

done. am i free to merge [~snazy]?

> GCInspector "Error accessing field of java.nio.Bits" under java11
> -
>
> Key: CASSANDRA-14757
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14757
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Jason Brown
>Assignee: Robert Stupp
>Priority: Low
>  Labels: Java11, pull-request-available
> Fix For: 4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Running under java11, {{GCInspector}} throws the following exception:
> {noformat}
> DEBUG [main] 2018-09-18 05:18:25,905 GCInspector.java:78 - Error accessing 
> field of java.nio.Bits
> java.lang.NoSuchFieldException: totalCapacity
> at java.base/java.lang.Class.getDeclaredField(Class.java:2412)
> at 
> org.apache.cassandra.service.GCInspector.<init>(GCInspector.java:72)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:308)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:590)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
> {noformat}
> This is because {{GCInspector}} uses reflection to read the {{totalCapacity}} 
> from {{java.nio.Bits}}. This field was renamed to {{TOTAL_CAPACITY}} 
> somewhere between java8 and java11.
> Note: this is a rather harmless error, as we only look at 
> {{Bits.totalCapacity}} for metrics collection on how much direct memory is 
> being used by {{ByteBuffer}}s. If we fail to read the field, we simply return 
> -1 for the metric value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-06-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14305:

Status: Ready to Commit  (was: Review In Progress)

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
>  Labels: pull-request-available
> Fix For: 3.11.5, 4.0
>
> Attachments: conf_cassandra-env.sh.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.
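
For illustration, the intended precedence sketched in java (the actual change 
is a substitution inside the shell script; {{ConfPathDemo}} and its method 
names are hypothetical):

{noformat}
public class ConfPathDemo
{
    // Hypothetical illustration of the precedence cassandra-env.sh should
    // apply: prefer $CASSANDRA_CONF, fall back to $CASSANDRA_HOME/conf.
    static String confDir()
    {
        String conf = System.getenv("CASSANDRA_CONF");
        if (conf != null && !conf.isEmpty())
            return conf;
        String home = System.getenv("CASSANDRA_HOME");
        return (home != null ? home : ".") + "/conf";
    }

    public static void main(String[] args)
    {
        // e.g. the jaas config path that previously hardcoded $CASSANDRA_HOME/conf
        System.out.println(confDir() + "/cassandra-jaas.config");
    }
}
{noformat}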



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-06-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14305:

Status: Review In Progress  (was: Patch Available)

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
>  Labels: pull-request-available
> Fix For: 3.11.5, 4.0
>
> Attachments: conf_cassandra-env.sh.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-06-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14305:

Status: Resolved  (was: Ready to Commit)

{quote}
Fix is in https://github.com/apache/cassandra-dtest/pull/51
dtest run at 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/673/
{quote}
Committed with 
https://github.com/apache/cassandra-dtest/commit/a81e9a754ac7b56c5c1669970463578304b21105

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
>  Labels: pull-request-available
> Fix For: 3.11.5, 4.0
>
> Attachments: conf_cassandra-env.sh.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-06-05 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14305:

Reviewers: mck, Sam Tunnicliffe  (was: mck)

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
>  Labels: pull-request-available
> Fix For: 3.11.5, 4.0
>
> Attachments: conf_cassandra-env.sh.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-06-04 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14305:

Test and Documentation Plan: 
manual testing
dtest: mx_auth_test.TestJMXAuth.test_basic_auth

  was:manual testing

 Status: Patch Available  (was: Open)

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
>  Labels: pull-request-available
> Fix For: 3.11.5, 4.0
>
> Attachments: conf_cassandra-env.sh.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-06-04 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855405#comment-16855405
 ] 

mck commented on CASSANDRA-14305:
-

Fix is in https://github.com/apache/cassandra-dtest/pull/51
 dtest run at 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/673/

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
>  Labels: pull-request-available
> Fix For: 3.11.5, 4.0
>
> Attachments: conf_cassandra-env.sh.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-06-03 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855246#comment-16855246
 ] 

mck commented on CASSANDRA-14305:
-

Thanks for spotting and raising that [~samt]. I am looking into it.

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
> Fix For: 3.11.5, 4.0
>
> Attachments: conf_cassandra-env.sh.patch
>
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-05-27 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848873#comment-16848873
 ] 

mck edited comment on CASSANDRA-14812 at 5/27/19 12:01 PM:
---

[~benedict], I have reviewed the patch and tested the python repro script on 
3.0.18 and 3.11.4: it works with, and fails without, the patch applied.

I'm not competent in this area, but I am jumping in to help as we too are 
seeing users unable to upgrade because of this fault.

Review questions/points are: 
 - is there a way to replicate the test for the CQL equivalent? While this bug 
does not impact CQL, it is my understanding that CQL queries with `IN` clauses 
will still be going through this code path… I've attached the repro script 
rewritten for CQL, is it applicable? Should it be added as a dtest? (i 
don't think so but double-checking)
 - I understand overriding {{`filter(..)`}} for the NONE impl, although at 
first it is not intuitive that  {{`DataLimits.NONE`}} is also used in thrift 
queries…
 - fyi the circleci results are here: 
https://circleci.com/workflow-run/3dd0d7f3-fa79-4118-80d8-247e85db40ea ; are 
these failures of concern?
 - {{"The branch I have uploaded also has a back port of CASSANDRA-14821"}}. I 
am confused… where is this?
 - a rebased commit for the 3.0 branch is here 
[mck/cassandra-3.0_14812|https://github.com/thelastpickle/cassandra/commits/mck/cassandra-3.0_14812]
 - the change in {{BasePartitions}} and the interactions from different 
{{StoppingTransformation}} subclasses is a bit harder to grok… It makes sense 
that the {{while}} loop does not need to continue in the situation where 
{{stop}} has "leaked" and not been signalled, but where 
{{stopChild.isSignalled}} was. But not returning false in that same situation 
seems odd…? Do you want me to test the different cql interactions here (per 
partition, grouping, paging)?




was (Author: michaelsembwever):
[~benedict], I have reviewed the patch and tested the python reproducible on 
3.0.18 and 3.11.4, working with and failing without the patch applied.

I'm not competent on this area, but I am jumping in to help as we too are 
seeing users unable to upgrade because of this fault.

Review questions/points are: 
 - is there a way to replicate the test for the CQL equivalent? While this bug 
does not impact CQL it is my understanding that CQL queries with `IN` clauses 
will still be going through this code path… I've attached the reproducible 
script rewritten for CQL, is it applicable? Should it be added as a dtest? (i 
don't think so but double-checking)
 - I understand overriding {{`filter(..)`}} for the NONE impl, although at 
first it is not intuitive that  {{`DataLimits.NONE`}} is also used in thrift 
queries…
 - fyi the circleci results are here: 
https://circleci.com/workflow-run/3dd0d7f3-fa79-4118-80d8-247e85db40ea ; are 
these failures of concern?
 - {{"The branch I have uploaded also has a back port of CASSANDRA-14821"}}. I 
am confused… where is this?
 - a rebased commit for the 3.0 branch is here 
[mck/cassandra-3.0_14812|https://github.com/thelastpickle/cassandra/commits/mck/cassandra-3.0_14812]
 - the change in {{BasePartitions}} and the interactions from different 
{{StoppingTransformation}} subclasses is a bit harder to grok… It makes that 
the {{while}} loop does not need to continue in the situation where, {{stop}} 
has "leaked" and not been signalled, but where {{stopChild.isSignalled}} was. 
But not returning false in that same situation seems odd…? Do you want me to 
test the different cql interactions here (per partition, grouping, paging)?



> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, 
> small_repro_script.py, small_repro_script_cql.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 

[jira] [Updated] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-05-27 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14812:

Attachment: small_repro_script_cql.py

> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, 
> small_repro_script.py, small_repro_script_cql.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 
> [refactoring of iterator transformation 
> hierarchy|https://github.com/apache/cassandra/commit/609497471441273367013c09a1e0e1c990726ec7]
>  related to CASSANDRA-9975 triggers incorrect behaviour.
> When the concatenated iterator is returned from 
> [StorageProxy.fetchRows(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/service/StorageProxy.java#L1770],
>  Cassandra starts to consume this combined iterator. Because of the 
> {{DigestMismatch}} exception, some elements of this combined iterator contain 
> an additional {{ThriftCounter}} that was added during 
> [DataResolver.resolve(...)|https://github.com/apache/cassandra/blob/ee9e06b5a75c0be954694b191ea4170456015b98/src/java/org/apache/cassandra/service/reads/DataResolver.java#L120]
>  execution. While consuming the iterator for many partitions, Cassandra calls the 
> [BaseIterator.tryGetMoreContents(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/db/transform/BaseIterator.java#L115]
>  method, which must switch from one partition iterator to another when the 
> former is exhausted. In this case all Transformations contained in the 
> next iterator are applied to the combined BaseIterator that enumerates the 
> partition sequence, which is wrong. This behaviour causes BaseIterator to 
> stop enumerating after it fully consumes the partition with the 
> {{DigestMismatch}} error, because that partition's iterator has the 
> additional {{ThriftCounter}} data limit.
> The attachment contains the python2 script [^small_repro_script.py] that 
> reproduces this bug within a 3-node ccmlib-controlled cluster. Also, there is 
> an extended version of this script - [^repro_script.py] - that contains more 
> logging information and provides the ability to test behavior for many 
> Cassandra versions (to run all test cases from repro_script.py you can call 
> {{python -m unittest2 -v repro_script.ThriftMultigetTestCase}}). All the 
> necessary dependencies are contained in [^requirements.txt].
>  
> This bug is critical in our production environment because we can't permit 
> any data to be skipped.
> Any ideas about a patch for this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-05-27 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848873#comment-16848873
 ] 

mck edited comment on CASSANDRA-14812 at 5/27/19 12:00 PM:
---

[~benedict], I have reviewed the patch and tested the python reproducible on 
3.0.18 and 3.11.4, working with and failing without the patch applied.

I'm not competent on this area, but I am jumping in to help as we too are 
seeing users unable to upgrade because of this fault.

Review questions/points are: 
 - is there a way to replicate the test for the CQL equivalent? While this bug 
does not impact CQL it is my understanding that CQL queries with `IN` clauses 
will still be going through this code path… I've attached the reproducible 
script rewritten for CQL, is it applicable? Should it be added as a dtest? (i 
don't think so but double-checking)
 - I understand overriding {{`filter(..)`}} for the NONE impl, although at 
first it is not intuitive that  {{`DataLimits.NONE`}} is also used in thrift 
queries…
 - fyi the circleci results are here: 
https://circleci.com/workflow-run/3dd0d7f3-fa79-4118-80d8-247e85db40ea ; are 
these failures of concern?
 - {{"The branch I have uploaded also has a back port of CASSANDRA-14821"}}. I 
am confused… where is this?
 - a rebased commit for the 3.0 branch is here 
[mck/cassandra-3.0_14812|https://github.com/thelastpickle/cassandra/commits/mck/cassandra-3.0_14812]
 - the change in {{BasePartitions}} and the interactions from different 
{{StoppingTransformation}} subclasses is a bit harder to grok… It makes that 
the {{while}} loop does not need to continue in the situation where, {{stop}} 
has "leaked" and not been signalled, but where {{stopChild.isSignalled}} was. 
But not returning false in that same situation seems odd…? Do you want me to 
test the different cql interactions here (per partition, grouping, paging)?




was (Author: michaelsembwever):
[~benedict], I have reviewed the patch and tested the python reproducible on 
3.0.18 and 3.11.4, working with and failing without the patch applied.

I'm not competent on this area, but I am jumping in to help as we too are 
seeing users unable to upgrade because of this fault.

Review questions/points are: 
 - is there a way to replicate the test for the CQL equivalent? While this bug 
does not impact CQL it is my understanding that CQL queries with `IN` clauses 
will still be going through this code path… I've attached the reproducible 
script rewritten for CQL, is it applicable? Should it be added as a dtest? 
 - I understand overriding {{`filter(..)`}} for the NONE impl, although at 
first it is not intuitive that  {{`DataLimits.NONE`}} is also used in thrift 
queries…
 - fyi the circleci results are here: 
https://circleci.com/workflow-run/3dd0d7f3-fa79-4118-80d8-247e85db40ea ; are 
these failures of concern?
 - {{"The branch I have uploaded also has a back port of CASSANDRA-14821"}}. I 
am confused… where is this?
 - a rebased commit for the 3.0 branch is here 
[mck/cassandra-3.0_14812|https://github.com/thelastpickle/cassandra/commits/mck/cassandra-3.0_14812]
 - the change in {{BasePartitions}} and the interactions from different 
{{StoppingTransformation}} subclasses is a bit harder to grok… It makes that 
the {{while}} loop does not need to continue in the situation where, {{stop}} 
has "leaked" and not been signalled, but where {{stopChild.isSignalled}} was. 
But not returning false in that same situation seems odd…? Do you want me to 
test the different cql interactions here (per partition, grouping, paging)?



> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, 
> small_repro_script.py, small_repro_script_cql.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 
> [refactoring of iterator transformation 
> 

[jira] [Comment Edited] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-05-27 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848873#comment-16848873
 ] 

mck edited comment on CASSANDRA-14812 at 5/27/19 11:59 AM:
---

[~benedict], I have reviewed the patch and tested the python reproducible on 
3.0.18 and 3.11.4, working with and failing without the patch applied.

I'm not competent on this area, but I am jumping in to help as we too are 
seeing users unable to upgrade because of this fault.

Review questions/points are: 
 - is there a way to replicate the test for the CQL equivalent? While this bug 
does not impact CQL it is my understanding that CQL queries with `IN` clauses 
will still be going through this code path… I've attached the reproducible 
script rewritten for CQL, is it applicable? Should it be added as a dtest? 
 - I understand overriding {{`filter(..)`}} for the NONE impl, although at 
first it is not intuitive that  {{`DataLimits.NONE`}} is also used in thrift 
queries…
 - fyi the circleci results are here: 
https://circleci.com/workflow-run/3dd0d7f3-fa79-4118-80d8-247e85db40ea ; are 
these failures of concern?
 - {{"The branch I have uploaded also has a back port of CASSANDRA-14821"}}. I 
am confused… where is this?
 - a rebased commit for the 3.0 branch is here 
[mck/cassandra-3.0_14812|https://github.com/thelastpickle/cassandra/commits/mck/cassandra-3.0_14812]
 - the change in {{BasePartitions}} and the interactions from different 
{{StoppingTransformation}} subclasses is a bit harder to grok… It makes that 
the {{while}} loop does not need to continue in the situation where, {{stop}} 
has "leaked" and not been signalled, but where {{stopChild.isSignalled}} was. 
But not returning false in that same situation seems odd…? Do you want me to 
test the different cql interactions here (per partition, grouping, paging)?




was (Author: michaelsembwever):
[~benedict], I have reviewed the patch and tested the python reproducible on 
3.0.18 and 3.11.4, working with and failing without the patch applied.

I'm not competent on this area, but I am jumping in to help as we too are 
seeing users unable to upgrade because of this fault.

Review questions/points are: 
 - is there a way to replicate the test for the CQL equivalent? While this bug 
does not impact CQL it is my understanding that CQL queries with `IN` clauses 
will still be going through this code path… I've attached the reproducible 
script rewritten for CQL, is it applicable? Should it be added as a dtest? XXX
 - I understand overriding {{`filter(..)`}} for the NONE impl, although at 
first it is not intuitive that  {{`DataLimits.NONE`}} is also used in thrift 
queries…
 - fyi the circleci results are here: 
https://circleci.com/workflow-run/3dd0d7f3-fa79-4118-80d8-247e85db40ea ; are 
these failures of concern?
 - {{"The branch I have uploaded also has a back port of CASSANDRA-14821"}}. I 
am confused… where is this?
 - a rebased commit for the 3.0 branch is here 
[mck/cassandra-3.0_14812|https://github.com/thelastpickle/cassandra/commits/mck/cassandra-3.0_14812]
 - the change in {{BasePartitions}} and the interactions from different 
{{StoppingTransformation}} subclasses is a bit harder to grok… It makes that 
the {{while}} loop does not need to continue in the situation where, {{stop}} 
has "leaked" and not been signalled, but where {{stopChild.isSignalled}} was. 
But not returning false in that same situation seems odd…? Do you want me to 
test the different cql interactions here (per partition, grouping, paging)?



> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, small_repro_script.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in {{multiget}} 
> Thrift query processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like 
> [refactoring of iterator transformation 
> 

[jira] [Commented] (CASSANDRA-14812) Multiget Thrift query returns null records after digest mismatch

2019-05-27 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848873#comment-16848873
 ] 

mck commented on CASSANDRA-14812:
-

[~benedict], I have reviewed the patch and tested the python reproducible on 
3.0.18 and 3.11.4, working with and failing without the patch applied.

I'm not competent on this area, but I am jumping in to help as we too are 
seeing users unable to upgrade because of this fault.

Review questions/points are: 
 - is there a way to replicate the test for the CQL equivalent? While this bug 
does not impact CQL it is my understanding that CQL queries with `IN` clauses 
will still be going through this code path… I've attached the reproducible 
script rewritten for CQL, is it applicable? Should it be added as a dtest? XXX
 - I understand overriding {{`filter(..)`}} for the NONE impl, although at 
first it is not intuitive that  {{`DataLimits.NONE`}} is also used in thrift 
queries…
 - fyi the circleci results are here: 
https://circleci.com/workflow-run/3dd0d7f3-fa79-4118-80d8-247e85db40ea ; are 
these failures of concern?
 - {{"The branch I have uploaded also has a back port of CASSANDRA-14821"}}. I 
am confused… where is this?
 - a rebased commit for the 3.0 branch is here 
[mck/cassandra-3.0_14812|https://github.com/thelastpickle/cassandra/commits/mck/cassandra-3.0_14812]
 - the change in {{BasePartitions}} and the interactions from different 
{{StoppingTransformation}} subclasses is a bit harder to grok… It makes that 
the {{while}} loop does not need to continue in the situation where, {{stop}} 
has "leaked" and not been signalled, but where {{stopChild.isSignalled}} was. 
But not returning false in that same situation seems odd…? Do you want me to 
test the different cql interactions here (per partition, grouping, paging)?



> Multiget Thrift query returns null records after digest mismatch
> 
>
> Key: CASSANDRA-14812
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14812
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Messaging/Thrift
>Reporter: Sivukhin Nikita
>Assignee: Benedict
>Priority: Urgent
> Fix For: 3.0.x, 3.11.x
>
> Attachments: repro_script.py, requirements.txt, small_repro_script.py
>
>
> It seems that in Cassandra 3.0.0 a nasty bug was introduced in the {{multiget}} 
> Thrift query-processing logic. When one tries to read data from several 
> partitions with a single {{multiget}} query and a {{DigestMismatch}} exception 
> is raised during this query processing, the request coordinator prematurely 
> terminates the response stream right at the point where the first 
> {{DigestMismatch}} error occurs. This leads to a situation where clients 
> "do not see" some data contained in the database.
> We managed to reproduce this bug in all versions of Cassandra starting with 
> v3.0.0. The pre-release version 3.0.0-rc2 works correctly. It looks like the 
> [refactoring of iterator transformation 
> hierarchy|https://github.com/apache/cassandra/commit/609497471441273367013c09a1e0e1c990726ec7]
>  related to CASSANDRA-9975 triggers the incorrect behaviour.
> When the concatenated iterator is returned from 
> [StorageProxy.fetchRows(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/service/StorageProxy.java#L1770],
>  Cassandra starts to consume this combined iterator. Because of the 
> {{DigestMismatch}} exception, some elements of this combined iterator contain 
> an additional {{ThriftCounter}} that was added during 
> [DataResolver.resolve(...)|https://github.com/apache/cassandra/blob/ee9e06b5a75c0be954694b191ea4170456015b98/src/java/org/apache/cassandra/service/reads/DataResolver.java#L120]
>  execution. While consuming the iterator for many partitions, Cassandra calls 
> the 
> [BaseIterator.tryGetMoreContents(...)|https://github.com/apache/cassandra/blob/a05785d82c621c9cd04d8a064c38fd2012ef981c/src/java/org/apache/cassandra/db/transform/BaseIterator.java#L115]
>  method, which must switch from one partition iterator to the next when the 
> former is exhausted. In this case all Transformations contained in the next 
> iterator are applied to the combined BaseIterator that enumerates the 
> partition sequence, which is wrong. This behaviour causes the BaseIterator to 
> stop enumeration after it fully consumes the partition with the 
> {{DigestMismatch}} error, because that partition's iterator has an additional 
> {{ThriftCounter}} data limit.
> The attachment contains the python2 script [^small_repro_script.py] that 
> reproduces this bug within a 3-node, ccmlib-controlled cluster. Also, there is 
> an extended version of this script - [^repro_script.py] - that contains more 
> logging information and provides the ability to test behavior 
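
The failure mode described above can be modelled in miniature. The sketch 
below is *not* the real {{BaseIterator}}/{{ThriftCounter}} code — all names 
are invented — and it assumes only the essence of the report: a per-source 
data limit attached to one source iterator is wrongly adopted as a stop 
condition for the whole concatenated iterator, so sources after it are 
silently dropped.

{code:java}
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ConcatBugDemo
{
    /** A source that may carry its own row limit, as the digest-mismatch
     *  partition carries its extra counter. */
    static class Source implements Iterator<String>
    {
        final Iterator<String> rows;
        final int limit;   // -1 means "no limit"
        int returned;

        Source(List<String> rows, int limit) { this.rows = rows.iterator(); this.limit = limit; }

        boolean limitReached() { return limit >= 0 && returned >= limit; }
        public boolean hasNext() { return !limitReached() && rows.hasNext(); }
        public String next() { returned++; return rows.next(); }
    }

    /** Buggy concatenation: on switching source it also adopts the new
     *  source's limit as a stop condition for the whole enumeration. */
    static class BuggyConcat implements Iterator<String>
    {
        final Iterator<Source> sources;
        Source current;
        Source adoptedLimit;   // wrongly applied globally

        BuggyConcat(List<Source> list) { sources = list.iterator(); current = sources.next(); }

        public boolean hasNext()
        {
            if (adoptedLimit != null && adoptedLimit.limitReached())
                return false;              // stops everything, not just one source
            while (!current.hasNext())
            {
                if (!sources.hasNext())
                    return false;
                current = sources.next();
                adoptedLimit = current;    // the bug: a per-source limit leaks up
            }
            return true;
        }

        public String next() { return current.next(); }
    }

    public static void main(String[] args)
    {
        Source a = new Source(Arrays.asList("a1", "a2"), -1);
        Source b = new Source(Arrays.asList("b1"), 1);  // the "mismatch" partition
        Source c = new Source(Arrays.asList("c1", "c2"), -1);

        Iterator<String> it = new BuggyConcat(Arrays.asList(a, b, c));
        while (it.hasNext())
            System.out.println(it.next());
        // prints a1 a2 b1 and stops: partition c is silently dropped
    }
}
{code}

Dropping {{adoptedLimit}} — applying each source's limit only to that source — 
restores the partitions after the limited one; the real fix is of course 
against the actual transformation hierarchy.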

[jira] [Updated] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-05-26 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14305:

  Fix Version/s: (was: 4.0.x)
 (was: 3.11.x)
 4.0
 3.11.5
Source Control Link: 
https://github.com/apache/cassandra/commit/60bdfb1731d2bc0d63720d52d0f64c4d88791f33
 Status: Resolved  (was: Ready to Commit)
 Resolution: Fixed

Committed as 60bdfb1731d2bc0d63720d52d0f64c4d88791f33

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
> Fix For: 3.11.5, 4.0
>
> Attachments: conf_cassandra-env.sh.patch
>
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.
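
For readers following along, a minimal sketch of the convention the ticket 
asks for — assuming the usual cassandra-env.sh shape where JVM flags are 
appended to JVM_OPTS; treat it as illustrative, not as the committed patch:

{code:bash}
# cassandra-env.sh: reference the config path via CASSANDRA_CONF uniformly,
# rather than hard-coding $CASSANDRA_HOME/conf
JVM_OPTS="$JVM_OPTS -Djava.security.auth.login.config=$CASSANDRA_CONF/cassandra-jaas.config"
{code}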



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-05-26 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14305:

Status: Ready to Commit  (was: Review In Progress)

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
> Fix For: 3.11.x, 4.0.x
>
> Attachments: conf_cassandra-env.sh.patch
>
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-05-26 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-14305:

Fix Version/s: (was: 3.0.x)

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
> Fix For: 3.11.x, 4.0.x
>
> Attachments: conf_cassandra-env.sh.patch
>
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-14305) Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh

2019-05-26 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck reassigned CASSANDRA-14305:
---

Assignee: Angelo Polo

> Use $CASSANDRA_CONF not $CASSANDRA_HOME/conf in cassandra-env.sh 
> -
>
> Key: CASSANDRA-14305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Low
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: conf_cassandra-env.sh.patch
>
>
> CASSANDRA_CONF should be used uniformly in conf/cassandra-env.sh to reference 
> the configuration path. Currently, jaas users will have to modify the default 
> path provided for cassandra-jaas.config if their $CASSANDRA_CONF differs from 
> $CASSANDRA_HOME/conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15090) Customize cassandra log directory

2019-05-26 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15090:

Fix Version/s: (was: 4.0.x)
   (was: 3.11.x)
   (was: 3.0.x)
   4.0
   3.11.5
   3.0.19

> Customize cassandra log directory
> -
>
> Key: CASSANDRA-15090
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15090
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Normal
> Fix For: 3.0.19, 3.11.5, 4.0
>
> Attachments: CASSANDRA-15090-v1.patch
>
>
> Add a new variable CASSANDRA_LOG_DIR (default: $CASSANDRA_HOME/logs) so that 
> we can customize the log directory, such as ‘/var/log/cassandra’.
>  
>  
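
A minimal sketch of what such a variable could look like in the startup 
scripts — assuming the {{cassandra.logdir}} system property that the logback 
configuration conventionally reads (an assumption here, not taken from the 
attached patch); illustrative only:

{code:bash}
# default to the old location when the operator has not set it
CASSANDRA_LOG_DIR="${CASSANDRA_LOG_DIR:-$CASSANDRA_HOME/logs}"
JVM_OPTS="$JVM_OPTS -Dcassandra.logdir=$CASSANDRA_LOG_DIR"
{code}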



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15090) Customize cassandra log directory

2019-05-26 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848556#comment-16848556
 ] 

mck commented on CASSANDRA-15090:
-

Committed as ac10b817313c2260fb1889e025fd719d076f7a72

> Customize cassandra log directory
> -
>
> Key: CASSANDRA-15090
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15090
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: CASSANDRA-15090-v1.patch
>
>
> Add a new variable CASSANDRA_LOG_DIR (default: $CASSANDRA_HOME/logs) so that 
> we can customize the log directory, such as ‘/var/log/cassandra’.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15090) Customize cassandra log directory

2019-05-26 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15090:

Source Control Link: 
https://github.com/apache/cassandra/commit/ac10b817313c2260fb1889e025fd719d076f7a72
 Status: Resolved  (was: Ready to Commit)
 Resolution: Fixed

> Customize cassandra log directory
> -
>
> Key: CASSANDRA-15090
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15090
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: CASSANDRA-15090-v1.patch
>
>
> Add a new variable CASSANDRA_LOG_DIR (default: $CASSANDRA_HOME/logs) so that 
> we can customize the log directory, such as ‘/var/log/cassandra’.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15090) Customize cassandra log directory

2019-05-26 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15090:

Status: Ready to Commit  (was: Review In Progress)

> Customize cassandra log directory
> -
>
> Key: CASSANDRA-15090
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15090
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: CASSANDRA-15090-v1.patch
>
>
> Add a new variable CASSANDRA_LOG_DIR (default: $CASSANDRA_HOME/logs) so that 
> we can customize the log directory, such as ‘/var/log/cassandra’.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15090) Customize cassandra log directory

2019-05-26 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-15090:

Fix Version/s: 4.0.x
   3.11.x
   3.0.x

> Customize cassandra log directory
> -
>
> Key: CASSANDRA-15090
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15090
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: CASSANDRA-15090-v1.patch
>
>
> Add a new variable CASSANDRA_LOG_DIR (default: $CASSANDRA_HOME/logs) so that 
> we can customize the log directory, such as ‘/var/log/cassandra’.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org


