[jira] [Updated] (CASSANDRA-15374) New cassandra node not able to see all the nodes in other DCs

2019-10-23 Thread venky (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venky updated CASSANDRA-15374:
--
 Bug Category: Parent values: Correctness(12982)
   Complexity: Normal
  Component/s: Cluster/Membership
Discovered By: User Report
 Severity: Normal
   Status: Open  (was: Triage Needed)

> New cassandra node not able to see all the nodes in other DCs
> -
>
> Key: CASSANDRA-15374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15374
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Membership
>Reporter: venky
>Priority: Normal
>
> We were adding a new DC to an existing cluster. We first started one seed 
> node, and we observed that after coming up this node did not show all the 
> other nodes in the other DCs. We checked the connectivity between this node 
> and the nodes in the other DCs that were not appearing in nodetool status; 
> it was all fine. On restarting, the missing nodes start appearing, i.e. if 
> we start with 175 nodes in nodetool status against the expected 190 nodes, 
> the first restart brings it to 180 nodes, the second to 183, the third to 
> 185, and so on until all 190 nodes appear in nodetool status. This clearly 
> indicates there is no network issue, as the nodes appear one by one after 
> each restart.
>  
> We are able to reproduce this issue multiple times by adding new nodes in a 
> new DC. It looks like a bug to us, hence this report. We observed this in 
> 3.11.0 and 3.11.4.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15374) New cassandra node not able to see all the nodes in other DCs

2019-10-23 Thread venky (Jira)
venky created CASSANDRA-15374:
-

 Summary: New cassandra node not able to see all the nodes in other 
DCs
 Key: CASSANDRA-15374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15374
 Project: Cassandra
  Issue Type: Bug
Reporter: venky


We were adding a new DC to an existing cluster. We first started one seed node, 
and we observed that after coming up this node did not show all the other nodes 
in the other DCs. We checked the connectivity between this node and the nodes in 
the other DCs that were not appearing in nodetool status; it was all fine. On 
restarting, the missing nodes start appearing, i.e. if we start with 175 nodes 
in nodetool status against the expected 190 nodes, the first restart brings it 
to 180 nodes, the second to 183, the third to 185, and so on until all 190 
nodes appear in nodetool status. This clearly indicates there is no network 
issue, as the nodes appear one by one after each restart.

 

We are able to reproduce this issue multiple times by adding new nodes in a new 
DC. It looks like a bug to us, hence this report. We observed this in 3.11.0 
and 3.11.4.






[jira] [Updated] (CASSANDRA-15373) validate value sizes in LegacyLayout

2019-10-23 Thread Blake Eggleston (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-15373:

Test and Documentation Plan: circleci
 Status: Patch Available  (was: Open)

|[3.0|https://github.com/bdeggleston/cassandra/tree/15373-3.0]|[tests|https://circleci.com/gh/bdeggleston/cassandra/tree/15373-3.0]|
|[3.11|https://github.com/bdeggleston/cassandra/tree/15373-3.11]|[tests|https://circleci.com/gh/bdeggleston/cassandra/tree/15373-3.11]|

> validate value sizes in LegacyLayout
> 
>
> Key: CASSANDRA-15373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15373
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Normal
> Fix For: 3.0.19, 3.11.5, 4.0
>
>
> In 2.1, all values are serialized as variable-length blobs: a length prefix 
> followed by the actual value, even for fixed-width types like int32. The 3.0 
> storage engine, on the other hand, omits the length prefix for fixed-width 
> types. Since the lengths of fixed-width values are not validated on the 3.0 
> write path, writing data for a fixed-width type from an incorrectly sized 
> byte buffer will overflow or underflow the space allocated for it, 
> corrupting the remainder of that partition or indexed region and preventing 
> it from being read. This is not discovered until we attempt to read the 
> corrupted value. This patch updates LegacyLayout to throw a marshal 
> exception if it encounters an unexpected value size for fixed-size columns.






[jira] [Updated] (CASSANDRA-15373) validate value sizes in LegacyLayout

2019-10-23 Thread Blake Eggleston (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-15373:

 Bug Category: Parent values: Correctness(12982), Level 1 values: 
Unrecoverable Corruption / Loss(13161)
   Complexity: Normal
Discovered By: User Report
Fix Version/s: 4.0
   3.11.5
   3.0.19
 Severity: Normal
   Status: Open  (was: Triage Needed)

> validate value sizes in LegacyLayout
> 
>
> Key: CASSANDRA-15373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15373
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Normal
> Fix For: 3.0.19, 3.11.5, 4.0
>
>
> In 2.1, all values are serialized as variable-length blobs: a length prefix 
> followed by the actual value, even for fixed-width types like int32. The 3.0 
> storage engine, on the other hand, omits the length prefix for fixed-width 
> types. Since the lengths of fixed-width values are not validated on the 3.0 
> write path, writing data for a fixed-width type from an incorrectly sized 
> byte buffer will overflow or underflow the space allocated for it, 
> corrupting the remainder of that partition or indexed region and preventing 
> it from being read. This is not discovered until we attempt to read the 
> corrupted value. This patch updates LegacyLayout to throw a marshal 
> exception if it encounters an unexpected value size for fixed-size columns.






[jira] [Created] (CASSANDRA-15373) validate value sizes in LegacyLayout

2019-10-23 Thread Blake Eggleston (Jira)
Blake Eggleston created CASSANDRA-15373:
---

 Summary: validate value sizes in LegacyLayout
 Key: CASSANDRA-15373
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15373
 Project: Cassandra
  Issue Type: Bug
  Components: Legacy/Local Write-Read Paths
Reporter: Blake Eggleston
Assignee: Blake Eggleston


In 2.1, all values are serialized as variable-length blobs: a length prefix 
followed by the actual value, even for fixed-width types like int32. The 3.0 
storage engine, on the other hand, omits the length prefix for fixed-width 
types. Since the lengths of fixed-width values are not validated on the 3.0 
write path, writing data for a fixed-width type from an incorrectly sized byte 
buffer will overflow or underflow the space allocated for it, corrupting the 
remainder of that partition or indexed region and preventing it from being 
read. This is not discovered until we attempt to read the corrupted value. 
This patch updates LegacyLayout to throw a marshal exception if it encounters 
an unexpected value size for fixed-size columns.
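For readers unfamiliar with the write path, a minimal sketch of the kind of check 
described above (editorial illustration only, not the committed patch; the class, 
constant, and exception names below are invented stand-ins for the real type 
metadata and exception class):

{code:java}
import java.nio.ByteBuffer;

// Editorial sketch, not Cassandra's actual LegacyLayout code: the fix amounts to
// comparing a value's serialized length against the type's fixed width before
// accepting it, instead of discovering the corruption at read time.
public class FixedWidthValidationSketch
{
    static class MarshalException extends RuntimeException
    {
        MarshalException(String message) { super(message); }
    }

    // Width in bytes of a hypothetical fixed-width type, e.g. 4 for int32.
    static final int FIXED_WIDTH = 4;

    static void validateFixedWidthValue(ByteBuffer value)
    {
        if (value.remaining() != FIXED_WIDTH)
            throw new MarshalException("expected " + FIXED_WIDTH + " bytes for fixed-width value, got " + value.remaining());
    }

    public static void main(String[] args)
    {
        validateFixedWidthValue(ByteBuffer.allocate(4)); // fine
        validateFixedWidthValue(ByteBuffer.allocate(7)); // throws: would otherwise corrupt the surrounding data
    }
}
{code}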






[jira] [Commented] (CASSANDRA-15277) Make it possible to resize concurrent read / write thread pools at runtime

2019-10-23 Thread Jon Meredith (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16958229#comment-16958229
 ] 

Jon Meredith commented on CASSANDRA-15277:
--

[~ifesdjeen] I've rebased and cleaned things up and pushed to a new branch. 
There's no longer a dependency on merging CASSANDRA-15227.

Branch: [https://github.com/jonmeredith/cassandra/tree/CASSANDRA-15277-v3]

GitHub PR: [https://github.com/apache/cassandra/pull/369]

CircleCI run: 
https://circleci.com/workflow-run/1102a711-b347-4370-8bcc-fc9f7e326b32

> Make it possible to resize concurrent read / write thread pools at runtime
> --
>
> Key: CASSANDRA-15277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15277
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Other
>Reporter: Jon Meredith
>Assignee: Jon Meredith
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> To better mitigate cluster overload, the executor services for the various 
> stages should be configurable at runtime (probably as a JMX hot property). 
> Related to CASSANDRA-5044, this would add the capability to resize the 
> multiThreadedLowSignalStage pools based on SEPExecutor.
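For context, a minimal plain-JDK sketch of the underlying idea (not the linked 
patch, which targets Cassandra's SEPExecutor-based stages and exposes the setting 
over JMX): the JDK's {{ThreadPoolExecutor}} can already be resized while running, 
and the patch wires that kind of resize into the read/write stages.

{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Editorial sketch of resizing a running thread pool; the numbers are made up.
public class ResizePoolSketch
{
    public static void main(String[] args) throws InterruptedException
    {
        ThreadPoolExecutor pool =
            new ThreadPoolExecutor(8, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        System.out.println("core=" + pool.getCorePoolSize());

        // Resize at runtime; adjust max before core when growing so core <= max always holds.
        int newSize = 32;
        if (newSize > pool.getMaximumPoolSize())
        {
            pool.setMaximumPoolSize(newSize);
            pool.setCorePoolSize(newSize);
        }
        else
        {
            pool.setCorePoolSize(newSize);
            pool.setMaximumPoolSize(newSize);
        }
        System.out.println("core=" + pool.getCorePoolSize());

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
{code}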






[jira] [Created] (CASSANDRA-15372) Add Jon Haddad's GPG key to KEYS file

2019-10-23 Thread Jon Haddad (Jira)
Jon Haddad created CASSANDRA-15372:
--

 Summary: Add Jon Haddad's GPG key to KEYS file
 Key: CASSANDRA-15372
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15372
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jon Haddad
Assignee: Jon Haddad


Following up on CASSANDRA-15360 and the mailing list discussion, I'll add my GPG 
key to the KEYS file.

References:
 - [https://www.apache.org/dev/release-signing#keys-policy]
 - [http://www.apache.org/legal/release-policy.html]
 - [dev ML thread "Improving our frequency of (patch) releases, and letting 
committers make 
releases"|https://lists.apache.org/thread.html/660ed8c73e7b79afa610f3e45f37914ef43a4358a85a99c8b4b0288a@%3Cdev.cassandra.apache.org%3E]






[jira] [Updated] (CASSANDRA-15360) add mick's gpg key to project's KEYS file

2019-10-23 Thread Jon Haddad (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-15360:
---
  Fix Version/s: 3.11.x
Source Control Link: 
https://dist.apache.org/repos/dist/release/cassandra/KEYS rev 36452
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> add mick's gpg key to project's KEYS file
> -
>
> Key: CASSANDRA-15360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15360
> Project: Cassandra
>  Issue Type: Task
>  Components: Packaging
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 3.11.x
>
> Attachments: 15360.patch
>
>
> Currently only four individuals have their keys in the project's KEYS file, 
> and only one of these people is taking on the release manager role.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint 
>  ABCD 3108 336F 7CC6 567E  769F FDD3 B769 B21C 125C
> References:
>  - https://www.apache.org/dev/release-signing#keys-policy
>  - http://www.apache.org/legal/release-policy.html
>  - [dev ML thread "Improving our frequency of (patch) releases, and letting 
> committers make 
> releases"|https://lists.apache.org/thread.html/660ed8c73e7b79afa610f3e45f37914ef43a4358a85a99c8b4b0288a@]






[jira] [Commented] (CASSANDRA-15360) add mick's gpg key to project's KEYS file

2019-10-23 Thread Jon Haddad (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957906#comment-16957906
 ] 

Jon Haddad commented on CASSANDRA-15360:


Committed revision 36452.

> add mick's gpg key to project's KEYS file
> -
>
> Key: CASSANDRA-15360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15360
> Project: Cassandra
>  Issue Type: Task
>  Components: Packaging
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Attachments: 15360.patch
>
>
> Currently only four individuals have their keys in the project's KEYS file, 
> and only one of these people is taking on the release manager role.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint 
>  ABCD 3108 336F 7CC6 567E  769F FDD3 B769 B21C 125C
> References:
>  - https://www.apache.org/dev/release-signing#keys-policy
>  - http://www.apache.org/legal/release-policy.html
>  - [dev ML thread "Improving our frequency of (patch) releases, and letting 
> committers make 
> releases"|https://lists.apache.org/thread.html/660ed8c73e7b79afa610f3e45f37914ef43a4358a85a99c8b4b0288a@]






svn commit: r36452 - /release/cassandra/KEYS

2019-10-23 Thread rustyrazorblade
Author: rustyrazorblade
Date: Wed Oct 23 14:28:19 2019
New Revision: 36452

Log:
Added Mick's key for releases

Modified:
release/cassandra/KEYS

Modified: release/cassandra/KEYS
==
--- release/cassandra/KEYS (original)
+++ release/cassandra/KEYS Wed Oct 23 14:28:19 2019
@@ -3864,3 +3864,359 @@ iKh4wsFPQGBh9ssAC3lQrs6T7ccqnRoO6xsmL+Y2
 gbFPnWvcHSSFnKg=
 =GW0U
 -END PGP PUBLIC KEY BLOCK-
+pub   dsa3072 2010-04-26 [SC]
+  ABCD3108336F7CC6567E769FFDD3B769B21C125C
+uid   [ultimate] Mick Semb Wever 
+sig 3FDD3B769B21C125C 2018-06-01  Mick Semb Wever 
+sig  91D3EB78F8AEBAD3 2010-11-04  Michael Semb Wever (Java Engineer) 

+uid   [ultimate] Mick Semb Wever 
+sig 3FDD3B769B21C125C 2018-06-01  Mick Semb Wever 
+sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
+sig  91D3EB78F8AEBAD3 2010-11-04  Michael Semb Wever (Java Engineer) 

+uid   [ultimate] [jpeg image of size 4671]
+sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
+sig  91D3EB78F8AEBAD3 2010-11-04  Michael Semb Wever (Java Engineer) 

+uid   [ultimate] Mick Semb Wever 
+sig 3FDD3B769B21C125C 2018-06-01  Mick Semb Wever 
+sub   elg4096 2010-04-26 [E]
+sig  FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
+
+pub   dsa3072 2010-04-26 [SC]
+  ABCD3108336F7CC6567E769FFDD3B769B21C125C
+uid   [ultimate] Mick Semb Wever 
+sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
+uid   [ultimate] Mick Semb Wever 
+sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
+uid   [ultimate] [jpeg image of size 4671]
+sig 3FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
+sub   elg4096 2010-04-26 [E]
+sig  FDD3B769B21C125C 2010-04-26  Mick Semb Wever 
+
+-BEGIN PGP PUBLIC KEY BLOCK-
+
+mQSuBEvV7NIRDAC1ASnxXKXvnbJi6ZaNXgLiU+A9ziX5/xQy7NfnvBwu26v/Xm5g
+OnMTFIpdQBh1YZtCl4zzFdVPOCb0fYBantKYIyYUDZGtWNPJPezd9pPOMxB//O3z
+C2RhPWB2Hoc3Bjgc1IR1VVLGewX/v7+M1qqSx5D7G8QDMLguJxCuisTw45nfY52M
+EO+y0ZT/oCh4iZTbq+PIMKyLfDLnWF0zyXcFK+iMimb+DEOglBctpmpB3kR6bifx
+BlZzkzO65eiuaxFQ0xZjf0R7WYdmfY8piQRyqh/y/kn8Slk6THz0aYRca8Jf6y0L
+avszdiuCAkh4SK74l61SY/J1oXbBKWZkPMAfAxXyKwzh3Nm8eFyNnS0EBuITilvi
+/lWc6MQIoCh/bMNeGCzeoJWehBbCA1o0OxNpsqHjSffE2cM/r0NOQD1weK54osO0
+p+U0AxhXVHPUOuwqPmSUNTm05rVNCLQLvPaVt1M69MTR7bt/mfOJxrIDPvgThpX6
+5Uo4hVAewIyiH48BAOLcCRVSEy+sxKm69qPfps+amvwvNoY3fNm4esGGKIT5DACT
+MrR2lVPY/XGM0F+AOj51XgmCOGn1wmjZiXe1kMHRZBlVzFXuOfZKfZeYDMT8zK0X
+oSfBCy4OOMG2QRb4ICHiIGMRj7XDrS3MrHgJR2vnQ/ZUo+y7pb03Z5a9sMooh/Uk
+BeF9wMkIt5mbtQyxRZYBvG6e6KrlA/ViG5I9QoTjKsjUj4B5s556Po6n3IJqXpW4
+Abtc+FxjhY3SafyQG+nsVZbrsojtityXo6y6R/yTdNB+N0reIgPs5dSWFt/N7SNq
+cwLLvhbTl27cR1afNkxgh6ULVvSZ6I9KMMZIvdLD+HWZijzSi/pYnZlrrp5NRBWw
+ZwRU4TOXuPiMk4eNW4X3YMR/Z3/XqT2JtGex/x/J03YnEDVET5LAcWZ0Nt9QlVNg
+6DwG/cFCgoySszgyIsNiorVslxG62CsDRLF2p6o8lyYix7uAdqnhVEsEMQpa1Ah/
+sYE8JRcVEDTiPL3VCp3ZG/3jPt6A/KDzrEMC2t11ZcoBkBZoIBKfvRPH02yuM3cL
+/ROU5R7FMIv3mqimg+SojUmn6TWhzlDIo/K9p6v+Sj/ujpR5pzbVUGc3SMhQ+us8
+C5BKPDGMgdP155LS2/C1LnSjKQafsrqAiK9rKmLcmIHK/1WoFt04ckjmzqrIJPJd
+bZU8YYA5TTr/ZR3PPkN5n5NXF1K5nckPSVWRf6wyQhcA8Ao7iZO2uwWR07XGUItT
+/FBGPEmgh2S8GXvzFTRhVozNvvL7+WhdQa6OQe5ICM8Wc67u1wUWzuPZguCfmxyA
+LhdnVRUZAoVdbk3IFsovcnquby6vDduIt9CsNhFm2SadKb1JdxJxsgaDNUYwCSpA
+0GKnu5eYA+O8S/vYHnjZVITWr0V7qk81P92W84OKXwVkJ0QwmXkSFfbF8V/Nvjxp
+01+rz0ctndbYf4mXAyzTqv4iC6IWbJ1Zz1GH5JGw+HMx5hk1rcE8jUItDvPWt8+F
+HxO/K8ufoVx1AJEptnZToKe5QtV4EOmzLNyt9jnCPT2qPNKK0Ad4bsDsbqK4Spyz
+lrQgTWljayBTZW1iIFdldmVyIDxtY2tAYXBhY2hlLm9yZz6IkAQTEQoAOAIbAwUL
+CQgHAwUVCgkICwUWAgMBAAIeAQIXgBYhBKvNMQgzb3zGVn52n/3Tt2myHBJcBQJb
+EI9GAAoJEP3Tt2myHBJc1GoA/R3Z/qs3kYuLgIpMF/bIAFHYJErmEa1gkMTSaYvf
+sR65AP9RQqW9Niy0JCwVtlq5gXSjgcYh6Gh1/w0ehidE+Kp7Toh5BBMRCgAhBQJL
+1ezSAhsDBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEP3Tt2myHBJcf6gA/3PW
+HVINw1kWYrfQDIgMqnkFDbG2UFMP5X6FG5mEo4oKAP4jZQchjncch8abmj1lUz7c
+3aB1+rF+9j4FuHatwzcp9ohGBBARCgAGBQJM0hpoAAoJEJHT63j4rrrTgI8An2wr
+J4j6ATOeDMPNSwad3GQV3zsJAJ0QV7dgVYGjRVt+OLJcqa6Wt0yIELQlTWljayBT
+ZW1iIFdldmVyIDxtaWNrQHNlbWIud2V2ZXIub3JnPoiTBBMRCgA7AhsDBQsJCAcD
+BRUKCQgLBRYCAwEAAh4BAheAFiEEq80xCDNvfMZWfnaf/dO3abIcElwFAlsQj0YC
+GQEACgkQ/dO3abIcElyrpAD/W4Ge09788Ks6Qy+ipBewjeUaOC3SMI8To73CLQC5
+p3MBAMH7qqy6SO/s6OxmDWdVN7wjK+e9LY/PhEqb5ReMMekiiEYEEBEKAAYFAkzS
+GmsACgkQkdPrePiuutP43gCeIPrQpZHtO/Pl2Z+yHYPqColcj2gAn2YbjU5P3uTC
+S4UX2+ZYCv5ZgX5Z0dGS0ZABEAABAQAAAP/Y/+AAEEpGSUYAAQEA
+AAEAAQAA/9sAQwAIBgYHBgUIBwcHCQkICgwUDQwLCwwZEhMPFB0aHx4dGhwcICQu
+JyAiLCMcHCg3KSwwMTQ0NB8nOT04MjwuMzQy/9sAQwEJCQkMCwwYDQ0YMiEcITIy
+MjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIy
+/8AAEQgAcwCaAwEiAAIRAQMRAf/EAB8AAAEFAQEBAQEBAAABAgMEBQYH
+CAkKC//EALUQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGh
+CCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldY
+WVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1

[jira] [Comment Edited] (CASSANDRA-15360) add mick's gpg key to project's KEYS file

2019-10-23 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957902#comment-16957902
 ] 

Michael Semb Wever edited comment on CASSANDRA-15360 at 10/23/19 2:25 PM:
--

[~rustyrazorblade], I tried to commit the patch, hoping that as an ASF Member I 
would have permissions to the dist/ directory. I do not have those permissions.

As described in the ticket description's references, the PMC would need to vote 
that it's ok for committers to commit to this directory, and then I would have 
to file an INFRA ticket asking for that permission.




was (Author: michaelsembwever):
[~rustyrazorblade], I tried to commit the patch, hoping that as a ASF Member i 
would have permissions to the dist/ directory. I do not have though permissions.

As described in the ticket description's references, the PMC would need to vote 
that it's ok for committers to commit to this directory, and then I would have 
to file an INFRA ticket asking that permission.



> add mick's gpg key to project's KEYS file
> -
>
> Key: CASSANDRA-15360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15360
> Project: Cassandra
>  Issue Type: Task
>  Components: Packaging
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Attachments: 15360.patch
>
>
> Currently only four individuals have their keys in the project's KEYS file, 
> and only one of these people is taking on the release manager role.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint 
>  ABCD 3108 336F 7CC6 567E  769F FDD3 B769 B21C 125C
> References:
>  - https://www.apache.org/dev/release-signing#keys-policy
>  - http://www.apache.org/legal/release-policy.html
>  - [dev ML thread "Improving our frequency of (patch) releases, and letting 
> committers make 
> releases"|https://lists.apache.org/thread.html/660ed8c73e7b79afa610f3e45f37914ef43a4358a85a99c8b4b0288a@]






[jira] [Commented] (CASSANDRA-15360) add mick's gpg key to project's KEYS file

2019-10-23 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957902#comment-16957902
 ] 

Michael Semb Wever commented on CASSANDRA-15360:


[~rustyrazorblade], I tried to commit the patch, hoping that as an ASF Member I 
would have permissions to the dist/ directory. I do not have those permissions.

As described in the ticket description's references, the PMC would need to vote 
that it's ok for committers to commit to this directory, and then I would have 
to file an INFRA ticket asking for that permission.



> add mick's gpg key to project's KEYS file
> -
>
> Key: CASSANDRA-15360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15360
> Project: Cassandra
>  Issue Type: Task
>  Components: Packaging
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Attachments: 15360.patch
>
>
> Currently only four individuals have their keys in the project's KEYS file, 
> and only one of these people is taking on the release manager role.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint 
>  ABCD 3108 336F 7CC6 567E  769F FDD3 B769 B21C 125C
> References:
>  - https://www.apache.org/dev/release-signing#keys-policy
>  - http://www.apache.org/legal/release-policy.html
>  - [dev ML thread "Improving our frequency of (patch) releases, and letting 
> committers make 
> releases"|https://lists.apache.org/thread.html/660ed8c73e7b79afa610f3e45f37914ef43a4358a85a99c8b4b0288a@]






[jira] [Updated] (CASSANDRA-15367) Memtable memory allocations may deadlock

2019-10-23 Thread Benedict Elliott Smith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict Elliott Smith updated CASSANDRA-15367:
---
Fix Version/s: 3.11.x
   3.0.x
   2.2.x
   4.0

> Memtable memory allocations may deadlock
> 
>
> Key: CASSANDRA-15367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15367
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Assignee: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> * Under heavy contention, we guard modifications to a partition with a mutex 
> for the lifetime of the memtable.
> * Memtables block for the completion of all {{OpOrder.Group}}s started before 
> their flush began.
> * Memtables permit operations from this cohort to fall through to the 
> following Memtable, in order to guarantee a precise commitLogUpperBound.
> * Memtable memory limits may be lifted for operations in the first cohort, 
> since they block flush (and hence block future memory allocation).
> With very unfortunate scheduling:
> * A contended partition may rapidly escalate to a mutex.
> * The system may reach memory limits that prevent allocations for the new 
> Memtable’s cohort (C2).
> * An operation from C2 may hold the mutex when this occurs.
> * Operations from a prior Memtable’s cohort (C1), for a contended partition, 
> may fall through to the next Memtable.
> * The operations from C1 may execute after the above is encountered by those 
> from C2.
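A deliberately simplified analogy of the interleaving described above (editorial 
sketch, not Cassandra code: the lock and semaphore stand in for the partition 
mutex and the exhausted memtable memory limits, and a timeout is used so the 
sketch terminates instead of hanging like the real deadlock would):

{code:java}
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class MemtableDeadlockSketch
{
    static final ReentrantLock partitionMutex = new ReentrantLock(); // the contended-partition mutex
    static final Semaphore memoryPermits = new Semaphore(0);         // memory limits already exhausted

    public static void main(String[] args) throws InterruptedException
    {
        // C2 writer: holds the partition mutex, then waits for memory to become available.
        Thread c2 = new Thread(() -> {
            partitionMutex.lock();
            try
            {
                System.out.println("C2: holding partition mutex, waiting for memory...");
                if (!tryAcquireMemory())
                    System.out.println("C2: would block here forever -> deadlock");
            }
            finally
            {
                partitionMutex.unlock();
            }
        });

        // C1 writer: must take the same mutex before it can complete and let the old
        // memtable flush, which is the only thing that would free memory for C2.
        Thread c1 = new Thread(() -> {
            System.out.println("C1: waiting for the partition mutex held by C2...");
            partitionMutex.lock();
            try
            {
                memoryPermits.release(); // flush completes and memory is freed -- too late
            }
            finally
            {
                partitionMutex.unlock();
            }
        });

        c2.start();
        c1.start();
        c2.join();
        c1.join();
    }

    // Bounded wait so the sketch exits; the real scenario has no timeout.
    static boolean tryAcquireMemory()
    {
        try
        {
            return memoryPermits.tryAcquire(2, TimeUnit.SECONDS);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
{code}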






[jira] [Updated] (CASSANDRA-15368) Failing to flush Memtable without terminating process results in permanent data loss

2019-10-23 Thread Benedict Elliott Smith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict Elliott Smith updated CASSANDRA-15368:
---
Fix Version/s: 3.11.x
   3.0.x
   2.2.x
   4.0

> Failing to flush Memtable without terminating process results in permanent 
> data loss
> 
>
> Key: CASSANDRA-15368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15368
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log, Local/Memtable
>Reporter: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 2.2.x, 3.0.x, 3.11.x
>
>
> A {{Memtable}} does not contain records that cover a precise contiguous range 
> of {{ReplayPosition}}, since there are only weak ordering constraints when 
> rolling over to a new {{Memtable}} - the last operations for the old 
> {{Memtable}} may obtain their {{ReplayPosition}} after the first operations 
> for the new {{Memtable}}.
> Unfortunately, we treat the {{Memtable}} range as contiguous and invalidate 
> the entire range on flush.  Ordinarily we only invalidate records when all 
> prior {{Memtable}}s have also successfully flushed.  However, in the event of 
> a flush failure that does not terminate the process (either because of the 
> disk failure policy, or because it is a software error), the later flush is 
> able to invalidate the region of the commit log that includes records that 
> should have been flushed in the prior {{Memtable}}.
> More problematically, this can also occur on restart without any associated 
> flush failure, as we use the commit log boundaries written to our flushed 
> sstables to filter {{ReplayPosition}}s on recovery, which is meant to 
> replicate our {{Memtable}} flush behaviour above.  However, we do not know 
> that earlier flushes have completed, and they may complete successfully 
> out of order.  So any flush that completes before the process terminates, but 
> began after another flush that _doesn’t_ complete before the process 
> terminates, has the potential to cause permanent data loss.
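A toy illustration of the ordering hazard (editorial sketch, not Cassandra code; 
replay positions are modelled as plain integers and the numbers are made up): 
because the old memtable's last writes can land after the new memtable's first 
writes, invalidating the new memtable's "contiguous" range discards records that 
only exist in the unflushed old memtable.

{code:java}
import java.util.Collections;
import java.util.List;

public class ReplayRangeSketch
{
    public static void main(String[] args)
    {
        // Commit log positions held by each memtable. Because of the weak ordering
        // at rollover, memtable A's last writes landed after memtable B's first writes.
        List<Integer> memtableA = List.of(95, 97, 101, 103);   // flush failed, process kept running
        List<Integer> memtableB = List.of(100, 102, 104, 106); // flushed successfully

        // B invalidates its range as if it were contiguous: [min(B), max(B)].
        int lo = Collections.min(memtableB);
        int hi = Collections.max(memtableB);

        // A's records falling inside that range are treated as durable and never replayed.
        memtableA.stream()
                 .filter(p -> p >= lo && p <= hi)
                 .forEach(p -> System.out.println("record at replay position " + p + " is lost"));
    }
}
{code}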






[jira] [Updated] (CASSANDRA-15369) Fake row deletions and range tombstones, causing digest mismatch and sstable growth

2019-10-23 Thread Benedict Elliott Smith (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict Elliott Smith updated CASSANDRA-15369:
---
Fix Version/s: 3.11.x
   3.0.x
   4.0

> Fake row deletions and range tombstones, causing digest mismatch and sstable 
> growth
> ---
>
> Key: CASSANDRA-15369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15369
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination, Local/Memtable, Local/SSTable
>Reporter: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> As assessed in CASSANDRA-15363, we generate fake row deletions and fake 
> tombstone markers under various circumstances:
>  * If we perform a clustering key query (or select a compact column):
>  ** Serving from a {{Memtable}}, we will generate fake row deletions
>  ** Serving from an sstable, we will generate fake row tombstone markers
>  * If we perform a slice query, we will generate only fake row tombstone 
> markers for any range tombstone that begins or ends outside of the limit of 
> the requested slice
>  * If we perform a multi-slice or IN query, this will occur for each 
> slice/clustering
> Unfortunately, these different behaviours can lead to very different data 
> stored in sstables until a full repair is run.  When we read-repair, we only 
> send these fake deletions or range tombstones.  A fake row deletion, a 
> clustering RT, and a slice RT each produce a different digest.  So for each 
> single point lookup we can produce a digest mismatch twice, and until a full 
> repair is run we can encounter an unlimited number of digest mismatches 
> across different overlapping queries.
> Relatedly, this seems a more problematic variant of our atomicity failures 
> caused by our monotonic reads, since RTs can have an atomic effect across (up 
> to) the entire partition, whereas the propagation may happen on an 
> arbitrarily small portion.  If the RT exists on only one node, this could 
> plausibly lead to a fairly problematic scenario if that node fails before the 
> range can be repaired. 
> At the very least, this behaviour can lead to an almost unlimited amount of 
> extraneous data being stored until the range is repaired and compaction 
> happens to overwrite the sub-range RTs and row deletions.
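To make the digest point concrete, a toy sketch (editorial illustration, not 
Cassandra's digest code; the two strings are made-up encodings of logically 
equivalent deletion metadata): different serializations of the same logical 
deletion hash to different digests, which is all it takes for replicas to report 
a mismatch on every read that touches them.

{code:java}
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestMismatchSketch
{
    static String md5(String s) throws Exception
    {
        byte[] digest = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest).toString(16);
    }

    public static void main(String[] args) throws Exception
    {
        // Two different encodings of "clustering 5 was deleted at time 1000".
        String asRowDeletion    = "row-deletion ck=5 deletedAt=1000";
        String asRangeTombstone = "range-tombstone [5,5] deletedAt=1000";

        // Different bytes -> different digests -> a mismatch even though the
        // logical content agrees.
        System.out.println(md5(asRowDeletion));
        System.out.println(md5(asRangeTombstone));
    }
}
{code}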






[jira] [Updated] (CASSANDRA-15360) add mick's gpg key to project's KEYS file

2019-10-23 Thread Jon Haddad (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-15360:
---
Status: Ready to Commit  (was: Review In Progress)

> add mick's gpg key to project's KEYS file
> -
>
> Key: CASSANDRA-15360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15360
> Project: Cassandra
>  Issue Type: Task
>  Components: Packaging
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Attachments: 15360.patch
>
>
> Currently only four individuals have their keys in the project's KEYS file, 
> and only one of these people is taking on the release manager role.
> The patch adds my gpg public key to the project's KEYS file found at 
> https://dist.apache.org/repos/dist/release/cassandra/KEYS
> My gpg public key here has the fingerprint 
>  ABCD 3108 336F 7CC6 567E  769F FDD3 B769 B21C 125C
> References:
>  - https://www.apache.org/dev/release-signing#keys-policy
>  - http://www.apache.org/legal/release-policy.html
>  - [dev ML thread "Improving our frequency of (patch) releases, and letting 
> committers make 
> releases"|https://lists.apache.org/thread.html/660ed8c73e7b79afa610f3e45f37914ef43a4358a85a99c8b4b0288a@]






[jira] [Commented] (CASSANDRA-15366) Backup and Restore

2019-10-23 Thread maxwellguo (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957754#comment-16957754
 ] 

maxwellguo commented on CASSANDRA-15366:


I will try to reproduce this.

> Backup and Restore
> --
>
> Key: CASSANDRA-15366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15366
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kamal
>Priority: Normal
>
> Hi Team,
> We are testing the backup and restore of Cassandra in a cluster and found 
> that when restoring the backup from the second node to a new node (or the 
> previous node), the complete data is not restored.
> Please enlighten us on this.
>  
> Regards,
> Kamal






[jira] [Commented] (CASSANDRA-15366) Backup and Restore

2019-10-23 Thread Kamal (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957723#comment-16957723
 ] 

Kamal commented on CASSANDRA-15366:
---

Yes, the 2 nodes are in the same DC, name = datacenter1. And yes, ID1 and 2 
are missing.

Yes, all operations are done through cqlsh only.

> Backup and Restore
> --
>
> Key: CASSANDRA-15366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15366
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kamal
>Priority: Normal
>
> Hi Team,
> We are testing the backup and restore of Cassandra in a cluster and found 
> that when restoring the backup from the second node to a new node (or the 
> previous node), the complete data is not restored.
> Please enlighten us on this.
>  
> Regards,
> Kamal


