[jira] [Updated] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-08 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13127:
-
Component/s: Materialized Views
 Local Write-Read Paths

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I
> would say this is because ViewUpdateGenerator#computeLivenessInfoForEntry
> compares the TTLs instead of the expiration times, but I'm not sure execution
> gets that far in the code when updating a column that's not in the view.
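If it helps to see the arithmetic, here is a minimal sketch (not Cassandra's actual code; the class and method names are illustrative) of why comparing raw TTLs instead of expiration times picks the wrong liveness in the scenario above:

```java
public class ExpirationVsTtl {
    // expiration = local write time + TTL, both in seconds
    static int expirationTime(int writeTimeSec, int ttlSec) {
        return writeTimeSec + ttlSec;
    }

    public static void main(String[] args) {
        // INSERT ... USING TTL 10 at t=0 -> primary-key liveness expires at t=10
        int pkExpiry = expirationTime(0, 10);
        // UPDATE ... USING TTL 8 at t=6 -> column v expires at t=14
        int colExpiry = expirationTime(6, 8);

        // Comparing raw TTLs (10 >= 8) keeps the TTL-10 liveness: view row dies at t=10
        int viewExpiryComparingTtls = (10 >= 8) ? pkExpiry : colExpiry;
        // Comparing expiration times keeps the latest, matching the base row: t=14
        int viewExpiryComparingExpirations = Math.max(pkExpiry, colExpiry);

        System.out.println(viewExpiryComparingTtls);        // 10
        System.out.println(viewExpiryComparingExpirations); // 14
    }
}
```

The base row stays queryable until t=14 (column v is live), so a view row expiring at t=10 is four seconds too early.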



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13509) Change system.paxos default compaction strategy to TWCS

2017-05-08 Thread Jay Zhuang (JIRA)
Jay Zhuang created CASSANDRA-13509:
--

 Summary: Change system.paxos default compaction strategy to TWCS
 Key: CASSANDRA-13509
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13509
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jay Zhuang
Assignee: Jay Zhuang
Priority: Minor









[jira] [Updated] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable

2017-05-08 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13508:
---
Summary: Make system.paxos table compaction strategy configurable  (was: 
Make system.paxos table compaction strategy configurable and make TWCS as 
default)

> Make system.paxos table compaction strategy configurable
> 
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test11.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for
> performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage,
> the system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In
> our test, it significantly reduced the number of compactions without
> impacting latency too much:
> !test11.png!
> The time window for TWCS was set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow line is LCS, the purple line is TWCS. Average p99 increased by
> about 10%.
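For context on why TWCS suits TTL'ed data: it buckets sstables by the time window their data was written in, and once every cell in a window has expired, those sstables can be dropped whole rather than recompacted. A minimal sketch of the bucketing arithmetic (illustrative, not Cassandra's actual implementation):

```java
public class TwcsWindow {
    // Start of the time window containing the given timestamp (both in millis)
    static long windowStart(long timestampMillis, long windowSizeMillis) {
        return timestampMillis - (timestampMillis % windowSizeMillis);
    }

    public static void main(String[] args) {
        long twoMinutes = 2 * 60 * 1000L; // the 2-minute window used in the test
        // Writes 30 seconds apart land in the same 2-minute bucket...
        System.out.println(windowStart(1_000_000, twoMinutes) == windowStart(1_030_000, twoMinutes)); // true
        // ...while writes more than 2 minutes apart land in different buckets
        System.out.println(windowStart(1_000_000, twoMinutes) == windowStart(1_130_000, twoMinutes)); // false
    }
}
```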






[jira] [Updated] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable and make TWCS as default

2017-05-08 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13508:
---
Description: 
The default compaction strategy for the {{system.paxos}} table is LCS, for performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage, the system is busy with {{system.paxos}} compaction.

As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In our test, it significantly reduced the number of compactions without impacting latency too much:
!test11.png!
The time window for TWCS was set to 2 minutes for the test.

Here is the p99 latency impact:
!test2.png!
The yellow line is LCS, the purple line is TWCS. Average p99 increased by about 10%.

  was:
The default compaction strategy for the {{system.paxos}} table is LCS, for performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage, the system is busy with {{system.paxos}} compaction.

As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In our test, it significantly reduced the number of compactions without impacting latency too much:
!test1.png!
The time window for TWCS was set to 2 minutes for the test.

Here is the p99 latency impact:
!test2.png!
The yellow line is LCS, the purple line is TWCS. Average p99 increased by about 10%.


> Make system.paxos table compaction strategy configurable and make TWCS as 
> default
> -
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test11.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for
> performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage,
> the system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In
> our test, it significantly reduced the number of compactions without
> impacting latency too much:
> !test11.png!
> The time window for TWCS was set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow line is LCS, the purple line is TWCS. Average p99 increased by
> about 10%.






[jira] [Updated] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable and make TWCS as default

2017-05-08 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13508:
---
Attachment: test11.png

> Make system.paxos table compaction strategy configurable and make TWCS as 
> default
> -
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test11.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for
> performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage,
> the system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In
> our test, it significantly reduced the number of compactions without
> impacting latency too much:
> !test1.png!
> The time window for TWCS was set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow line is LCS, the purple line is TWCS. Average p99 increased by
> about 10%.






[jira] [Updated] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable and make TWCS as default

2017-05-08 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13508:
---
Attachment: (was: test1.png)

> Make system.paxos table compaction strategy configurable and make TWCS as 
> default
> -
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for
> performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage,
> the system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In
> our test, it significantly reduced the number of compactions without
> impacting latency too much:
> !test1.png!
> The time window for TWCS was set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow line is LCS, the purple line is TWCS. Average p99 increased by
> about 10%.






[jira] [Updated] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable and make TWCS as default

2017-05-08 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13508:
---
Description: 
The default compaction strategy for the {{system.paxos}} table is LCS, for performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage, the system is busy with {{system.paxos}} compaction.

As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In our test, it significantly reduced the number of compactions without impacting latency too much:
!test1.png!
The time window for TWCS was set to 2 minutes for the test.

Here is the p99 latency impact:
!test2.png!
The yellow line is LCS, the purple line is TWCS. Average p99 increased by about 10%.

  was:
The default compaction strategy for the {{system.paxos}} table is LCS, for performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage, the system is busy with {{system.paxos}} compaction.

As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In our test, it significantly reduced the number of compactions without impacting latency too much:
!test1.png | height=450,width=650!
The time window for TWCS was set to 2 minutes for the test.

Here is the p99 latency impact:
!test2.png!
The yellow line is LCS, the purple line is TWCS. Average p99 increased by about 10%.


> Make system.paxos table compaction strategy configurable and make TWCS as 
> default
> -
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test1.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for
> performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage,
> the system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In
> our test, it significantly reduced the number of compactions without
> impacting latency too much:
> !test1.png!
> The time window for TWCS was set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow line is LCS, the purple line is TWCS. Average p99 increased by
> about 10%.






[jira] [Updated] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable and make TWCS as default

2017-05-08 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13508:
---
Description: 
The default compaction strategy for the {{system.paxos}} table is LCS, for performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage, the system is busy with {{system.paxos}} compaction.

As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In our test, it significantly reduced the number of compactions without impacting latency too much:
!test1.png | height=450,width=650!
The time window for TWCS was set to 2 minutes for the test.

Here is the p99 latency impact:
!test2.png!
The yellow line is LCS, the purple line is TWCS. Average p99 increased by about 10%.

  was:
The default compaction strategy for the {{system.paxos}} table is LCS, for performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage, the system is busy with {{system.paxos}} compaction.

As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In our test, it significantly reduced the number of compactions without impacting latency too much:
!test1.png!
The time window for TWCS was set to 2 minutes for the test.

Here is the p99 latency impact:
!test2.png!
The yellow line is LCS, the purple line is TWCS. Average p99 increased by about 10%.


> Make system.paxos table compaction strategy configurable and make TWCS as 
> default
> -
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test1.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for
> performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage,
> the system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In
> our test, it significantly reduced the number of compactions without
> impacting latency too much:
> !test1.png | height=450,width=650!
> The time window for TWCS was set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow line is LCS, the purple line is TWCS. Average p99 increased by
> about 10%.






[jira] [Created] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable and make TWCS as default

2017-05-08 Thread Jay Zhuang (JIRA)
Jay Zhuang created CASSANDRA-13508:
--

 Summary: Make system.paxos table compaction strategy configurable 
and make TWCS as default
 Key: CASSANDRA-13508
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jay Zhuang
Assignee: Jay Zhuang
 Fix For: 4.0, 4.x
 Attachments: test1.png, test2.png

The default compaction strategy for the {{system.paxos}} table is LCS, for performance reasons: CASSANDRA-7753. But on clusters with heavy CAS usage, the system is busy with {{system.paxos}} compaction.

As the data in the {{paxos}} table is TTL'ed, TWCS might be a better fit. In our test, it significantly reduced the number of compactions without impacting latency too much:
!test1.png!
The time window for TWCS was set to 2 minutes for the test.

Here is the p99 latency impact:
!test2.png!
The yellow line is LCS, the purple line is TWCS. Average p99 increased by about 10%.






[jira] [Comment Edited] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2017-05-08 Thread Anthony Grasso (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000186#comment-16000186
 ] 

Anthony Grasso edited comment on CASSANDRA-10968 at 5/9/17 12:52 AM:
-

Nice find [~Ge]!

I have reviewed the code changes and performed the following for all available versions of the patch (2.1, 2.2, 3.11).
- Tested as per the example given by [~Ge].
- Ran the newly created unit test in isolation using the following command.
{noformat}
ant testsome -Dtest.name=org.apache.cassandra.db.ColumnFamilyStoreTest 
-Dtest.methods=testSnapshotWithoutFlushWithSecondaryIndexes
{noformat}
- Ran unit tests for {{ColumnFamilyStoreTest}} using the following command.
{noformat}
ant test -Dtest.name=ColumnFamilyStoreTest
{noformat}

I have left a comment on the commits ([2.1.12 | 
https://github.com/Ge/cassandra/commit/ef7ffcd757a4523387a8a078ea75f9c154715b01],
 [2.2.4 | 
https://github.com/Ge/cassandra/commit/83e02225859d4e4b49e0c4d8cc8f0917952981aa],
 [3.11 | 
https://github.com/Ge/cassandra/commit/735ba7b59e06ae06f8287daa0fd0bb04de22cbc0])
 in relation to the new unit test. We will need to make a fix to it before we commit this patch.


was (Author: anthony grasso):
Nice find [~Ge]!

I have reviewed the code changes and performed the following for all available versions of the patch (2.1, 2.2, 3.11).
- Tested as per the example given by [~Ge].
- Ran the newly created unit test in isolation using the following command.
{noformat}
ant testsome -Dtest.name=org.apache.cassandra.db.ColumnFamilyStoreTest 
-Dtest.methods=testSnapshotWithoutFlushWithSecondaryIndexes
{noformat}
- Ran unit tests for {{ColumnFamilyStoreTest}} using the following command.
{noformat}
ant test -Dtest.name=ColumnFamilyStoreTest
{noformat}

I have left a comment on the PR in relation to the new unit test. We will need to make a fix to it before we commit this patch.

> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fred A
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
> Fix For: 2.1.12, 2.2.4, 3.11.0
>
>
> Noticed indeterminate behaviour when taking snapshots on column families that
> have secondary indexes set up. The manifest.json created when taking a
> snapshot sometimes contains no file names at all and sometimes only some file
> names.
> I don't know if this post is related but that was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html






[jira] [Commented] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-05-08 Thread Ben Slater (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001830#comment-16001830
 ] 

Ben Slater commented on CASSANDRA-8780:
---

Excellent! Thanks [~tjake] for the assistance.

> cassandra-stress should support multiple table operations
> -
>
> Key: CASSANDRA-8780
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8780
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benedict
>Assignee: Ben Slater
>  Labels: stress
> Fix For: 4.0
>
> Attachments: 8780-trunk-v3.patch
>
>







[jira] [Created] (CASSANDRA-13507) dtest failure in paging_test.TestPagingWithDeletions.test_ttl_deletions

2017-05-08 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-13507:
--

 Summary: dtest failure in 
paging_test.TestPagingWithDeletions.test_ttl_deletions 
 Key: CASSANDRA-13507
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13507
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Ariel Weisberg
 Attachments: test_ttl_deletions_fail.txt

{noformat}
Failed 7 times in the last 30 runs. Flakiness: 34%, Stability: 76%
Error Message

4 != 8
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /tmp/dtest-z1xodw
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.5, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.5', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 4.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 4.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
{noformat}
Most output omitted. It's attached.






[jira] [Commented] (CASSANDRA-12014) IndexSummary > 2G causes an assertion error

2017-05-08 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001726#comment-16001726
 ] 

Jeff Jirsa commented on CASSANDRA-12014:


[~Stefania] - this isn't on my radar at all (I knew about it, but it's not 
anywhere near the top of my list of things I need to do/review). I agree with 
the simplified patch, but if you have someone else who will review, I encourage 
you to find a new victim/volunteer. Otherwise, I suspect it'll be in a month or 
more (which doesn't feel long, when I see how long this ticket's been open).


> IndexSummary > 2G causes an assertion error
> ---
>
> Key: CASSANDRA-12014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0.x, 3.11.x
>
>
> {noformat}
> ERROR [CompactionExecutor:1546280] 2016-06-01 13:21:00,444  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:1546280,1,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.maybeAddEntry(IndexSummaryBuilder.java:171)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.append(SSTableWriter.java:634)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.afterAppend(SSTableWriter.java:179)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:205) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_51]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> {noformat}
> I believe this can be fixed by raising the min_index_interval, but we should 
> have a better method of coping with this than throwing the AE.
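As a back-of-the-envelope illustration of why raising {{min_index_interval}} helps (assumed arithmetic, not taken from the Cassandra code; the entry-size figure is a made-up example): the summary holds roughly one entry per {{min_index_interval}} partitions, so raising the interval shrinks it proportionally.

```java
public class IndexSummarySize {
    // Rough size estimate: one sampled entry per min_index_interval partitions,
    // each entry costing roughly the key bytes plus a long offset.
    static long estimatedBytes(long partitions, int minIndexInterval, int avgEntryBytes) {
        return (partitions / minIndexInterval) * avgEntryBytes;
    }

    public static void main(String[] args) {
        long partitions = 4_000_000_000L; // a very large sstable
        // With the default interval of 128, the summary can exceed 2 GB...
        System.out.println(estimatedBytes(partitions, 128, 80) > Integer.MAX_VALUE); // true
        // ...while raising min_index_interval to 512 brings it back under 2 GB
        System.out.println(estimatedBytes(partitions, 512, 80) > Integer.MAX_VALUE); // false
    }
}
```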






[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-05-08 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001658#comment-16001658
 ] 

Jeff Jirsa commented on CASSANDRA-13216:


+1 to killing magic numbers.


> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log, 
> TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}






[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-05-08 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001619#comment-16001619
 ] 

Michael Kjellman commented on CASSANDRA-13216:
--

With this change, the "mocked" Clock in MessagingServiceTest will always return 
0 for getTick()
{code}
private static long time = System.currentTimeMillis();
private static Clock clock = new Clock()
{
public long getTick()
{
return 0;
}

public long getTime()
{
return time;
}
};
{code}

But if we look at the implementation of time() in metrics-core Timer, it does 
the following:
{code}
/**
 * Times and records the duration of event.
 *
 * @param event a {@link Callable} whose {@link Callable#call()} method 
implements a process
 *  whose duration should be timed
 * @param <T>   the type of the value returned by {@code event}
 * @return the value returned by {@code event}
 * @throws Exception if {@code event} throws an {@link Exception}
 */
public <T> T time(Callable<T> event) throws Exception {
final long startTime = clock.getTick();
try {
return event.call();
} finally {
update(clock.getTick() - startTime);
}
}
{code}

So, from my understanding this means we will always just do 0-0 for the 
update() call on the Timer... right?

However, I don't think any of this matters in retrospect. I took a big step back and looked over the actual unit test and what it is testing with [~jjirsa] and [~jasobrown], and all three of us think this magic number is a bit questionable.

If we look at the original patch that added these magic numbers in the first place for CASSANDRA-10580 (https://github.com/pcmanus/cassandra/commit/c9ef25fd81501005b6484baf064081efc557f3f4), there is nothing in the ticket, test, or commit that justifies testing for these magic numbers, and it looks like the result is just dependent on how fast your system can iterate through the logic 5000 times.

So: I'd like to propose that we throw away the second assert in this test. The first and last are good (counting the number of messages we expect to get), but doing a literal string comparison on the entire log message is not very helpful. Instead, we should run a regex over the log message, parse out the times, and just check that they are > 0. Thoughts?
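A sketch of that regex-based check (the log-message format and the class/method names here are illustrative, not the actual test code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DroppedLatencyCheck {
    // Matches e.g. "... dropped latency: 2728 ms and Mean cross-node dropped latency: 2730 ms"
    private static final Pattern LATENCY =
        Pattern.compile("dropped latency: (\\d+) ms and Mean cross-node dropped latency: (\\d+) ms");

    // Returns true if both latencies parsed from the log line are positive,
    // instead of asserting the exact millisecond values.
    static boolean latenciesArePositive(String logMessage) {
        Matcher m = LATENCY.matcher(logMessage);
        if (!m.find())
            return false;
        return Long.parseLong(m.group(1)) > 0 && Long.parseLong(m.group(2)) > 0;
    }

    public static void main(String[] args) {
        String sample = "Mean internal dropped latency: 2728 ms and Mean cross-node dropped latency: 2730 ms";
        System.out.println(latenciesArePositive(sample)); // true
    }
}
```

This keeps the assertion meaningful (the latencies were measured and are non-zero) without tying it to how fast the test host can loop.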

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log, 
> TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}






[jira] [Created] (CASSANDRA-13506) dtest failure in bootstrap_test.TestBootstrap.simultaneous_bootstrap_test

2017-05-08 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-13506:
--

 Summary: dtest failure in 
bootstrap_test.TestBootstrap.simultaneous_bootstrap_test
 Key: CASSANDRA-13506
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13506
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Ariel Weisberg


{noformat}
Failed 11 times in the last 30 runs. Flakiness: 62%, Stability: 63%
Error Message

errors={: ReadTimeout('Error from server: 
code=1200 [Coordinator node timed out waiting for replica nodes\' responses] 
message="Operation timed out - received only 0 responses." 
info={\'received_responses\': 0, \'required_responses\': 1, \'consistency\': 
\'ONE\'}',)}, last_host=127.0.0.2
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /tmp/dtest-VsuThg
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
cassandra.cluster: INFO: New Cassandra host  
discovered
cassandra.protocol: WARNING: Server warning: Aggregation query used without 
partition key
dtest: DEBUG: Retrying read after timeout. Attempt #0
- >> end captured logging << -
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/cassandra-dtest/tools/decorators.py",
 line 48, in wrapped
f(obj)
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/cassandra-dtest/bootstrap_test.py",
 line 659, in simultaneous_bootstrap_test
assert_one(session, "SELECT count(*) from keyspace1.standard1", [50], 
cl=ConsistencyLevel.ONE)
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/cassandra-dtest/tools/assertions.py",
 line 128, in assert_one
res = session.execute(simple_query)
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/venv/src/cassandra-driver/cassandra/cluster.py",
 line 2018, in execute
return self.execute_async(query, parameters, trace, custom_payload, 
timeout, execution_profile, paging_state).result()
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/venv/src/cassandra-driver/cassandra/cluster.py",
 line 3822, in result
raise self._final_exception
'errors={: ReadTimeout(\'Error from server: 
code=1200 [Coordinator node timed out waiting for replica nodes\\\' responses] 
message="Operation timed out - received only 0 responses." 
info={\\\'received_responses\\\': 0, \\\'required_responses\\\': 1, 
\\\'consistency\\\': \\\'ONE\\\'}\',)}, 
last_host=127.0.0.2\n >> begin captured logging << 
\ndtest: DEBUG: cluster ccm directory: 
/tmp/dtest-VsuThg\ndtest: DEBUG: Done setting configuration options:\n{   
\'initial_token\': None,\n\'num_tokens\': \'32\',\n
\'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n
\'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': 1,\n 
   \'truncate_request_timeout_in_ms\': 1,\n
\'write_request_timeout_in_ms\': 1}\ncassandra.cluster: INFO: New Cassandra 
host  discovered\ncassandra.protocol: WARNING: 
Server warning: Aggregation query used without partition key\ndtest: DEBUG: 
Retrying read after timeout. Attempt #0\n- >> end captured 
logging << -'
{noformat}






[jira] [Updated] (CASSANDRA-13182) test failure in sstableutil_test.SSTableUtilTest.compaction_test

2017-05-08 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13182:
---
Labels: dtest test-failure test-failure-fresh  (was: dtest test-failure)

> test failure in sstableutil_test.SSTableUtilTest.compaction_test
> 
>
> Key: CASSANDRA-13182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>  Labels: dtest, test-failure, test-failure-fresh
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/506/testReport/sstableutil_test/SSTableUtilTest/compaction_test
> {noformat}
> Error Message
> Lists differ: ['/tmp/dtest-Rk_3Cs/test/node1... != 
> ['/tmp/dtest-Rk_3Cs/test/node1...
> First differing element 8:
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db'
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db'
> First list contains 7 additional elements.
> First extra element 24:
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db'
>   
> ['/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-TOC.txt',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Digest.crc32',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Filter.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Index.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Statistics.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Summary.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Summary.db',
>  

[jira] [Created] (CASSANDRA-13505) dtest failure in user_functions_test.TestUserFunctions.test_migration

2017-05-08 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-13505:
--

 Summary: dtest failure in 
user_functions_test.TestUserFunctions.test_migration
 Key: CASSANDRA-13505
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13505
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Ariel Weisberg


{noformat}
Failed 1 times in the last 10 runs. Flakiness: 11%, Stability: 90%
Error Message


 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /tmp/dtest-c0Kk_e
dtest: DEBUG: Done setting configuration options:
{   'enable_scripted_user_defined_functions': 'true',
'enable_user_defined_functions': 'true',
'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5}
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, scheduling 
retry in 600.0 seconds: [Errno 111] Tried connecting to [('127.0.0.3', 9042)]. 
Last error: Connection refused
cassandra.policies: INFO: Using datacenter 'datacenter1' for 
DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify a 
local_dc to the constructor, or limit contact points to local cluster nodes
cassandra.cluster: INFO: New Cassandra host  
discovered
cassandra.cluster: INFO: New Cassandra host  
discovered
- >> end captured logging << -
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/cassandra-dtest/user_functions_test.py",
 line 47, in test_migration
create_ks(schema_wait_session, 'ks', 1)
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/cassandra-dtest/dtest.py",
 line 725, in create_ks
session.execute(query % (name, "'class':'SimpleStrategy', 
'replication_factor':%d" % rf))
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/venv/src/cassandra-driver/cassandra/cluster.py",
 line 2018, in execute
return self.execute_async(query, parameters, trace, custom_payload, 
timeout, execution_profile, paging_state).result()
  File 
"/home/jenkins/jenkins-slave/workspace/Cassandra-trunk-dtest/venv/src/cassandra-driver/cassandra/cluster.py",
 line 3822, in result
raise self._final_exception
'\n
 >> begin captured logging << \ndtest: DEBUG: cluster ccm 
directory: /tmp/dtest-c0Kk_e\ndtest: DEBUG: Done setting configuration 
options:\n{   \'enable_scripted_user_defined_functions\': \'true\',\n
\'enable_user_defined_functions\': \'true\',\n\'initial_token\': None,\n
\'num_tokens\': \'32\',\n\'phi_convict_threshold\': 5}\ncassandra.pool: 
WARNING: Error attempting to reconnect to 127.0.0.3, scheduling retry in 600.0 
seconds: 

[jira] [Updated] (CASSANDRA-13504) Prevent duplicate notification messages

2017-05-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-13504:
-
   Resolution: Invalid
 Reviewer:   (was: Aleksey Yeschenko)
Fix Version/s: (was: 4.0)
   Status: Resolved  (was: Patch Available)

Ah, bummer. One code path is for notifications, the other for the result 
message.
TL;DR: all is good as it is.

> Prevent duplicate notification messages
> ---
>
> Key: CASSANDRA-13504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13504
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>
> Since CASSANDRA-9425 duplicate schema change notifications are being sent. 
> One time via {{SchemaAlteringStatement#announceMigration}} and one time via 
> {{Schema#merge}}.






[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar chooses last value. This should not be silent or should not be allowed.

2017-05-08 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13369:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed as 
[https://github.com/apache/cassandra/commit/1a83efe2047d0138725d5e102cc40774f3b14641|1a83efe2047d0138725d5e102cc40774f3b14641].
 Thanks.

> If there are multiple values for a key, CQL grammar chooses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 3.11.0, 4.0
>
> Attachments: 3.X.diff, test_stdout.txt, trunk.diff
>
>
> If multiple values are specified for a key through CQL, the grammar parses 
> the map and the last value for the key silently wins. This behavior is 
> dangerous.
> e.g.
> {code}
> CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> {code}
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may 
> even result in data loss. The behavior should either not be silent or not be 
> allowed at all.
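The fix that eventually landed (the commit is quoted later in this digest) detects the duplicate at parse time by checking the return value of Map.put, which is non-null when the key was already present. A standalone sketch of that idea, with hypothetical class and method names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DuplicateKeyCheck {
    // Map.put returns the previous value bound to the key, so a non-null
    // return means the key was already defined earlier in the literal.
    static String firstDuplicateKey(String[][] pairs) {
        Map<String, String> res = new LinkedHashMap<>();
        for (String[] pair : pairs)
            if (res.put(pair[0], pair[1]) != null)
                return pair[0];
        return null; // no key appeared twice
    }

    public static void main(String[] args) {
        // Mirrors the keyspace example above: 'dc1' is given twice.
        String[][] replication = {
            {"class", "NetworkTopologyStrategy"}, {"dc1", "2"}, {"dc1", "5"}
        };
        System.out.println(firstDuplicateKey(replication)); // prints "dc1"
    }
}
```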






[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar chooses last value. This should not be silent or should not be allowed.

2017-05-08 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13369:
---
Fix Version/s: 4.0
   3.11.0

> If there are multiple values for a key, CQL grammar chooses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Fix For: 3.11.0, 4.0
>
> Attachments: 3.X.diff, test_stdout.txt, trunk.diff
>
>
> If multiple values are specified for a key through CQL, the grammar parses 
> the map and the last value for the key silently wins. This behavior is 
> dangerous.
> e.g.
> {code}
> CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> {code}
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may 
> even result in data loss. The behavior should either not be silent or not be 
> allowed at all.






[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-05-08 Thread aweisberg
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2304363e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2304363e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2304363e

Branch: refs/heads/trunk
Commit: 2304363e4eae665a1ba7f238c81a9237906394da
Parents: 07c11ca 1a83efe
Author: Ariel Weisberg 
Authored: Mon May 8 16:27:57 2017 -0400
Committer: Ariel Weisberg 
Committed: Mon May 8 16:40:19 2017 -0400

--
 CHANGES.txt |  1 +
 src/antlr/Parser.g  |  5 +++-
 .../apache/cassandra/cql3/CqlParserTest.java| 30 
 .../cql3/validation/operations/AlterTest.java   | 19 +
 .../cql3/validation/operations/CreateTest.java  | 14 +
 5 files changed, 68 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2304363e/CHANGES.txt
--
diff --cc CHANGES.txt
index 6b5d114,3166780..93096fe
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,68 -1,5 +1,69 @@@
 +4.0
 + * Add multiple table operation support to cassandra-stress (CASSANDRA-8780)
 + * Fix incorrect cqlsh results when selecting same columns multiple times 
(CASSANDRA-13262)
 + * Fix WriteResponseHandlerTest is sensitive to test execution order 
(CASSANDRA-13421)
 + * Improve incremental repair logging (CASSANDRA-13468)
 + * Start compaction when incremental repair finishes (CASSANDRA-13454)
 + * Add repair streaming preview (CASSANDRA-13257)
 + * Cleanup isIncremental/repairedAt usage (CASSANDRA-13430)
 + * Change protocol to allow sending key space independent of query string 
(CASSANDRA-10145)
 + * Make gc_log and gc_warn settable at runtime (CASSANDRA-12661)
 + * Take number of files in L0 in account when estimating remaining compaction 
tasks (CASSANDRA-13354)
 + * Skip building views during base table streams on range movements 
(CASSANDRA-13065)
 + * Improve error messages for +/- operations on maps and tuples 
(CASSANDRA-13197)
 + * Remove deprecated repair JMX APIs (CASSANDRA-11530)
 + * Fix version check to enable streaming keep-alive (CASSANDRA-12929)
 + * Make it possible to monitor an ideal consistency level separate from 
actual consistency level (CASSANDRA-13289)
 + * Outbound TCP connections ignore internode authenticator (CASSANDRA-13324)
 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360)
 + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359)
 + * Incremental repair not streaming correct sstables (CASSANDRA-13328)
 + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300)
 + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID 
functions (CASSANDRA-13132)
 + * Remove config option index_interval (CASSANDRA-10671)
 + * Reduce lock contention for collection types and serializers 
(CASSANDRA-13271)
 + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283)
 + * Avoid synchronized on prepareForRepair in ActiveRepairService 
(CASSANDRA-9292)
 + * Adds the ability to use uncompressed chunks in compressed files 
(CASSANDRA-10520)
 + * Don't flush sstables when streaming for incremental repair 
(CASSANDRA-13226)
 + * Remove unused method (CASSANDRA-13227)
 + * Fix minor bugs related to #9143 (CASSANDRA-13217)
 + * Output warning if user increases RF (CASSANDRA-13079)
 + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081)
 + * Add support for + and - operations on dates (CASSANDRA-11936)
 + * Fix consistency of incrementally repaired data (CASSANDRA-9143)
 + * Increase commitlog version (CASSANDRA-13161)
 + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425)
 + * Refactor ColumnCondition (CASSANDRA-12981)
 + * Parallelize streaming of different keyspaces (CASSANDRA-4663)
 + * Improved compactions metrics (CASSANDRA-13015)
 + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031)
 + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855)
 + * Thrift removal (CASSANDRA-5)
 + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716)
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
 + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter 
(CASSANDRA-12422)
 + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080)
 + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084)
 + * Require forceful decommission if number of nodes is less than 

[1/3] cassandra git commit: Reject multiple values for a key in CQL grammar.

2017-05-08 Thread aweisberg
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 d56c64a88 -> 1a83efe20
  refs/heads/trunk 07c11ca45 -> 2304363e4


Reject multiple values for a key in CQL grammar.

Patch by Nachiket Patil; Reviewed by Ariel Weisberg for CASSANDRA-13369


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a83efe2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a83efe2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a83efe2

Branch: refs/heads/cassandra-3.11
Commit: 1a83efe2047d0138725d5e102cc40774f3b14641
Parents: d56c64a
Author: Nachiket Patil 
Authored: Mon May 8 16:23:13 2017 -0400
Committer: Ariel Weisberg 
Committed: Mon May 8 16:25:41 2017 -0400

--
 CHANGES.txt |  1 +
 src/antlr/Parser.g  |  5 +++-
 .../apache/cassandra/cql3/CqlParserTest.java| 30 
 .../cql3/validation/operations/AlterTest.java   | 19 +
 .../cql3/validation/operations/CreateTest.java  | 14 +
 5 files changed, 68 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a83efe2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6603911..3166780 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.0
+ * Reject multiple values for a key in CQL grammar. (CASSANDRA-13369)
  * UDA fails without input rows (CASSANDRA-13399)
  * Fix compaction-stress by using daemonInitialization (CASSANDRA-13188)
  * V5 protocol flags decoding broken (CASSANDRA-13443)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a83efe2/src/antlr/Parser.g
--
diff --git a/src/antlr/Parser.g b/src/antlr/Parser.g
index 3d06dc3..e5b7584 100644
--- a/src/antlr/Parser.g
+++ b/src/antlr/Parser.g
@@ -127,7 +127,10 @@ options {
                 break;
         }

-        res.put(((Constants.Literal)entry.left).getRawText(), ((Constants.Literal)entry.right).getRawText());
+        if (res.put(((Constants.Literal)entry.left).getRawText(), ((Constants.Literal)entry.right).getRawText()) != null)
+        {
+            addRecognitionError(String.format("Multiple definition for property " + ((Constants.Literal)entry.left).getRawText()));
+        }
     }

     return res;
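One nit in the hunk above: the error message is built by applying String.format to a concatenated string, so the property name itself becomes part of the format string, and any '%' it contained would be parsed as a format specifier. A hedged sketch of the two safe alternatives (plain concatenation, or a '%s' placeholder with the name passed as an argument):

```java
public class FormatNote {
    // Plain concatenation: no format-string parsing happens at all.
    static String viaConcat(String propertyName) {
        return "Multiple definition for property " + propertyName;
    }

    // String.format with '%s': the name is a format argument, so a '%'
    // inside it is treated as literal text, not as a specifier.
    static String viaFormat(String propertyName) {
        return String.format("Multiple definition for property %s", propertyName);
    }

    public static void main(String[] args) {
        System.out.println(viaConcat("dc1"));
        System.out.println(viaFormat("50%_quorum")); // '%' does not break the call
    }
}
```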

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a83efe2/test/unit/org/apache/cassandra/cql3/CqlParserTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CqlParserTest.java 
b/test/unit/org/apache/cassandra/cql3/CqlParserTest.java
index 4b76dbc..4871c09 100644
--- a/test/unit/org/apache/cassandra/cql3/CqlParserTest.java
+++ b/test/unit/org/apache/cassandra/cql3/CqlParserTest.java
@@ -25,6 +25,7 @@ import org.antlr.runtime.CharStream;
 import org.antlr.runtime.CommonTokenStream;
 import org.antlr.runtime.RecognitionException;
 import org.antlr.runtime.TokenStream;
+import org.apache.cassandra.cql3.statements.PropertyDefinitions;
 
 import static org.junit.Assert.*;
 
@@ -75,6 +76,35 @@ public class CqlParserTest
         assertEquals(0, secondCounter.count);
     }

+    @Test
+    public void testDuplicateProperties() throws Exception
+    {
+        parseAndCountErrors("properties = { 'foo' : 'value1', 'bar': 'value2' };", 0, (p) -> p.properties(new PropertyDefinitions()));
+        parseAndCountErrors("properties = { 'foo' : 'value1', 'foo': 'value2' };", 1, (p) -> p.properties(new PropertyDefinitions()));
+        parseAndCountErrors("foo = 'value1' AND bar = 'value2' };", 0, (p) -> p.properties(new PropertyDefinitions()));
+        parseAndCountErrors("foo = 'value1' AND foo = 'value2' };", 1, (p) -> p.properties(new PropertyDefinitions()));
+    }
+
+    private void parseAndCountErrors(String cql, int expectedErrors, ParserOperation operation) throws RecognitionException
+    {
+        SyntaxErrorCounter counter = new SyntaxErrorCounter();
+        CharStream stream = new ANTLRStringStream(cql);
+        CqlLexer lexer = new CqlLexer(stream);
+        TokenStream tokenStream = new CommonTokenStream(lexer);
+        CqlParser parser = new CqlParser(tokenStream);
+        parser.addErrorListener(counter);
+
+        operation.perform(parser);
+
+        assertEquals(expectedErrors, counter.count);
+    }
+
+    @FunctionalInterface
+    private interface ParserOperation
+    {
+        void perform(CqlParser cqlParser) throws RecognitionException;
+    }
+
     private static final class SyntaxErrorCounter implements ErrorListener
     {
         private int count;


[2/3] cassandra git commit: Reject multiple values for a key in CQL grammar.

2017-05-08 Thread aweisberg
Reject multiple values for a key in CQL grammar.

Patch by Nachiket Patil; Reviewed by Ariel Weisberg for CASSANDRA-13369


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a83efe2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a83efe2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a83efe2

Branch: refs/heads/trunk
Commit: 1a83efe2047d0138725d5e102cc40774f3b14641
Parents: d56c64a
Author: Nachiket Patil 
Authored: Mon May 8 16:23:13 2017 -0400
Committer: Ariel Weisberg 
Committed: Mon May 8 16:25:41 2017 -0400

--
 CHANGES.txt |  1 +
 src/antlr/Parser.g  |  5 +++-
 .../apache/cassandra/cql3/CqlParserTest.java| 30 
 .../cql3/validation/operations/AlterTest.java   | 19 +
 .../cql3/validation/operations/CreateTest.java  | 14 +
 5 files changed, 68 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a83efe2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6603911..3166780 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.0
+ * Reject multiple values for a key in CQL grammar. (CASSANDRA-13369)
  * UDA fails without input rows (CASSANDRA-13399)
  * Fix compaction-stress by using daemonInitialization (CASSANDRA-13188)
  * V5 protocol flags decoding broken (CASSANDRA-13443)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a83efe2/src/antlr/Parser.g
--
diff --git a/src/antlr/Parser.g b/src/antlr/Parser.g
index 3d06dc3..e5b7584 100644
--- a/src/antlr/Parser.g
+++ b/src/antlr/Parser.g
@@ -127,7 +127,10 @@ options {
                 break;
         }

-        res.put(((Constants.Literal)entry.left).getRawText(), ((Constants.Literal)entry.right).getRawText());
+        if (res.put(((Constants.Literal)entry.left).getRawText(), ((Constants.Literal)entry.right).getRawText()) != null)
+        {
+            addRecognitionError(String.format("Multiple definition for property " + ((Constants.Literal)entry.left).getRawText()));
+        }
     }

     return res;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a83efe2/test/unit/org/apache/cassandra/cql3/CqlParserTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CqlParserTest.java 
b/test/unit/org/apache/cassandra/cql3/CqlParserTest.java
index 4b76dbc..4871c09 100644
--- a/test/unit/org/apache/cassandra/cql3/CqlParserTest.java
+++ b/test/unit/org/apache/cassandra/cql3/CqlParserTest.java
@@ -25,6 +25,7 @@ import org.antlr.runtime.CharStream;
 import org.antlr.runtime.CommonTokenStream;
 import org.antlr.runtime.RecognitionException;
 import org.antlr.runtime.TokenStream;
+import org.apache.cassandra.cql3.statements.PropertyDefinitions;
 
 import static org.junit.Assert.*;
 
@@ -75,6 +76,35 @@ public class CqlParserTest
         assertEquals(0, secondCounter.count);
     }

+    @Test
+    public void testDuplicateProperties() throws Exception
+    {
+        parseAndCountErrors("properties = { 'foo' : 'value1', 'bar': 'value2' };", 0, (p) -> p.properties(new PropertyDefinitions()));
+        parseAndCountErrors("properties = { 'foo' : 'value1', 'foo': 'value2' };", 1, (p) -> p.properties(new PropertyDefinitions()));
+        parseAndCountErrors("foo = 'value1' AND bar = 'value2' };", 0, (p) -> p.properties(new PropertyDefinitions()));
+        parseAndCountErrors("foo = 'value1' AND foo = 'value2' };", 1, (p) -> p.properties(new PropertyDefinitions()));
+    }
+
+    private void parseAndCountErrors(String cql, int expectedErrors, ParserOperation operation) throws RecognitionException
+    {
+        SyntaxErrorCounter counter = new SyntaxErrorCounter();
+        CharStream stream = new ANTLRStringStream(cql);
+        CqlLexer lexer = new CqlLexer(stream);
+        TokenStream tokenStream = new CommonTokenStream(lexer);
+        CqlParser parser = new CqlParser(tokenStream);
+        parser.addErrorListener(counter);
+
+        operation.perform(parser);
+
+        assertEquals(expectedErrors, counter.count);
+    }
+
+    @FunctionalInterface
+    private interface ParserOperation
+    {
+        void perform(CqlParser cqlParser) throws RecognitionException;
+    }
+
     private static final class SyntaxErrorCounter implements ErrorListener
     {
         private int count;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a83efe2/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java

[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-05-08 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001473#comment-16001473
 ] 

Michael Kjellman commented on CASSANDRA-13216:
--

Yup. I'm really sorry I let this slip. I owe you a beer the next time I see 
you. Will do it today.

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log, 
> TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}






[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar chooses last value. This should not be silent or should not be allowed.

2017-05-08 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13369:
---
Status: Ready to Commit  (was: Patch Available)

> If there are multiple values for a key, CQL grammar chooses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Attachments: 3.X.diff, test_stdout.txt, trunk.diff
>
>
> If multiple values are specified for the same key through CQL, the grammar 
> parses the map and the last value for the key silently wins. This behavior is bad.
> e.g. 
> {code}
> CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> {code}
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even 
> result in loss of data. This behavior should either not be silent or not be 
> allowed at all.  
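A sketch of the fix being requested: fail fast when a map literal repeats a key instead of letting the last value silently win. The class and method names below are hypothetical, not Cassandra's actual parser API; the sketch only shows the duplicate-detection idea.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StrictMapBuilder {
    private final Map<String, String> map = new LinkedHashMap<>();

    // Map.put returns the previous value for the key, so a non-null
    // result means the key was already present in this literal.
    public StrictMapBuilder put(String key, String value) {
        if (map.put(key, value) != null)
            throw new IllegalArgumentException("Duplicate key '" + key + "' in map literal");
        return this;
    }

    public static void main(String[] args) {
        try {
            new StrictMapBuilder()
                .put("class", "NetworkTopologyStrategy")
                .put("dc1", "2")
                .put("dc1", "5");   // the duplicate key from the example above
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints: Duplicate key 'dc1' in map literal
        }
    }
}
```

Rejecting the statement at parse time surfaces the conflict to the user, rather than silently applying RF = 5 to 'dc1'.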






[jira] [Commented] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2017-05-08 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001349#comment-16001349
 ] 

Benjamin Roth commented on CASSANDRA-12888:
---

I am absolutely aware of that! That's why I also added some tests. All unit 
tests ran well so far. I also ran a bunch of probably related dtests like the 
MV test suite, and it also looked good. Nevertheless, I don't want to rush you; 
take the time you need! I appreciate any feedback!

> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Assignee: Benjamin Roth
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in case 
> of existing MVs or active CDC, replayed on mutation basis:
> As described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which moved to the repaired set. The next repair run will stream 
> the same data back again, causing rows to bounce on and on between nodes on 
> each repair.
> See linked dtest on steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]
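The feedback loop described above can be modeled in a few lines. This is an illustrative sketch, not Cassandra's actual streaming code: the names and the constant are stand-ins for the real {{repaired_at}} sstable metadata, where 0 conventionally means "unrepaired".

```java
public class RepairedAtSketch {
    static final long UNREPAIRED = 0L;

    // Mutation-based replay (the MV/CDC path) pushes rows through the
    // normal write path; the resulting memtable flush carries no
    // stream-level metadata, so the new sstable starts out unrepaired
    // regardless of what the sender had recorded.
    static long replayThroughWritePath(long incomingRepairedAt) {
        return UNREPAIRED;
    }

    public static void main(String[] args) {
        long senderRepairedAt = 1494252000000L; // sender marked these rows repaired
        long receiverRepairedAt = replayThroughWritePath(senderRepairedAt);
        System.out.println(receiverRepairedAt == UNREPAIRED
            ? "rows land in the unrepaired set -> re-streamed on the next repair"
            : "repaired state preserved");
    }
}
```

Because the receiver's copy is unrepaired while the sender's is repaired, every incremental repair sees a mismatch and streams the same data again, which is the bouncing behavior the ticket describes.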






[jira] [Assigned] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2017-05-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reassigned CASSANDRA-12888:
---

Assignee: Benjamin Roth  (was: Paulo Motta)

> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Assignee: Benjamin Roth
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in case 
> of existing MVs or active CDC, replayed on mutation basis:
> As described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which moved to the repaired set. The next repair run will stream 
> the same data back again, causing rows to bounce on and on between nodes on 
> each repair.
> See linked dtest on steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]






[jira] [Commented] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2017-05-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001335#comment-16001335
 ] 

Paulo Motta commented on CASSANDRA-12888:
-

Sorry for the delay here. The approach looks good, but the devil is in the 
details, so we need to be careful about introducing changes in the critical 
path.

I will take a look this week.

> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Assignee: Paulo Motta
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in case 
> of existing MVs or active CDC, replayed on mutation basis:
> As described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which moved to the repaired set. The next repair run will stream 
> the same data back again, causing rows to bounce on and on between nodes on 
> each repair.
> See linked dtest on steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]






[jira] [Assigned] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2017-05-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reassigned CASSANDRA-12888:
---

Assignee: Paulo Motta  (was: Benjamin Roth)

> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Assignee: Paulo Motta
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in case 
> of existing MVs or active CDC, replayed on mutation basis:
> As described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which moved to the repaired set. The next repair run will stream 
> the same data back again, causing rows to bounce on and on between nodes on 
> each repair.
> See linked dtest on steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]






[jira] [Updated] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10130:

Reviewer: Paulo Motta

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure during 
> the MV/2i update can leave the received SSTables live when restarted, while the 
> MV/2i is only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.






[jira] [Updated] (CASSANDRA-13369) If there are multiple values for a key, CQL grammar chooses last value. This should not be silent or should not be allowed.

2017-05-08 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13369:
---
Attachment: test_stdout.txt

bootstrap_test.TestBootstrap.consistent_range_movement_false_with_replica_down_should_succeed_test
bootstrap_test.TestBootstrap.simultaneous_bootstrap_test
cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts
materialized_views_test.TestMaterializedViews.clustering_column_test
materialized_views_test.TestMaterializedViews.clustering_column_test
paxos_tests.TestPaxos.contention_test_many_threads
secondary_indexes_test.TestPreJoinCallback.write_survey_test
topology_test.TestTopology.size_estimates_multidc_test
topology_test.TestTopology.size_estimates_multidc_test

> If there are multiple values for a key, CQL grammar chooses last value. This 
> should not be silent or should not be allowed.
> --
>
> Key: CASSANDRA-13369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13369
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nachiket Patil
>Assignee: Nachiket Patil
>Priority: Minor
> Attachments: 3.X.diff, test_stdout.txt, trunk.diff
>
>
> If multiple values are specified for the same key through CQL, the grammar 
> parses the map and the last value for the key silently wins. This behavior is bad.
> e.g. 
> {code}
> CREATE KEYSPACE Excalibur WITH REPLICATION = {'class': 
> 'NetworkTopologyStrategy', 'dc1': 2, 'dc1': 5};
> {code}
> Parsing this statement, 'dc1' gets RF = 5. This can be catastrophic and may even 
> result in loss of data. This behavior should either not be silent or not be 
> allowed at all.  






[jira] [Updated] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-05-08 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8780:
--
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   4.0
   Status: Resolved  (was: Patch Available)

Committed as {{d345ef5d57e303cd8c642640bd65d7212bbb2436}}. Thanks, Ben!

> cassandra-stress should support multiple table operations
> -
>
> Key: CASSANDRA-8780
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8780
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benedict
>Assignee: Ben Slater
>  Labels: stress
> Fix For: 4.0
>
> Attachments: 8780-trunk-v3.patch
>
>







cassandra git commit: Add support for multiple table operations to cassandra-stress

2017-05-08 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk cff8dadbe -> 07c11ca45


Add support for multiple table operations to cassandra-stress

Patch by Ben Slater; reviewed by tjake for CASSANDRA-8780


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07c11ca4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07c11ca4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07c11ca4

Branch: refs/heads/trunk
Commit: 07c11ca45b8b55b11a91812a741e85a930ac537e
Parents: cff8dad
Author: Ben Slater 
Authored: Mon May 8 10:18:34 2017 -0400
Committer: T Jake Luciani 
Committed: Mon May 8 13:36:18 2017 -0400

--
 CHANGES.txt |  1 +
 doc/source/tools/cassandra_stress.rst   | 13 ++-
 doc/source/tools/stress-example.yaml|  1 +
 .../apache/cassandra/stress/StressProfile.java  | 14 ++--
 .../org/apache/cassandra/stress/StressYaml.java |  1 +
 .../SampledOpDistributionFactory.java   |  7 +-
 .../cassandra/stress/report/StressMetrics.java  |  4 +-
 .../SettingsCommandPreDefinedMixed.java |  9 +-
 .../stress/settings/SettingsCommandUser.java| 86 ++--
 .../stress/settings/SettingsSchema.java |  2 +-
 .../stress/settings/StressSettings.java | 21 +++--
 11 files changed, 110 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07c11ca4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ed69d14..6b5d114 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add multiple table operation support to cassandra-stress (CASSANDRA-8780)
  * Fix incorrect cqlsh results when selecting same columns multiple times 
(CASSANDRA-13262)
  * Fix WriteResponseHandlerTest is sensitive to test execution order 
(CASSANDRA-13421)
  * Improve incremental repair logging (CASSANDRA-13468)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/07c11ca4/doc/source/tools/cassandra_stress.rst
--
diff --git a/doc/source/tools/cassandra_stress.rst 
b/doc/source/tools/cassandra_stress.rst
index 417288f..322a981 100644
--- a/doc/source/tools/cassandra_stress.rst
+++ b/doc/source/tools/cassandra_stress.rst
@@ -107,7 +107,12 @@ doesn't scale.
 Profile
 +++
 
-User mode requires a profile defined in YAML. 
+User mode requires a profile defined in YAML.
+Multiple YAML files may be specified in which case operations in the ops 
argument are referenced as specname.opname.
+
+An identifier for the profile::
+
+  specname: staff_activities
 
 The keyspace for the test::
 
@@ -209,6 +214,12 @@ queries. Additionally the table will be truncated once 
before the test.
 
 The full example can be found here :download:`yaml <./stress-example.yaml>`
 
+Running a user mode test with multiple yaml files::
+cassandra-stress user profile=./example.yaml,./example2.yaml duration=1m 
"ops(ex1.insert=1,ex1.latest_event=1,ex2.insert=2)" truncate=once
+
+This will run operations as specified in both the example.yaml and 
example2.yaml files. example.yaml and example2.yaml can reference the same table
+ although care must be taken that the table definition is identical (data 
generation specs can be different).
+
 Graphing
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/07c11ca4/doc/source/tools/stress-example.yaml
--
diff --git a/doc/source/tools/stress-example.yaml 
b/doc/source/tools/stress-example.yaml
index 0384de2..17161af 100644
--- a/doc/source/tools/stress-example.yaml
+++ b/doc/source/tools/stress-example.yaml
@@ -1,3 +1,4 @@
+specname: example # identifier for this spec if running with multiple yaml 
files
 keyspace: example
 
 # Would almost always be network topology unless running something locally

http://git-wip-us.apache.org/repos/asf/cassandra/blob/07c11ca4/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java 
b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
index 3662632..2420d68 100644
--- a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
+++ b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
@@ -66,6 +66,7 @@ public class StressProfile implements Serializable
 private List extraSchemaDefinitions;
 public final String seedStr = "seed for stress";
 
+public String specName;
 public String keyspaceName;
 public String tableName;
 private Map 

[jira] [Commented] (CASSANDRA-13497) Add in-tree testing guidelines

2017-05-08 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001047#comment-16001047
 ] 

Blake Eggleston commented on CASSANDRA-13497:
-

It's meant as more of a companion to CONTRIBUTING.md (which also needs to be 
updated).

> Add in-tree testing guidelines
> --
>
> Key: CASSANDRA-13497
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13497
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> Per the discussions in the dev@ list "_Code quality, principles and rules_" 
> and "_\[DISCUSS] Implementing code quality principles, and rules (was: Code 
> quality, principles and rules)_", I've put together some guidelines on 
> testing contributions 
> [here|https://github.com/bdeggleston/cassandra/blob/testing-doc/TESTING.md]






[jira] [Comment Edited] (CASSANDRA-13504) Prevent duplicate notification messages

2017-05-08 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000868#comment-16000868
 ] 

Robert Stupp edited comment on CASSANDRA-13504 at 5/8/17 3:01 PM:
--

Proposed patch removes the schema notifications emitted via 
{{SchemaAlteringStatement}}. This also simplified calling 
{{grantPermissionsToCreator}}.
Also applied some code cleanup in {{MigrationManager}}.

||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:13504-dup-notif-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13504-dup-notif-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13504-dup-notif-trunk-dtest/lastSuccessfulBuild/]

(CI just triggered)


was (Author: snazy):
Proposed patch removes the schema notifications emitted via 
{{SchemaAlteringStatement}}. This also simplified calling 
{{grantPermissionsToCreator}}.
Also applied some code cleanup in {{MigrationManager}}.

||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:13504-dup-notif-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13504-dup-notif-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13504-dup-notif-trunk-dtest/lastSuccessfulBuild/]
(CI just triggered)

> Prevent duplicate notification messages
> ---
>
> Key: CASSANDRA-13504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13504
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.0
>
>
> Since CASSANDRA-9425 duplicate schema change notifications are being sent. 
> One time via {{SchemaAlteringStatement#announceMigration}} and one time via 
> {{Schema#merge}}.






[jira] [Updated] (CASSANDRA-13504) Prevent duplicate notification messages

2017-05-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-13504:
-
Status: Patch Available  (was: Open)

Proposed patch removes the schema notifications emitted via 
{{SchemaAlteringStatement}}. This also simplified calling 
{{grantPermissionsToCreator}}.
Also applied some code cleanup in {{MigrationManager}}.

||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:13504-dup-notif-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13504-dup-notif-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13504-dup-notif-trunk-dtest/lastSuccessfulBuild/]
(CI just triggered)

> Prevent duplicate notification messages
> ---
>
> Key: CASSANDRA-13504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13504
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.0
>
>
> Since CASSANDRA-9425 duplicate schema change notifications are being sent. 
> One time via {{SchemaAlteringStatement#announceMigration}} and one time via 
> {{Schema#merge}}.






[jira] [Updated] (CASSANDRA-13385) Delegate utests index name creation to CQLTester.createIndex

2017-05-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13385:
---
   Resolution: Fixed
Fix Version/s: 4.0
   Status: Resolved  (was: Ready to Commit)

Committed into trunk at cff8dadbe853c43fc53a827fce965d85e30d5de7

> Delegate utests index name creation to CQLTester.createIndex
> 
>
> Key: CASSANDRA-13385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13385
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>  Labels: cql, unit-test
> Fix For: 4.0
>
>
> Currently, many unit tests rely on {{CQLTester.createIndex}} to create 
> indexes. The index name should be specified by the test itself, for example:
> {code}
> createIndex("CREATE CUSTOM INDEX myindex ON %s(c) USING 
> 'org.apache.cassandra.index.internal.CustomCassandraIndex'");
> {code}
> Two different tests using the same index name can produce racy {{Index 
> myindex already exists}} errors due to the asynchronicity of 
> {{CQLTester.afterTest}} cleanup methods. 
> It would be nice to modify {{CQLTester.createIndex}} to make it generate its 
> own index names, as it is done by {{CQLTester.createTable}}:
> {code}
> createIndex("CREATE CUSTOM INDEX %s ON %s(c) USING 
> 'org.apache.cassandra.index.internal.CustomCassandraIndex'");
> {code}






[jira] [Created] (CASSANDRA-13504) Prevent duplicate notification messages

2017-05-08 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-13504:


 Summary: Prevent duplicate notification messages
 Key: CASSANDRA-13504
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13504
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp
Assignee: Robert Stupp
Priority: Minor
 Fix For: 4.0


Since CASSANDRA-9425 duplicate schema change notifications are being sent. One 
time via {{SchemaAlteringStatement#announceMigration}} and one time via 
{{Schema#merge}}.






[jira] [Commented] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-05-08 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000812#comment-16000812
 ] 

T Jake Luciani commented on CASSANDRA-8780:
---

Thanks, kicked off tests again :)

> cassandra-stress should support multiple table operations
> -
>
> Key: CASSANDRA-8780
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8780
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benedict
>Assignee: Ben Slater
>  Labels: stress
> Fix For: 3.11.x
>
> Attachments: 8780-trunk-v3.patch
>
>







[jira] [Commented] (CASSANDRA-11471) Add SASL mechanism negotiation to the native protocol

2017-05-08 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000796#comment-16000796
 ] 

Sam Tunnicliffe commented on CASSANDRA-11471:
-

Sorry I'm a bit late to the party, and further apologies that I've not had a 
chance to dig too deeply into this yet, but I have a couple of comments on the 
proposed implementation:

* The original intent of this ticket was to enable multiple mechanisms to be 
supported simultaneously (e.g. use common name auth for encrypted connections 
if the certificates would allow it and fall back to password auth if not), but 
the patch as it is doesn't exactly do that. It seems to me that an admin could 
provide a custom {{IAuthenticator}} which had a list of mechanisms > 1 but it 
feels like that doesn't really improve on the status quo that much. Ideally, I 
think we need to be able to configure multiple {{IAuthenticators}} in yaml and 
have the client choose which of them to interact with. There are a few places 
which make an assumption that there is only a single {{IAuthenticator}}, so 
those would need to be addressed.
* Following from that, I don't think that negotiation of the actual mechanism 
ought to be a function of the {{SASLNegotiator}} itself, at least not in its 
current form ({{NegotiatingSaslNegotiator}}). Maybe we can compose the 
available/supported {{IAuthenticators}} into some class which aggregates them & 
have it perform the negotiation (i.e. selecting the instance based on the 
client's chosen mechanism). Or maybe this just happens in 
{{AuthResponse::execute}}. Basically, the actual {{IAuthenticator}} doesn't 
need to get involved until its mechanism has been selected.
* Rather than adding a new factory method to {{IAuthenticator}}, wouldn't it be 
cleaner to add a {{withCertificates(Certificate[])}} method with a default 
no-op implementation to {{SaslNegotiator}}? That way, the branching in 
{{ServerConnection}} is simplified, the need for the {{Optional}} is removed 
(because we just don't call it if the certs are null) and {{IAuthenticator}} 
impls which don't care about certs don't have to change at all.
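The second bullet can be sketched roughly as follows. All names here are hypothetical, chosen only to illustrate the shape of the suggestion: an aggregate composes the configured authenticators, advertises their mechanisms, and defers to an individual authenticator only after the client has named its mechanism.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MechanismSelector {
    // Stand-in for IAuthenticator; here each instance supports one SASL mechanism.
    interface Authenticator { String mechanism(); }

    private final Map<String, Authenticator> byMechanism = new LinkedHashMap<>();

    MechanismSelector(List<Authenticator> configured) {
        for (Authenticator a : configured)
            byMechanism.put(a.mechanism(), a);
    }

    // What the server would advertise in place of a single classname.
    List<String> supportedMechanisms() { return List.copyOf(byMechanism.keySet()); }

    // Called once the client's response names its chosen mechanism; only
    // now does a concrete authenticator get involved.
    Authenticator select(String mechanism) {
        Authenticator a = byMechanism.get(mechanism);
        if (a == null)
            throw new IllegalArgumentException("Unsupported SASL mechanism: " + mechanism);
        return a;
    }

    public static void main(String[] args) {
        Authenticator plain = () -> "PLAIN";
        Authenticator external = () -> "EXTERNAL";
        MechanismSelector selector = new MechanismSelector(List.of(plain, external));
        System.out.println(selector.supportedMechanisms());          // prints: [PLAIN, EXTERNAL]
        System.out.println(selector.select("EXTERNAL").mechanism()); // prints: EXTERNAL
    }
}
```

Whether this selection lives in a dedicated class or directly in {{AuthResponse::execute}} is the open design question raised above; either way, no {{IAuthenticator}} participates before its mechanism is chosen.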


> Add SASL mechanism negotiation to the native protocol
> -
>
> Key: CASSANDRA-11471
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11471
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Sam Tunnicliffe
>Assignee: Ben Bromhead
>  Labels: client-impacting
> Fix For: 4.x
>
> Attachments: CASSANDRA-11471
>
>
> Introducing an additional message exchange into the authentication sequence 
> would allow us to support multiple authentication schemes and [negotiation of 
> SASL mechanisms|https://tools.ietf.org/html/rfc4422#section-3.2]. 
> The current {{AUTHENTICATE}} message sent from Client to Server includes the 
> java classname of the configured {{IAuthenticator}}. This could be superceded 
> by a new message which lists the SASL mechanisms supported by the server. The 
> client would then respond with a new message which indicates its choice of 
> mechanism.  This would allow the server to support multiple mechanisms, for 
> example enabling both {{PLAIN}} for username/password authentication and 
> {{EXTERNAL}} for a mechanism for extracting credentials from SSL 
> certificates\* (see the example in 
> [RFC-4422|https://tools.ietf.org/html/rfc4422#appendix-A]). Furthermore, the 
> server could tailor the list of supported mechanisms on a per-connection 
> basis, e.g. only offering certificate based auth to encrypted clients. 
> The client's response should include the selected mechanism and any initial 
> response data. This is mechanism-specific; the {{PLAIN}} mechanism consists 
> of a single round in which the client sends encoded credentials as the 
> initial response data and the server response indicates either success or 
> failure with no further challenges required.
> From a protocol perspective, after the mechanism negotiation the exchange 
> would continue as in protocol v4, with one or more rounds of 
> {{AUTH_CHALLENGE}} and {{AUTH_RESPONSE}} messages, terminated by an 
> {{AUTH_SUCCESS}} sent from Server to Client upon successful authentication or 
> an {{ERROR}} on auth failure. 
> XMPP performs mechanism negotiation in this way, 
> [RFC-3920|http://tools.ietf.org/html/rfc3920#section-6] includes a good 
> overview.
> \* Note: this would require some a priori agreement between client and server 
> over the implementation of the {{EXTERNAL}} mechanism.
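To make the single-round {{PLAIN}} exchange concrete, here is a minimal sketch of how a client could build the RFC 4616 initial response ({{[authzid] NUL authcid NUL passwd}}). This is illustrative only, not Cassandra code; the class and method names are invented.

```java
import java.nio.charset.StandardCharsets;

public class PlainSaslSketch {
    /**
     * Build the SASL PLAIN initial response defined by RFC 4616:
     * [authzid] NUL authcid NUL passwd. We send an empty authzid,
     * so the buffer begins with a NUL byte.
     */
    static byte[] plainInitialResponse(String username, String password) {
        byte[] user = username.getBytes(StandardCharsets.UTF_8);
        byte[] pass = password.getBytes(StandardCharsets.UTF_8);
        byte[] response = new byte[user.length + pass.length + 2];
        // response[0] is already 0 (empty authzid)
        System.arraycopy(user, 0, response, 1, user.length);
        // response[user.length + 1] is already 0 (separator NUL)
        System.arraycopy(pass, 0, response, user.length + 2, pass.length);
        return response;
    }
}
```

The server would decode this in a single round and answer with {{AUTH_SUCCESS}} or an {{ERROR}}.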



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Make CQLTester.createIndex return the index name

2017-05-08 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk aaf201128 -> cff8dadbe


Make CQLTester.createIndex return the index name

patch by Andrés de la Peña; reviewed by Benjamin Lerer for CASSANDRA-13385


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cff8dadb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cff8dadb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cff8dadb

Branch: refs/heads/trunk
Commit: cff8dadbe853c43fc53a827fce965d85e30d5de7
Parents: aaf2011
Author: Andrés de la Peña 
Authored: Mon May 8 15:54:01 2017 +0200
Committer: Benjamin Lerer 
Committed: Mon May 8 15:54:01 2017 +0200

--
 .../org/apache/cassandra/cql3/CQLTester.java| 42 ++--
 .../apache/cassandra/cql3/KeyCacheCqlTest.java  | 12 +++---
 .../validation/entities/SecondaryIndexTest.java | 35 
 .../apache/cassandra/index/CustomIndexTest.java |  4 +-
 4 files changed, 66 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cff8dadb/test/unit/org/apache/cassandra/cql3/CQLTester.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CQLTester.java 
b/test/unit/org/apache/cassandra/cql3/CQLTester.java
index 26437c9..5a73c8d 100644
--- a/test/unit/org/apache/cassandra/cql3/CQLTester.java
+++ b/test/unit/org/apache/cassandra/cql3/CQLTester.java
@@ -29,9 +29,12 @@ import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
 import java.util.stream.Collectors;
 
 import com.google.common.base.Objects;
+import com.google.common.base.Strings;
 import com.google.common.collect.ImmutableSet;
 import org.junit.*;
 import org.slf4j.Logger;
@@ -96,6 +99,14 @@ public abstract class CQLTester
 
 public static final List<ProtocolVersion> PROTOCOL_VERSIONS = new 
ArrayList<>(ProtocolVersion.SUPPORTED.size());
 
+private static final String CREATE_INDEX_NAME_REGEX = 
"(\\s*(\\w*|\"\\w*\")\\s*)";
+private static final String CREATE_INDEX_REGEX = 
String.format("\\A\\s*CREATE(?:\\s+CUSTOM)?\\s+INDEX" +
+   
"(?:\\s+IF\\s+NOT\\s+EXISTS)?\\s*" +
+   
"%s?\\s*ON\\s+(%

[jira] [Commented] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2017-05-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000749#comment-16000749
 ] 

Benjamin Lerer commented on CASSANDRA-10271:


[~bsnyder788] do you have some time available for finishing the patch? If not, 
I can take over.

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Brett Snyder
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: 10271-3.x.txt, cassandra-2.2-10271.txt
>
>
> Given a table like the following:
> {noformat}
> CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c));
> {noformat}
> We should support a query like this:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC;
> {noformat}
> Currently, this results in the following error:
> {noformat}
> [Invalid query] message="Order by currently only support the ordering of 
> columns following their declared order in the PRIMARY KEY"
> {noformat}
> However, since {{b}} is restricted by an equality restriction, we shouldn't 
> require it to be present in the {{ORDER BY}} clause.
> As a workaround, you can use this query instead:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC;
> {noformat}
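The proposed relaxation amounts to a validation rule: when matching the {{ORDER BY}} columns against the declared clustering order, columns bound by an equality restriction may be skipped. A hypothetical check (names invented for illustration; this is not the actual Cassandra validation code) could look like:

```java
import java.util.*;

public class OrderByCheckSketch {
    /**
     * Return true if orderBy lists clustering columns in their declared
     * order, allowing equality-restricted columns to be omitted.
     */
    static boolean isValidOrderBy(List<String> clustering,
                                  Set<String> equalityRestricted,
                                  List<String> orderBy) {
        int i = 0;
        for (String col : clustering) {
            if (i < orderBy.size() && orderBy.get(i).equals(col)) {
                i++;  // matches the next requested ORDER BY column
            } else if (!equalityRestricted.contains(col)) {
                // A non-restricted clustering column was skipped: only legal
                // if no further ORDER BY columns remain (prefix ordering).
                return i == orderBy.size();
            }
            // equality-restricted columns may be silently skipped
        }
        return i == orderBy.size();  // all requested columns consumed
    }
}
```

With {{b}} equality-restricted, {{ORDER BY c}} passes; without the restriction it is rejected, matching today's behaviour.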



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-05-08 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000735#comment-16000735
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 5/8/17 1:38 PM:
---

I am trying things out by merging your ideas [~iksaif], [~jjirsa], 
[~adejanovski]
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417

but I am not sure what to do if one node of the ring has not started Cassandra 
with -Dcassandra.unsafe.xxx
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417#diff-e8e282423dcbf34d30a3578c8dec15cdR101
for now I just disable it with a warning, even if the compactionParams say 
otherwise.

Let me know if this is not the right direction for you



was (Author: rgerard):
I am trying things out by merging your ideas [~iksaif], [~jjirsa], 
[~adejanovski]
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417

but I am not sure of what do if one node of the ring has not activated cassadra 
with -Dcassandra.unsafe.xxx
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417#diff-e8e282423dcbf34d30a3578c8dec15cdR101

Let me know if this is not the right direction for you


> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.
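The gist of the proposal can be sketched as follows. This is a simplified model under assumptions, not the actual {{getFullyExpiredSSTables}} code (class and field names are invented): a candidate is normally only droppable when everything it contains is older than the oldest data in overlapping sstables, and the new option would skip that overlap check.

```java
import java.util.*;

public class ExpiredSSTableSketch {
    /** Minimal stand-in for an sstable's expiration metadata. */
    static class Table {
        final long maxLocalDeletionTime; // when its newest cell expires
        final long maxTimestamp;         // newest write it contains
        final long minTimestamp;         // oldest write it contains
        Table(long maxLocalDeletionTime, long maxTimestamp, long minTimestamp) {
            this.maxLocalDeletionTime = maxLocalDeletionTime;
            this.maxTimestamp = maxTimestamp;
            this.minTimestamp = minTimestamp;
        }
    }

    static List<Table> fullyExpired(List<Table> candidates, List<Table> overlapping,
                                    long gcBefore, boolean ignoreOverlaps) {
        // Oldest write among overlapping sstables; a candidate holding newer
        // writes than this could still shadow live data, so it is blocked.
        long minOverlapTimestamp = ignoreOverlaps
                ? Long.MAX_VALUE
                : overlapping.stream().mapToLong(t -> t.minTimestamp).min()
                             .orElse(Long.MAX_VALUE);
        List<Table> expired = new ArrayList<>();
        for (Table t : candidates)
            if (t.maxLocalDeletionTime < gcBefore
                    && (ignoreOverlaps || t.maxTimestamp < minOverlapTimestamp))
                expired.add(t);
        return expired;
    }
}
```

With {{ignoreOverlaps}} set, an expired sstable is dropped even while an overlapping sstable still holds older, shadowable writes, which is exactly the trade-off (possible data resurrection) described above.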



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-05-08 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000735#comment-16000735
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 5/8/17 1:37 PM:
---

I am trying things out by merging your ideas [~iksaif], [~jjirsa], 
[~adejanovski]
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417

but I am not sure what to do if one node of the ring has not started Cassandra 
with -Dcassandra.unsafe.xxx
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417#diff-e8e282423dcbf34d30a3578c8dec15cdR101

Let me know if this is not the right direction for you



was (Author: rgerard):
I am trying things out by merging your ideas [~iksaif] [~jjirsa] [~adejanovski]
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417

but I am not sure of what do if one node of the ring has not activated cassadra 
with -Dcassandra.unsafe.xxx
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417#diff-e8e282423dcbf34d30a3578c8dec15cdR101

Let me know if this is not the right direction for you


> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-05-08 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000735#comment-16000735
 ] 

Romain GERARD commented on CASSANDRA-13418:
---

I am trying things out by merging your ideas [~iksaif] [~jjirsa] [~adejanovski]
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417

but I am not sure what to do if one node of the ring has not started Cassandra 
with -Dcassandra.unsafe.xxx
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417#diff-e8e282423dcbf34d30a3578c8dec15cdR101

Let me know if this is not the right direction for you


> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13385) Delegate utests index name creation to CQLTester.createIndex

2017-05-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000693#comment-16000693
 ] 

Benjamin Lerer commented on CASSANDRA-13385:


Thanks for the patch. +1

> Delegate utests index name creation to CQLTester.createIndex
> 
>
> Key: CASSANDRA-13385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13385
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>  Labels: cql, unit-test
>
> Currently, many unit tests rely on {{CQLTester.createIndex}} to create 
> indexes. The index name should be specified by the test itself, for example:
> {code}
> createIndex("CREATE CUSTOM INDEX myindex ON %s(c) USING 
> 'org.apache.cassandra.index.internal.CustomCassandraIndex'");
> {code}
> Two different tests using the same index name can produce racy {{Index 
> myindex already exists}} errors due to the asynchronicity of 
> {{CQLTester.afterTest}} cleanup methods. 
> It would be nice to modify {{CQLTester.createIndex}} to make it generate its 
> own index names, as it is done by {{CQLTester.createTable}}:
> {code}
> createIndex("CREATE CUSTOM INDEX %s ON %s(c) USING 
> 'org.apache.cassandra.index.internal.CustomCassandraIndex'");
> {code}
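A minimal sketch of the suggested approach (illustrative only; the real change lives in {{CQLTester}} and the helper names here are invented): generate a fresh, unique index name per call and substitute it into the statement, mirroring what {{createTable}} does, then hand the name back so the test can refer to the index later.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IndexNameSketch {
    private static final AtomicInteger seqNumber = new AtomicInteger();

    /**
     * Fill the first %s with a generated, test-unique index name and the
     * second %s with the table name. Returns {indexName, formattedQuery}.
     */
    static String[] createIndexStatement(String query, String table) {
        String indexName = "test_index_" + seqNumber.incrementAndGet();
        return new String[] { indexName, String.format(query, indexName, table) };
    }
}
```

Because each call produces a distinct name, two tests running the same statement can no longer race on {{Index myindex already exists}}.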



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13385) Delegate utests index name creation to CQLTester.createIndex

2017-05-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13385:
---
Status: Ready to Commit  (was: Patch Available)

> Delegate utests index name creation to CQLTester.createIndex
> 
>
> Key: CASSANDRA-13385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13385
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>  Labels: cql, unit-test
>
> Currently, many unit tests rely on {{CQLTester.createIndex}} to create 
> indexes. The index name should be specified by the test itself, for example:
> {code}
> createIndex("CREATE CUSTOM INDEX myindex ON %s(c) USING 
> 'org.apache.cassandra.index.internal.CustomCassandraIndex'");
> {code}
> Two different tests using the same index name can produce racy {{Index 
> myindex already exists}} errors due to the asynchronicity of 
> {{CQLTester.afterTest}} cleanup methods. 
> It would be nice to modify {{CQLTester.createIndex}} to make it generate its 
> own index names, as it is done by {{CQLTester.createTable}}:
> {code}
> createIndex("CREATE CUSTOM INDEX %s ON %s(c) USING 
> 'org.apache.cassandra.index.internal.CustomCassandraIndex'");
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13493) RPM Init: Service startup ordering

2017-05-08 Thread martin a langhoff (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000678#comment-16000678
 ] 

martin a langhoff commented on CASSANDRA-13493:
---

For the BEGIN INIT INFO section, it seems to me that they could be made the 
same. 

For the chkconfig bits, it's not priority, it's _ordering_. If you want to 
start late, you need a high number. A good hint is to look at similar services, 
or services with similar needs, which are popular and packaged in Fedora/RHEL. 
So we could look at PostgreSQL or MySQL, which need the network, network-based 
storage, etc. And they both have "chkconfig: - 64 36". So we could match those, 
or keep the ones in my patch which are more conservative.
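For illustration, an init-script header matching the ordering discussed above might look like the following. This is a sketch, not the shipped file; the exact runlevels and numbers would come from the attached patch or from matching PostgreSQL/MySQL.

```shell
# chkconfig: - 64 36
# description: Cassandra distributed database
### BEGIN INIT INFO
# Provides:          cassandra
# Required-Start:    $network $named $remote_fs
# Required-Stop:     $network $named $remote_fs
# Short-Description: Cassandra daemon
### END INIT INFO
```

The second chkconfig number (64) is the start ordering, so the service starts late, after networking; the third (36) is the stop ordering, so it shuts down before the network does.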

> RPM Init: Service startup ordering
> --
>
> Key: CASSANDRA-13493
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13493
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: martin a langhoff
> Fix For: 3.11.0
>
> Attachments: 
> 0001-RPM-Init-ordering-start-after-network-and-name-servi.patch
>
>
> Currently, Cassandra is set up to start _before_ network and name services 
> come up, and to be torn down _after_ them, dangerously close to the final 
> shutdown call.
> A service daemon which may use network-based storage and serves requests 
> over a network needs to start clearly after network and network mounts come 
> up, and come down clearly before they do.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test

2017-05-08 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-13113:

Status: Ready to Commit  (was: Patch Available)

> test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
> ---
>
> Key: CASSANDRA-13113
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13113
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1466/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 582, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> {code}{code}
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [Native-Transport-Requests-1] 2017-01-08 21:10:55,056 Message.java:623 
> - Unexpected exception during request; channel = [id: 0xf39c6dae, 
> L:/127.0.0.2:9042 - R:/127.0.0.1:43640]
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
> consistency level QUORUM
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310)
>  ~[main/:na]
>   at org.apache.cassandra.service.ClientState.login(ClientState.java:271) 
> ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot 
> achieve consistency level QUORUM
>   at 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.<init>(StorageProxy.java:1734)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1696) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1642) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1557) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:964)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:282)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:252)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:511)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493)
>  ~[main/:na]
>   ... 13 common frames omitted
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: 

[jira] [Commented] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test

2017-05-08 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000663#comment-16000663
 ] 

Sam Tunnicliffe commented on CASSANDRA-13113:
-

Sorry [~ifesdjeen], this slipped by me. Your patch LGTM, though I think there's 
one other place where the wrapping & rethrowing can be removed - I pushed a 
commit 
[here|https://github.com/beobal/cassandra/commit/4d9525de6709ab3887c6af48785f64a82b92e40d]

> test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
> ---
>
> Key: CASSANDRA-13113
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13113
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1466/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 582, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> {code}{code}
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [Native-Transport-Requests-1] 2017-01-08 21:10:55,056 Message.java:623 
> - Unexpected exception during request; channel = [id: 0xf39c6dae, 
> L:/127.0.0.2:9042 - R:/127.0.0.1:43640]
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
> consistency level QUORUM
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310)
>  ~[main/:na]
>   at org.apache.cassandra.service.ClientState.login(ClientState.java:271) 
> ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot 
> achieve consistency level QUORUM
>   at 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.<init>(StorageProxy.java:1734)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1696) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1642) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1557) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:964)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:282)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:252)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:511)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493)
>  

[jira] [Commented] (CASSANDRA-13503) Segfault during compaction

2017-05-08 Thread James Ravn (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000615#comment-16000615
 ] 

James Ravn commented on CASSANDRA-13503:


{noformat}
Cassandra 2.1.17

java -version:
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)

uname -a (Ubuntu 16.04.2):
Linux ip-10-50-194-251 4.8.0-51-generic #54~16.04.1-Ubuntu SMP Wed Apr 26 
16:00:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

lstopo:
Machine (16GB total)
  NUMANode L#0 (P#0 7998MB) + Package L#0 + L3 L#0 (45MB) + L2 L#0 (256KB) + 
L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
PU L#0 (P#0)
PU L#1 (P#1)
  NUMANode L#1 (P#1 8047MB) + Package L#1 + L3 L#1 (45MB) + L2 L#1 (256KB) + 
L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
PU L#2 (P#2)
PU L#3 (P#3)
{noformat}

> Segfault during compaction
> --
>
> Key: CASSANDRA-13503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13503
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: {noformat}
> Cassandra 2.1.17
> java -version:
> java version "1.8.0_112"
> Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
> uname -a (Ubuntu 16.04.2):
> Linux ip-10-50-194-251 4.8.0-51-generic #54~16.04.1-Ubuntu SMP Wed Apr 26 
> 16:00:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> lstopo:
> Machine (16GB total)
>   NUMANode L#0 (P#0 7998MB) + Package L#0 + L3 L#0 (45MB) + L2 L#0 (256KB) + 
> L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
> PU L#0 (P#0)
> PU L#1 (P#1)
>   NUMANode L#1 (P#1 8047MB) + Package L#1 + L3 L#1 (45MB) + L2 L#1 (256KB) + 
> L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
> PU L#2 (P#2)
> PU L#3 (P#3)
> {noformat}
>Reporter: James Ravn
> Attachments: hs_err_pid29774.log
>
>
> One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
> like it was loading sstables from disk:
> {noformat}
> Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" 
> daemon [_thread_in_Java, id=30023, 
> stack(0x7f3dce922000,0x7f3dce963000)]
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0xbd49a8b6
> Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
> space=253k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 11168 C2 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
>  (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
> J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
> 0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
> J 14662 C2 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
>  (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
> J 13756 C2 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
>  (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
> J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
> (1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
> J 18294 C1 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
>  (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
> J 18503 C1 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
>  (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
> J 17908 C2 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
>  (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
> {noformat}
> No errors in kernel logs, and no other noticeable issues on the node.
> We're using {{offheap_objects}}, could that be related? Cassandra logs show 
> the usual memtable flushing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13503) Segfault during compaction

2017-05-08 Thread James Ravn (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Ravn updated CASSANDRA-13503:
---
Environment: Ubuntu 16.04.2  (was: {noformat}
Cassandra 2.1.17

java -version:
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)

uname -a (Ubuntu 16.04.2):
Linux ip-10-50-194-251 4.8.0-51-generic #54~16.04.1-Ubuntu SMP Wed Apr 26 
16:00:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

lstopo:
Machine (16GB total)
  NUMANode L#0 (P#0 7998MB) + Package L#0 + L3 L#0 (45MB) + L2 L#0 (256KB) + 
L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
PU L#0 (P#0)
PU L#1 (P#1)
  NUMANode L#1 (P#1 8047MB) + Package L#1 + L3 L#1 (45MB) + L2 L#1 (256KB) + 
L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
PU L#2 (P#2)
PU L#3 (P#3)
{noformat})

> Segfault during compaction
> --
>
> Key: CASSANDRA-13503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13503
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 16.04.2
>Reporter: James Ravn
> Attachments: hs_err_pid29774.log
>
>
> One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
> like it was loading sstables from disk:
> {noformat}
> Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" 
> daemon [_thread_in_Java, id=30023, 
> stack(0x7f3dce922000,0x7f3dce963000)]
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0xbd49a8b6
> Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
> space=253k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 11168 C2 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
>  (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
> J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
> 0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
> J 14662 C2 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
>  (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
> J 13756 C2 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
>  (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
> J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
> (1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
> J 18294 C1 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
>  (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
> J 18503 C1 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
>  (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
> J 17908 C2 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
>  (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
> {noformat}
> No errors in kernel logs, and no other noticeable issues on the node.
> We're using {{offheap_objects}}, could that be related? Cassandra logs show 
> the usual memtable flushing.






[jira] [Updated] (CASSANDRA-13503) Segfault during compaction

2017-05-08 Thread James Ravn (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Ravn updated CASSANDRA-13503:
---
Environment: Ubuntu 16.04.2, Java 1.8.0_112, Cassandra 2.1.17  (was: Ubuntu 
16.04.2)

> Segfault during compaction
> --
>
> Key: CASSANDRA-13503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13503
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 16.04.2, Java 1.8.0_112, Cassandra 2.1.17
>Reporter: James Ravn
> Attachments: hs_err_pid29774.log
>
>
> One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
> like it was loading sstables from disk:
> {noformat}
> Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" 
> daemon [_thread_in_Java, id=30023, 
> stack(0x7f3dce922000,0x7f3dce963000)]
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0xbd49a8b6
> Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
> space=253k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 11168 C2 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
>  (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
> J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
> 0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
> J 14662 C2 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
>  (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
> J 13756 C2 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
>  (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
> J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
> (1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
> J 18294 C1 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
>  (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
> J 18503 C1 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
>  (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
> J 17908 C2 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
>  (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
> {noformat}
> No errors in kernel logs, and no other noticeable issues on the node.
> We're using {{offheap_objects}}, could that be related? Cassandra logs show 
> the usual memtable flushing.






[jira] [Updated] (CASSANDRA-13503) Segfault during compaction

2017-05-08 Thread James Ravn (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Ravn updated CASSANDRA-13503:
---
Description: 
One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
like it was loading sstables from disk:

{noformat}
Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" daemon 
[_thread_in_Java, id=30023, stack(0x7f3dce922000,0x7f3dce963000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
0xbd49a8b6

Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
space=253k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 11168 C2 
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
 (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
J 14662 C2 
org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
 (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
J 13756 C2 
org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
 (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
(1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
J 18294 C1 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
J 18503 C1 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
J 17908 C2 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
 (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
{noformat}

No errors in kernel logs, and no other noticeable issues on the node.

We're using {{offheap_objects}}, could that be related? Cassandra logs show the 
usual memtable flushing.

  was:
One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
like it was loading sstables from disk:

{noformat}
Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" daemon 
[_thread_in_Java, id=30023, stack(0x7f3dce922000,0x7f3dce963000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
0xbd49a8b6

Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
space=253k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 11168 C2 
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
 (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
J 14662 C2 
org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
 (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
J 13756 C2 
org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
 (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
(1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
J 18294 C1 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
J 18503 C1 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
J 17908 C2 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
 (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
{noformat}

No errors in kernel logs, and no other noticeable issues on the node.


> Segfault during compaction
> --
>
> Key: CASSANDRA-13503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13503
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: {noformat}
> Cassandra 2.1.17
> java -version:
> java version "1.8.0_112"
> Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
> 

[jira] [Updated] (CASSANDRA-13503) Segfault during compaction

2017-05-08 Thread James Ravn (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Ravn updated CASSANDRA-13503:
---
Description: 
One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
like it was loading sstables from disk:

{noformat}
Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" daemon 
[_thread_in_Java, id=30023, stack(0x7f3dce922000,0x7f3dce963000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
0xbd49a8b6

Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
space=253k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 11168 C2 
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
 (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
J 14662 C2 
org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
 (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
J 13756 C2 
org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
 (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
(1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
J 18294 C1 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
J 18503 C1 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
J 17908 C2 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
 (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
{noformat}

No errors in kernel logs, and no other noticeable issues on the node.

  was:
One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
like it was loading sstables from disk:

{noformat}
Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" daemon 
[_thread_in_Java, id=30023, stack(0x7f3dce922000,0x7f3dce963000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
0xbd49a8b6

Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
space=253k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 11168 C2 
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
 (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
J 14662 C2 
org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
 (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
J 13756 C2 
org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
 (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
(1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
J 18294 C1 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
J 18503 C1 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
J 17908 C2 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
 (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
{noformat}

No errors in kernel logs, and no other noticeable issues on the node (lots of 
ram, not under heavy load).


> Segfault during compaction
> --
>
> Key: CASSANDRA-13503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13503
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: {noformat}
> Cassandra 2.1.17
> java -version:
> java version "1.8.0_112"
> Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
> uname -a (Ubuntu 16.04.2):
> Linux ip-10-50-194-251 4.8.0-51-generic 

[jira] [Updated] (CASSANDRA-13503) Segfault during compaction

2017-05-08 Thread James Ravn (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Ravn updated CASSANDRA-13503:
---
Reproduced In: 2.1.17  (was: 2.1.7)

> Segfault during compaction
> --
>
> Key: CASSANDRA-13503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13503
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: {noformat}
> Cassandra 2.1.17
> java -version:
> java version "1.8.0_112"
> Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
> uname -a (Ubuntu 16.04.2):
> Linux ip-10-50-194-251 4.8.0-51-generic #54~16.04.1-Ubuntu SMP Wed Apr 26 
> 16:00:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> lstopo:
> Machine (16GB total)
>   NUMANode L#0 (P#0 7998MB) + Package L#0 + L3 L#0 (45MB) + L2 L#0 (256KB) + 
> L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
> PU L#0 (P#0)
> PU L#1 (P#1)
>   NUMANode L#1 (P#1 8047MB) + Package L#1 + L3 L#1 (45MB) + L2 L#1 (256KB) + 
> L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
> PU L#2 (P#2)
> PU L#3 (P#3)
> {noformat}
>Reporter: James Ravn
> Attachments: hs_err_pid29774.log
>
>
> One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
> like it was loading sstables from disk:
> {noformat}
> Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" 
> daemon [_thread_in_Java, id=30023, 
> stack(0x7f3dce922000,0x7f3dce963000)]
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0xbd49a8b6
> Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
> space=253k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 11168 C2 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
>  (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
> J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
> 0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
> J 14662 C2 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
>  (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
> J 13756 C2 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
>  (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
> J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
> (1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
> J 18294 C1 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
>  (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
> J 18503 C1 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
>  (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
> J 17908 C2 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
>  (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
> {noformat}
> No errors in kernel logs, and no other noticeable issues on the node (lots of 
> ram, not under heavy load).






[jira] [Created] (CASSANDRA-13503) Segfault during compaction

2017-05-08 Thread James Ravn (JIRA)
James Ravn created CASSANDRA-13503:
--

 Summary: Segfault during compaction
 Key: CASSANDRA-13503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13503
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
 Environment: {noformat}
Cassandra 2.1.17

java -version:
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)

uname -a (Ubuntu 16.04.2):
Linux ip-10-50-194-251 4.8.0-51-generic #54~16.04.1-Ubuntu SMP Wed Apr 26 
16:00:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

lstopo:
Machine (16GB total)
  NUMANode L#0 (P#0 7998MB) + Package L#0 + L3 L#0 (45MB) + L2 L#0 (256KB) + 
L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
PU L#0 (P#0)
PU L#1 (P#1)
  NUMANode L#1 (P#1 8047MB) + Package L#1 + L3 L#1 (45MB) + L2 L#1 (256KB) + 
L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
PU L#2 (P#2)
PU L#3 (P#3)
{noformat}
Reporter: James Ravn
 Attachments: hs_err_pid29774.log

One of our cassandra nodes segfaulted. I've attached the hs_err.log. It looks 
like it was loading sstables from disk:

{noformat}
Current thread (0x7f3df4e7b930):  JavaThread "CompactionExecutor:2" daemon 
[_thread_in_Java, id=30023, stack(0x7f3dce922000,0x7f3dce963000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
0xbd49a8b6

Stack: [0x7f3dce922000,0x7f3dce963000],  sp=0x7f3dce961430,  free 
space=253k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 11168 C2 
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
 (48 bytes) @ 0x7f3de6097df4 [0x7f3de6097cc0+0x134]
J 2042 C2 com.google.common.collect.AbstractIterator.hasNext()Z (65 bytes) @ 
0x7f3de560fbc0 [0x7f3de560fb00+0xc0]
J 14662 C2 
org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(Ljava/util/Iterator;)Lorg/apache/cassandra/db/ColumnIndex;
 (36 bytes) @ 0x7f3de7143a08 [0x7f3de7143440+0x5c8]
J 13756 C2 
org.apache.cassandra.io.sstable.SSTableRewriter.append(Lorg/apache/cassandra/db/compaction/AbstractCompactedRow;)Lorg/apache/cassandra/db/RowIndexEntry;
 (119 bytes) @ 0x7f3de6a98b88 [0x7f3de6a985c0+0x5c8]
J 14627 C2 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow()V 
(1622 bytes) @ 0x7f3de710eed4 [0x7f3de710daa0+0x1434]
J 18294 C1 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (19 bytes) @ 0x7f3de7981aa4 [0x7f3de79818c0+0x1e4]
J 18503 C1 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(Lorg/apache/cassandra/db/compaction/CompactionManager$CompactionExecutorStatsCollector;)I
 (39 bytes) @ 0x7f3de7a4cf2c [0x7f3de7a4ce20+0x10c]
J 17908 C2 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run()V
 (182 bytes) @ 0x7f3de789e0a4 [0x7f3de789aca0+0x3404]
{noformat}

No errors in kernel logs, and no other noticeable issues on the node (lots of 
ram, not under heavy load).






[jira] [Updated] (CASSANDRA-8272) 2ndary indexes can return stale data

2017-05-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-8272:
-
Fix Version/s: (was: 2.1.x)
   3.0.x
   Status: Patch Available  (was: Open)

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Andrés de la Peña
> Fix For: 3.0.x
>
>
> When replicas return 2ndary index results, it is possible for a single 
> replica to return a stale result, and that result will be sent back to the 
> user, potentially violating the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the update but C responds to the read before 
> having applied the update, the now-stale result will be returned (since 
> C will return it and A or B will return nothing).
> A potential solution: when we read a tombstone in the index (provided we 
> make the index inherit the gcGrace of its parent CF), instead of skipping 
> that tombstone, we would insert into the result a corresponding range 
> tombstone.
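
The race described above can be sketched as a timeline (my summary of the 
scenario in the description, not text from the ticket):

{noformat}
t0  A, B, C: k=0 -> v='foo'; index entry 'foo' -> {k=0} on every replica
t1  coordinator sends UPDATE ... SET v = 'bar' at QUORUM
t2  A applies the update (index entry 'foo' tombstoned, 'bar' -> {k=0})
t3  B applies the update; QUORUM reached, the write is acked
t4  SELECT ... WHERE v = 'foo' at QUORUM reaches A and C
t5  C has not yet applied the update and returns k=0 for 'foo' (stale)
t6  A returns nothing, so the coordinator cannot tell C's hit is stale
{noformat}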






[jira] [Updated] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-10130:
--
Status: Patch Available  (was: Reopened)

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live on restart 
> while the MV/2i are only partially up to date.
> We could add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.






[jira] [Updated] (CASSANDRA-13275) Cassandra throws an exception during CQL select query filtering on map key

2017-05-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13275:
---
Reviewer: Benjamin Lerer

> Cassandra throws an exception during CQL select query filtering on map key 
> ---
>
> Key: CASSANDRA-13275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13275
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Abderrahmane CHRAIBI
>Assignee: Alex Petrov
>
> Env: cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4
> Using this table structure:
> {code}CREATE TABLE mytable (
> mymap frozen>> PRIMARY KEY
> )
> {code}
> Executing:
> {code}
> select * from mytable where mymap contains key UUID;
> {code}
> within cqlsh shows this message:
> {code}
> ServerError: java.lang.UnsupportedOperationException
> system.log:
> java.lang.UnsupportedOperationException: null
> at 
> org.apache.cassandra.cql3.restrictions.SingleColumnRestriction$ContainsRestriction.appendTo(SingleColumnRestriction.java:456)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.restrictions.PartitionKeySingleRestrictionSet.values(PartitionKeySingleRestrictionSet.java:86)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.restrictions.StatementRestrictions.getPartitionKeys(StatementRestrictions.java:585)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:474)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.getQuery(SelectStatement.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:227)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:219) 
> ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:204) 
> ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513)
>  [apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407)
>  [apache-cassandra-3.9.jar:3.9]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}
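
For reference, a minimal reproduction in the {{echo ... | bin/cqlsh}} style 
used elsewhere on this list might look like the following. Note that the table 
definition quoted above lost its type parameters in formatting ("frozen>>"), 
so the {{uuid}} key type (implied by the failing query) and the {{int}} value 
type here are assumptions:

{code}
echo "CREATE KEYSPACE IF NOT EXISTS ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};" | bin/cqlsh
echo "CREATE TABLE ks.mytable (mymap frozen<map<uuid, int>> PRIMARY KEY);" | bin/cqlsh
echo "SELECT * FROM ks.mytable WHERE mymap CONTAINS KEY 00000000-0000-0000-0000-000000000000;" | bin/cqlsh
{code}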






[jira] [Commented] (CASSANDRA-13493) RPM Init: Service startup ordering

2017-05-08 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000488#comment-16000488
 ] 

Stefan Podkowinski commented on CASSANDRA-13493:


The {{debian/init}} script contains the following init stanzas:

{noformat}
### BEGIN INIT INFO
# Provides:  cassandra
# Required-Start:$remote_fs $network $named $time
# Required-Stop: $remote_fs $network $named $time
# Should-Start:  ntp mdadm
# Should-Stop:   ntp mdadm
# Default-Start: 2 3 4 5
# Default-Stop:  0 1 6
# Short-Description: distributed storage system for structured data
# Description:   Cassandra is a distributed (peer-to-peer) system for
#the management and storage of structured data.
### END INIT INFO
{noformat}

Are there any reasons to use different values for Red Hat-based systems?

Regarding the changed {{chkconfig:}} directive: won't the patch increase 
Cassandra's startup priority? Doesn't that contradict the ticket 
description?
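For comparison, a Red Hat-style init script usually carries both a {{chkconfig:}} line and the same LSB stanza; a sketch of what an aligned header could look like on a Red Hat system — the priority values here are illustrative, not taken from the attached patch:

{noformat}
# chkconfig: - 80 20
# description: distributed storage system for structured data
### BEGIN INIT INFO
# Provides:          cassandra
# Required-Start:    $remote_fs $network $named $time
# Required-Stop:     $remote_fs $network $named $time
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO
{noformat}

With {{chkconfig: - 80 20}} the service is started late (S80) and stopped early (K20) relative to network services, which is the ordering the ticket asks for.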


> RPM Init: Service startup ordering
> --
>
> Key: CASSANDRA-13493
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13493
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: martin a langhoff
> Fix For: 3.11.0
>
> Attachments: 
> 0001-RPM-Init-ordering-start-after-network-and-name-servi.patch
>
>
> Currently, Cassandra is set up to start _before_ network and name services 
> come up, and to be torn down _after_ them, dangerously close to the final 
> shutdown call.
> A service daemon that may use network-based storage and serves requests over 
> a network needs to start clearly after the network and network mounts come 
> up, and to come down clearly before they do.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-13495) dtest failure in snapshot_test.TestSnapshot.test_snapshot_and_restore_dropping_a_column

2017-05-08 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov resolved CASSANDRA-13495.
-
Resolution: Duplicate

Fixed in [CASSANDRA-13483]

> dtest failure in 
> snapshot_test.TestSnapshot.test_snapshot_and_restore_dropping_a_column
> ---
>
> Key: CASSANDRA-13495
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13495
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_jdk8/342/testReport/snapshot_test/TestSnapshot/test_snapshot_and_restore_dropping_a_column
> {code}
> Error Message
> Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', ['refresh', 'ks', 
> 'cf']] exited with non-zero status; exit status: 1; 
> stdout: nodetool: Unknown keyspace/cf pair (ks.cf)
> See 'nodetool help' or 'nodetool help '.
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/snapshot_test.py", line 145, in 
> test_snapshot_and_restore_dropping_a_column
> node1.nodetool('refresh ks cf')
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 789, in nodetool
> return handle_external_tool_process(p, ['nodetool', '-h', 'localhost', 
> '-p', str(self.jmx_port), cmd.split()])
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 2002, in handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13492) RPM Init: don't attempt to start if it's running

2017-05-08 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000453#comment-16000453
 ] 

Stefan Podkowinski commented on CASSANDRA-13492:


I'd expect CASSANDRA-13434 to address this issue by letting systemd know about 
the PID file. 
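For the classic init script itself, the usual guard is a liveness check on the recorded PID before starting. A minimal sketch, with illustrative path and messages — a real script would point at /var/run/cassandra/cassandra.pid and launch the actual daemon:

```shell
#!/bin/sh
# Sketch of a "don't start if already running" guard for an init script's
# start action. PIDFILE and messages are illustrative.
PIDFILE="${PIDFILE:-/tmp/cassandra-demo.pid}"

start() {
    # kill -0 only probes whether the process exists; it sends no signal
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo "cassandra is already running (pid $(cat "$PIDFILE"))"
        return 1
    fi
    echo $$ > "$PIDFILE"   # stand-in for starting the real daemon
    echo "cassandra started"
}

start    # first call writes the pidfile
start    # second call refuses, instead of clobbering the pidfile
rm -f "$PIDFILE"
```

This keeps the pidfile of a running instance intact, so a later {{stop}} can still find the right process.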

> RPM Init: don't attempt to start if it's running
> 
>
> Key: CASSANDRA-13492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13492
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: martin a langhoff
> Fix For: 3.11.0
>
> Attachments: 0002-RPM-Init-avoid-starting-cassandra-if-it-is-up.patch
>
>
> We don't check whether Cassandra is running. Attempts to start Cassandra when 
> it's already running overwrite the pidfile, make a confusing mess of the 
> logfiles, and _fail to start, as the port is taken_; the init script then 
> cannot bring the first Cassandra process down because the pid file has been 
> clobbered.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13490) RPM Spec - disable binary check, improve readme instructions

2017-05-08 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000451#comment-16000451
 ] 

Stefan Podkowinski commented on CASSANDRA-13490:


Creating the tarball by running {{ant artifacts}} will create a snapshot file, 
e.g. {{build/apache-cassandra-2.2.10-SNAPSHOT-src.tar.gz}}, while rpmbuild is 
looking for {{build/apache-cassandra-2.2.10-src.tar.gz}}. In that case ant 
should be called as {{ant artifacts -Drelease=true}}.

Building RPMs for snapshots also requires other changes; some effort on that 
went into CASSANDRA-13487. [~martin.langh...@gmail.com], if you have any 
feedback or suggestions related to the RPM packaging, as implemented in the 
mentioned ticket, that would be much appreciated.
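The naming mismatch can be seen without running a full build; a small sketch of the two tarball names involved (version number reused from the comment above):

```shell
# A plain `ant artifacts` produces the -SNAPSHOT name below, while the RPM
# spec's Source0 expects the release name; `-Drelease=true` drops the suffix.
version="2.2.10"
snapshot="build/apache-cassandra-${version}-SNAPSHOT-src.tar.gz"
release=$(echo "$snapshot" | sed 's/-SNAPSHOT//')
echo "$snapshot"   # what `ant artifacts` creates
echo "$release"    # what rpmbuild looks for
```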

> RPM Spec - disable binary check, improve readme instructions
> 
>
> Key: CASSANDRA-13490
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13490
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: martin a langhoff
> Fix For: 3.11.0
>
> Attachments: 
> 0001-RPM-rpmbuild-tolerate-binaries-in-noarch-README-upda.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test

2017-05-08 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000429#comment-16000429
 ] 

Alex Petrov commented on CASSANDRA-13113:
-

Hey [~beobal], will you have some time to review this one soon? It'd be 
great to get it committed, as it'll make our build much greener.

> test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
> ---
>
> Key: CASSANDRA-13113
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13113
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1466/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 582, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> {code}{code}
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [Native-Transport-Requests-1] 2017-01-08 21:10:55,056 Message.java:623 
> - Unexpected exception during request; channel = [id: 0xf39c6dae, 
> L:/127.0.0.2:9042 - R:/127.0.0.1:43640]
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
> consistency level QUORUM
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310)
>  ~[main/:na]
>   at org.apache.cassandra.service.ClientState.login(ClientState.java:271) 
> ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot 
> achieve consistency level QUORUM
>   at 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.(StorageProxy.java:1734)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1696) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1642) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1557) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:964)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:282)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:252)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:511)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493)
>  ~[main/:na]
>   ... 13 common frames omitted
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-05-08 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000427#comment-16000427
 ] 

Alex Petrov commented on CASSANDRA-13216:
-

Do you think we can get this reviewed any time soon? It'd be great, as this is 
one of the test failures we hit quite often.
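The failure quoted below is a timing race: the test asserts on fully formatted log strings, so a 2 ms drift in the mean latency fails the comparison. A sketch of the fragile pattern next to a tolerance-based check — numbers are taken from the failure message, function names are illustrative:

```python
# Fragile: comparing formatted strings makes any latency drift a failure.
def fragile_check(actual_ms: int, expected_ms: int) -> bool:
    return (f"Mean cross-node dropped latency: {actual_ms} ms"
            == f"Mean cross-node dropped latency: {expected_ms} ms")

# Tolerant: compare the numbers with an allowed slack instead.
def tolerant_check(actual_ms: int, expected_ms: int, slack_ms: int = 5) -> bool:
    return abs(actual_ms - expected_ms) <= slack_ms

assert not fragile_check(2728, 2730)   # 2 ms drift fails the string compare
assert tolerant_check(2728, 2730)      # but passes within tolerance
```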

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log, 
> TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13275) Cassandra throws an exception during CQL select query filtering on map key

2017-05-08 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13275:

Status: Patch Available  (was: Open)

> Cassandra throws an exception during CQL select query filtering on map key 
> ---
>
> Key: CASSANDRA-13275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13275
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Abderrahmane CHRAIBI
>Assignee: Alex Petrov
>
> Env: cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4
> Using this table structure:
> {code}CREATE TABLE mytable (
> mymap frozen>> PRIMARY KEY
> )
> {code}
> Executing:
> {code} select * from mytable where mymap contains key UUID;
> {code}
> Within cqlsh shows this message:
> {code}
> ServerError: java.lang.UnsupportedOperationException
> system.log:
> java.lang.UnsupportedOperationException: null
> at 
> org.apache.cassandra.cql3.restrictions.SingleColumnRestriction$ContainsRestriction.appendTo(SingleColumnRestriction.java:456)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.restrictions.PartitionKeySingleRestrictionSet.values(PartitionKeySingleRestrictionSet.java:86)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.restrictions.StatementRestrictions.getPartitionKeys(StatementRestrictions.java:585)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:474)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.getQuery(SelectStatement.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:227)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:219) 
> ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:204) 
> ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
>  ~[apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513)
>  [apache-cassandra-3.9.jar:3.9]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407)
>  [apache-cassandra-3.9.jar:3.9]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13478) SASIndex has a time to live issue in Cassandra

2017-05-08 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000417#comment-16000417
 ] 

Alex Petrov commented on CASSANDRA-13478:
-

Could you please explain why you think this is related to the TTL? I've tried 
it locally, and querying by the timestamp field works fine with/without flush 
and with/without restart. There's no default TTL on the table; do you perhaps 
use a TTL on insert? Could you paste a complete example that reproduces it?

> SASIndex has a time to live issue in Cassandra
> --
>
> Key: CASSANDRA-13478
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13478
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native 
> protocol v4 | ubuntu 14.04
>Reporter: jack chen
>Assignee: Alex Petrov
>Priority: Minor
> Attachments: schema
>
>
> I have a table; the schema can be seen in the attached file.
> I would like to query the data using the timestamp data type with lt/gt/eq 
> as a query condition, e.g.:
> {code}
> CREATE TABLE XXX.userlist (
> userid text PRIMARY KEY,
> lastposttime timestamp
> );
> Select * from userlist where lastposttime > '2017-04-01 16:00:00+';
> {code}
> There are 2 scenarios:
> If I insert the data and then select it, the result is correct.
> But if I insert data, restart Cassandra the next day, and then select the 
> data, no data is returned.
> The difference is that there is no service restart the next day in the 
> first scenario. The data is actually still live in Cassandra, but the 
> timestamp can’t be used as the query condition.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13478) SASIndex has a time to live issue in Cassandra

2017-05-08 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13478:

Description: 
I have a table; the schema can be seen in the attached file.

I would like to query the data using the timestamp data type with lt/gt/eq as 
a query condition, e.g.:
{code}
CREATE TABLE XXX.userlist (
userid text PRIMARY KEY,
lastposttime timestamp
);
Select * from userlist where lastposttime > '2017-04-01 16:00:00+';
{code}

There are 2 scenarios:
If I insert the data and then select it, the result is correct.
But if I insert data, restart Cassandra the next day, and then select the 
data, no data is returned.

The difference is that there is no service restart the next day in the first 
scenario. The data is actually still live in Cassandra, but the timestamp 
can’t be used as the query condition.

  was:
I have a table; the schema can be seen in the attached file.

I would like to query the data using the timestamp data type with lt/gt/eq as 
a query condition, e.g.:
Select * from userlist where lastposttime > '2017-04-01 16:00:00+';

There are 2 scenarios:
If I insert the data and then select it, the result is correct.
But if I insert data, restart Cassandra the next day, and then select the 
data, no data is returned.

The difference is that there is no service restart the next day in the first 
scenario. The data is actually still live in Cassandra, but the timestamp 
can’t be used as the query condition.


> SASIndex has a time to live issue in Cassandra
> --
>
> Key: CASSANDRA-13478
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13478
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native 
> protocol v4 | ubuntu 14.04
>Reporter: jack chen
>Assignee: Alex Petrov
>Priority: Minor
> Attachments: schema
>
>
> I have a table; the schema can be seen in the attached file.
> I would like to query the data using the timestamp data type with lt/gt/eq 
> as a query condition, e.g.:
> {code}
> CREATE TABLE XXX.userlist (
> userid text PRIMARY KEY,
> lastposttime timestamp
> );
> Select * from userlist where lastposttime > '2017-04-01 16:00:00+';
> {code}
> There are 2 scenarios:
> If I insert the data and then select it, the result is correct.
> But if I insert data, restart Cassandra the next day, and then select the 
> data, no data is returned.
> The difference is that there is no service restart the next day in the 
> first scenario. The data is actually still live in Cassandra, but the 
> timestamp can’t be used as the query condition.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10786) Include hash of result set metadata in prepared statement id

2017-05-08 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-10786:

Reviewer:   (was: Tyler Hobbs)

> Include hash of result set metadata in prepared statement id
> 
>
> Key: CASSANDRA-10786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10786
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Olivier Michallat
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, protocolv5
> Fix For: 4.x
>
>
> *_Initial description:_*
> This is a follow-up to CASSANDRA-7910, which was about invalidating a 
> prepared statement when the table is altered, to force clients to update 
> their local copy of the metadata.
> There's still an issue if multiple clients are connected to the same host. 
> The first client to execute the query after the cache was invalidated will 
> receive an UNPREPARED response, re-prepare, and update its local metadata. 
> But other clients might miss it entirely (the MD5 hasn't changed), and they 
> will keep using their old metadata. For example:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, 
> clientA and clientB both have a cache of the metadata (columns b and c) 
> locally
> # column a gets added to the table, C* invalidates its cache entry
> # clientA sends an EXECUTE request for md5 abc123, gets UNPREPARED response, 
> re-prepares on the fly and updates its local metadata to (a, b, c)
> # prepared statement is now in C*’s cache again, with the same md5 abc123
> # clientB sends an EXECUTE request for id abc123. Because the cache has been 
> populated again, the query succeeds. But clientB still has not updated its 
> metadata, it’s still (b,c)
> One solution that was suggested is to include a hash of the result set 
> metadata in the md5. This way the md5 would change at step 3, and any client 
> using the old md5 would get an UNPREPARED, regardless of whether another 
> client already reprepared.
> -
> *_Resolution (2017/02/13):_*
> The following changes were made to native protocol v5:
> - the PREPARED response includes {{result_metadata_id}}, a hash of the result 
> set metadata.
> - every EXECUTE message must provide {{result_metadata_id}} in addition to 
> the prepared statement id. If it doesn't match the current one on the server, 
> it means the client is operating on a stale schema.
> - to notify the client, the server returns a ROWS response with a new 
> {{Metadata_changed}} flag, the new {{result_metadata_id}} and the updated 
> result metadata (this overrides the {{No_metadata}} flag, even if the client 
> had requested it)
> - the client updates its copy of the result metadata before it decodes the 
> results.
> So the scenario above would now look like:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, and 
> result set (b, c) that hashes to cde456
> # column a gets added to the table, C* does not invalidate its cache entry, 
> but only updates the result set to (a, b, c) which hashes to fff789
> # client sends an EXECUTE request for (statementId=abc123, resultId=cde456) 
> and skip_metadata flag
> # cde456!=fff789, so C* responds with ROWS(..., no_metadata=false, 
> metadata_changed=true, new_metadata_id=fff789,col specs for (a,b,c))
> # client updates its column specifications, and will send the next execute 
> queries with (statementId=abc123, resultId=fff789)
> This works the same with multiple clients.
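The exchange described above can be modeled in a few lines; a toy sketch of the server side (hashes shortened, class and method names illustrative — this is not the actual server API):

```python
import hashlib

# Toy model of the protocol-v5 flow: the server keys prepared statements by a
# hash of the query, and separately hashes the result-set column list.
def digest(*parts: str) -> str:
    return hashlib.md5("|".join(parts).encode()).hexdigest()[:6]

class Server:
    def __init__(self, columns):
        self.columns = list(columns)

    def prepare(self, query):
        # returns (statement_id, result_metadata_id)
        return digest(query), digest(*self.columns)

    def alter_table(self, new_columns):
        # statement_id stays the same; only the result metadata hash changes
        self.columns = list(new_columns)

    def execute(self, statement_id, result_metadata_id):
        current = digest(*self.columns)
        if result_metadata_id != current:
            # ROWS response with metadata_changed=true, new id, new col specs
            return {"metadata_changed": True, "new_metadata_id": current,
                    "columns": list(self.columns)}
        return {"metadata_changed": False}

server = Server(["b", "c"])
stmt_id, meta_id = server.prepare("SELECT * FROM t")
server.alter_table(["a", "b", "c"])        # column a added; stmt_id unchanged
resp = server.execute(stmt_id, meta_id)    # stale client id -> flagged
assert resp["metadata_changed"] and resp["columns"] == ["a", "b", "c"]
resp = server.execute(stmt_id, resp["new_metadata_id"])
assert not resp["metadata_changed"]        # client has caught up
```

Every client holding the stale result id gets the {{Metadata_changed}} response on its next execute, which is why the scheme works with multiple clients.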



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org