[jira] [Commented] (CASSANDRA-12014) IndexSummary > 2G causes an assertion error

2017-09-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164181#comment-16164181
 ] 

Stefania commented on CASSANDRA-12014:
--

So, both builds were fine, and in both cases the tests were skipped (although I 
don't understand why I got a different result by ssh-ing into the container). I 
would still prefer to rely on the environment variable, though, since the memory 
limit was arbitrary and not related to what the test actually uses (for that 
we'd need a way to calculate the off-heap memory available).

> IndexSummary > 2G causes an assertion error
> ---
>
> Key: CASSANDRA-12014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> {noformat}
> ERROR [CompactionExecutor:1546280] 2016-06-01 13:21:00,444  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:1546280,1,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.maybeAddEntry(IndexSummaryBuilder.java:171)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.append(SSTableWriter.java:634)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.afterAppend(SSTableWriter.java:179)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:205) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_51]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> {noformat}
> I believe this can be fixed by raising the min_index_interval, but we should 
> have a better method of coping with this than throwing the AE.
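A rough back-of-the-envelope sketch of why raising {{min_index_interval}} helps: the summary holds roughly one entry per {{min_index_interval}} partitions, so a larger interval shrinks the summary below the 2 GB limit of int-addressed buffers. The per-entry size below is an assumption for illustration, not IndexSummary's real layout:

```java
// Illustrative estimate only; bytesPerEntry is an assumed figure, not
// the actual on-heap layout of IndexSummary entries.
public class SummarySizeEstimate
{
    static long summaryBytes(long partitions, int minIndexInterval, int bytesPerEntry)
    {
        return (partitions / minIndexInterval) * bytesPerEntry;
    }

    public static void main(String[] args)
    {
        long partitions = 10_000_000_000L; // hypothetical very large sstable
        // With the default min_index_interval of 128 this crosses 2 GB...
        System.out.println(summaryBytes(partitions, 128, 32) > Integer.MAX_VALUE);
        // ...while an interval of 1024 keeps it well under the limit.
        System.out.println(summaryBytes(partitions, 1024, 32) > Integer.MAX_VALUE);
    }
}
```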



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-09-12 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164175#comment-16164175
 ] 

Jeff Jirsa commented on CASSANDRA-13863:


I was similarly surprised when I first saw this in the code. I suspect the 
logic is that since we've already done the read, it seems silly NOT to repair 
the data, but TWCS/DTCS is a pretty good example of it not being a strict win. 



> Speculative retry causes read repair even if read_repair_chance is 0.0.
> ---
>
> Key: CASSANDRA-13863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13863
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Hiro Wakabayashi
>
> {{read_repair_chance = 0.0}} and {{dclocal_read_repair_chance = 0.0}} should 
> cause no read repair, but read repair happens with speculative retry. I think 
> {{read_repair_chance = 0.0}} and {{dclocal_read_repair_chance = 0.0}} should 
> stop read repair completely because the user wants to stop read repair in 
> some cases.
> {panel:title=Case 1: TWCS users}
> The 
> [documentation|http://cassandra.apache.org/doc/latest/operating/compaction.html?highlight=read_repair_chance]
>  states how to disable read repair.
> {quote}While TWCS tries to minimize the impact of comingled data, users 
> should attempt to avoid this behavior. Specifically, users should avoid 
> queries that explicitly set the timestamp via CQL USING TIMESTAMP. 
> Additionally, users should run frequent repairs (which streams data in such a 
> way that it does not become comingled), and disable background read repair by 
> setting the table’s read_repair_chance and dclocal_read_repair_chance to 0.
> {quote}
> {panel}
> {panel:title=Case 2. Strict SLA for read latency}
> At peak times, read latency is key for us, but read repair causes higher 
> latency than no read repair. We can use anti-entropy repair during off-peak 
> times for consistency.
> {panel}
>  
> Here is my procedure to reproduce the problem.
> h3. 1. Create a cluster and set {{hinted_handoff_enabled}} to false.
> {noformat}
> $ ccm create -v 3.0.14 -n 3 cluster_3.0.14
> $ for h in $(seq 1 3) ; do perl -pi -e 's/hinted_handoff_enabled: 
> true/hinted_handoff_enabled: false/' 
> ~/.ccm/cluster_3.0.14/node$h/conf/cassandra.yaml ; done
> $ for h in $(seq 1 3) ; do grep "hinted_handoff_enabled:" 
> ~/.ccm/cluster_3.0.14/node$h/conf/cassandra.yaml ; done
> hinted_handoff_enabled: false
> hinted_handoff_enabled: false
> hinted_handoff_enabled: false
> $ ccm start{noformat}
> h3. 2. Create a keyspace and a table.
> {noformat}
> $ ccm node1 cqlsh
> DROP KEYSPACE IF EXISTS ks1;
> CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '3'}  AND durable_writes = true;
> CREATE TABLE ks1.t1 (
> key text PRIMARY KEY,
> value blob
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = 'ALWAYS';
> QUIT;
> {noformat}
> h3. 3. Stop node2 and node3. Insert a row.
> {noformat}
> $ ccm node3 stop && ccm node2 stop && ccm status
> Cluster: 'cluster_3.0.14'
> --
> node1: UP
> node3: DOWN
> node2: DOWN
> $ ccm node1 cqlsh -k ks1 -e "consistency; tracing on; insert into ks1.t1 
> (key, value) values ('mmullass', bigintAsBlob(1));"
> Current consistency level is ONE.
> Now Tracing is enabled
> Tracing session: 01d74590-97cb-11e7-8ea7-c1bd4d549501
>  activity | timestamp | source | source_elapsed
> -++---+
>  Execute CQL3 query | 2017-09-12 23:59:42.316000 | 127.0.0.1 | 0
>  Parsing insert into ks1.t1 (key, value) values ('mmullass', bigintAsBlob(1)); [SharedPool-Worker-1] | 2017-09-12 23:59:42.319000 | 127.0.0.1 | 4323
>  Preparing statement 

[jira] [Updated] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-09-12 Thread Hiro Wakabayashi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiro Wakabayashi updated CASSANDRA-13863:
-
Description: 
{{read_repair_chance = 0.0}} and {{dclocal_read_repair_chance = 0.0}} should 
cause no read repair, but read repair happens with speculative retry. I think 
{{read_repair_chance = 0.0}} and {{dclocal_read_repair_chance = 0.0}} should 
stop read repair completely because the user wants to stop read repair in some 
cases.

{panel:title=Case 1: TWCS users}
The 
[documentation|http://cassandra.apache.org/doc/latest/operating/compaction.html?highlight=read_repair_chance]
 states how to disable read repair.
{quote}While TWCS tries to minimize the impact of comingled data, users should 
attempt to avoid this behavior. Specifically, users should avoid queries that 
explicitly set the timestamp via CQL USING TIMESTAMP. Additionally, users 
should run frequent repairs (which streams data in such a way that it does not 
become comingled), and disable background read repair by setting the table’s 
read_repair_chance and dclocal_read_repair_chance to 0.
{quote}
{panel}
{panel:title=Case 2. Strict SLA for read latency}
At peak times, read latency is key for us, but read repair causes higher 
latency than no read repair. We can use anti-entropy repair during off-peak 
times for consistency.
{panel}
 
Here is my procedure to reproduce the problem.

h3. 1. Create a cluster and set {{hinted_handoff_enabled}} to false.
{noformat}
$ ccm create -v 3.0.14 -n 3 cluster_3.0.14
$ for h in $(seq 1 3) ; do perl -pi -e 's/hinted_handoff_enabled: 
true/hinted_handoff_enabled: false/' 
~/.ccm/cluster_3.0.14/node$h/conf/cassandra.yaml ; done
$ for h in $(seq 1 3) ; do grep "hinted_handoff_enabled:" 
~/.ccm/cluster_3.0.14/node$h/conf/cassandra.yaml ; done
hinted_handoff_enabled: false
hinted_handoff_enabled: false
hinted_handoff_enabled: false
$ ccm start{noformat}
h3. 2. Create a keyspace and a table.
{noformat}
$ ccm node1 cqlsh
DROP KEYSPACE IF EXISTS ks1;
CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '3'}  AND durable_writes = true;
CREATE TABLE ks1.t1 (
key text PRIMARY KEY,
value blob
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = 'ALWAYS';
QUIT;
{noformat}
h3. 3. Stop node2 and node3. Insert a row.
{noformat}
$ ccm node3 stop && ccm node2 stop && ccm status
Cluster: 'cluster_3.0.14'
--
node1: UP
node3: DOWN
node2: DOWN

$ ccm node1 cqlsh -k ks1 -e "consistency; tracing on; insert into ks1.t1 (key, 
value) values ('mmullass', bigintAsBlob(1));"
Current consistency level is ONE.
Now Tracing is enabled

Tracing session: 01d74590-97cb-11e7-8ea7-c1bd4d549501

 activity | timestamp | source | source_elapsed
-++---+
 Execute CQL3 query | 2017-09-12 23:59:42.316000 | 127.0.0.1 | 0
 Parsing insert into ks1.t1 (key, value) values ('mmullass', bigintAsBlob(1)); [SharedPool-Worker-1] | 2017-09-12 23:59:42.319000 | 127.0.0.1 | 4323
 Preparing statement [SharedPool-Worker-1] | 2017-09-12 23:59:42.32 | 127.0.0.1 | 5250
 Determining replicas for mutation [SharedPool-Worker-1] | 2017-09-12 23:59:42.327000 | 127.0.0.1 | 11886
 Appending to commitlog [SharedPool-Worker-3] | 2017-09-12 23:59:42.327000 | 127.0.0.1 | 12195
 Adding to t1 memtable [SharedPool-Worker-3] | 2017-09-12 23:59:42.327000 | 127.0.0.1 | 12392
 Request complete | 2017-09-12 23:59:42.328680 | 127.0.0.1 | 12680


$ ccm node1 cqlsh -k ks1 -e "consistency; tracing on; select * from ks1.t1 

[jira] [Created] (CASSANDRA-13863) Speculative retry causes read repair even if read_repair_chance is 0.0.

2017-09-12 Thread Hiro Wakabayashi (JIRA)
Hiro Wakabayashi created CASSANDRA-13863:


 Summary: Speculative retry causes read repair even if 
read_repair_chance is 0.0.
 Key: CASSANDRA-13863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13863
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hiro Wakabayashi


{{read_repair_chance = 0.0}} and {{dclocal_read_repair_chance = 0.0}} should 
cause no read repair, but read repair happens with speculative retry. I think 
{{read_repair_chance = 0.0}} and {{dclocal_read_repair_chance = 0.0}} should 
stop read repair completely because the user wants to stop read repair in some 
cases.

{panel:title=Case 1: TWCS users}
The 
[documentation|http://cassandra.apache.org/doc/latest/operating/compaction.html?highlight=read_repair_chance]
 states how to disable read repair.
{quote}While TWCS tries to minimize the impact of comingled data, users should 
attempt to avoid this behavior. Specifically, users should avoid queries that 
explicitly set the timestamp via CQL USING TIMESTAMP. Additionally, users 
should run frequent repairs (which streams data in such a way that it does not 
become comingled), and disable background read repair by setting the table’s 
read_repair_chance and dclocal_read_repair_chance to 0.
{quote}
{panel}
{panel:title=Case 2. Strict SLA for read latency}
At peak times, read latency is key for us, but read repair causes higher 
latency than no read repair. We can use anti-entropy repair during off-peak 
times for consistency.
{panel}
 
Here is my procedure to reproduce the problem.

h3. 1. Create a cluster and set {{hinted_handoff_enabled}} to false.
{noformat}
$ ccm create -v 3.0.14 -n 3 cluster_3.0.14
$ for h in $(seq 1 3) ; do perl -pi -e 's/hinted_handoff_enabled: 
true/hinted_handoff_enabled: false/' 
~/.ccm/cluster_3.0.14/node$h/conf/cassandra.yaml ; done
$ for h in $(seq 1 3) ; do grep "hinted_handoff_enabled:" 
~/.ccm/cluster_3.0.14/node$h/conf/cassandra.yaml ; done
hinted_handoff_enabled: false
hinted_handoff_enabled: false
hinted_handoff_enabled: false
$ ccm start{noformat}
h3. 2. Create a keyspace and a table.
{noformat}
$ ccm node1 cqlsh
DROP KEYSPACE IF EXISTS ks1;
CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '3'}  AND durable_writes = true;
CREATE TABLE ks1.t1 (
key text PRIMARY KEY,
value blob
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = 'ALWAYS';
QUIT;
{noformat}
h3. 3. Stop node2 and node3. Insert a row.
{noformat}
$ ccm node3 stop && ccm node2 stop && ccm status
Cluster: 'cluster_3.0.14'
--
node1: UP
node3: DOWN
node2: DOWN

$ ccm node1 cqlsh -k ks1 -e "consistency; tracing on; insert into ks1.t1 (key, 
value) values ('mmullass', bigintAsBlob(1));"
Current consistency level is ONE.
Now Tracing is enabled

Tracing session: 01d74590-97cb-11e7-8ea7-c1bd4d549501

 activity | timestamp | source | source_elapsed
-++---+
 Execute CQL3 query | 2017-09-12 23:59:42.316000 | 127.0.0.1 | 0
 Parsing insert into ks1.t1 (key, value) values ('mmullass', bigintAsBlob(1)); [SharedPool-Worker-1] | 2017-09-12 23:59:42.319000 | 127.0.0.1 | 4323
 Preparing statement [SharedPool-Worker-1] | 2017-09-12 23:59:42.32 | 127.0.0.1 | 5250
 Determining replicas for mutation [SharedPool-Worker-1] | 2017-09-12 23:59:42.327000 | 127.0.0.1 | 11886
 Appending to commitlog [SharedPool-Worker-3] | 2017-09-12 23:59:42.327000 | 127.0.0.1 | 12195
 Adding to t1 memtable [SharedPool-Worker-3] | 2017-09-12 23:59:42.327000 | 127.0.0.1 | 12392
 

[jira] [Updated] (CASSANDRA-12014) IndexSummary > 2G causes an assertion error

2017-09-12 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12014:
-
Status: Patch Available  (was: In Progress)

> IndexSummary > 2G causes an assertion error
> ---
>
> Key: CASSANDRA-12014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> {noformat}
> ERROR [CompactionExecutor:1546280] 2016-06-01 13:21:00,444  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:1546280,1,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.maybeAddEntry(IndexSummaryBuilder.java:171)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.append(SSTableWriter.java:634)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.afterAppend(SSTableWriter.java:179)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:205) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_51]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> {noformat}
> I believe this can be fixed by raising the min_index_interval, but we should 
> have a better method of coping with this than throwing the AE.






[jira] [Commented] (CASSANDRA-12014) IndexSummary > 2G causes an assertion error

2017-09-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164093#comment-16164093
 ] 

Stefania commented on CASSANDRA-12014:
--

Testing this 
[commit|https://github.com/stef1927/cassandra/commit/980b529cd8c1ecb44fb4c0ba88c4d023e302fda9]
 [here|https://circleci.com/gh/stef1927/cassandra/2]. It should print out 
{{Runtime.getRuntime().maxMemory()}} but I don't think it'll work.

The problem is that on my laptop or on a Circle container, 
{{Runtime.getRuntime().maxMemory()}} only returns about 900 MB when running the 
test from ant. This comes from {{-Xmx1024m}}. The test passes (both locally 
and on the container) because the memory is off-heap. 
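As an aside, the heap ceiling the test sees can be checked directly with a trivial probe (this is just an illustration, not part of the patch):

```java
// Prints the JVM's max heap. With -Xmx1024m this typically reports somewhat
// under 1024 MB, and it says nothing about off-heap (direct) memory, which is
// what the test actually allocates.
public class MaxMemoryProbe
{
    public static void main(String[] args)
    {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Runtime.maxMemory() = " + maxMb + " MB");
    }
}
```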

So, if indeed this test causes the builds to fail when run in parallel, we need 
a different way to work out that we are on Circle. One suggestion would be to 
look at the environment variable {{CIRCLECI=true}} as done in this 
[commit|https://github.com/stef1927/cassandra/commit/77f9101117038f144d553b94980e8992de2d8d66].
 I tested it manually on Circle by ssh-ing into the build above and the test is 
skipped. I've also queued a new 
[build|https://circleci.com/gh/stef1927/cassandra/3].
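The {{CIRCLECI=true}} check can be sketched as follows (class and method names are hypothetical, not taken from the commit):

```java
// Hypothetical helper sketching the CIRCLECI=true guard described above;
// names are illustrative, not from the actual patch.
public class CircleCiGuard
{
    // CircleCI containers export CIRCLECI=true; treat anything else as "not CI".
    static boolean shouldSkip(String circleCiEnvValue)
    {
        return "true".equals(circleCiEnvValue);
    }

    public static void main(String[] args)
    {
        if (shouldSkip(System.getenv("CIRCLECI")))
            System.out.println("Skipping memory-hungry test on CircleCI");
        else
            System.out.println("Running test");
    }
}
```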

[~krummas] WDYT?



> IndexSummary > 2G causes an assertion error
> ---
>
> Key: CASSANDRA-12014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> {noformat}
> ERROR [CompactionExecutor:1546280] 2016-06-01 13:21:00,444  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:1546280,1,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.maybeAddEntry(IndexSummaryBuilder.java:171)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.append(SSTableWriter.java:634)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.afterAppend(SSTableWriter.java:179)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:205) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_51]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> {noformat}
> I believe this can be fixed by raising the min_index_interval, but we should 
> have a better method of coping with this than throwing the AE.






[jira] [Updated] (CASSANDRA-13818) Add support for --hosts, --force, and subrange repair to incremental repair

2017-09-12 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13818:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Got a clean utest run locally; the dtest failures were present on trunk as well.
Committed to trunk as {{3cec208c40b85e1be0ff8c68fc9d9017945a1ed8}}.

> Add support for --hosts, --force, and subrange repair to incremental repair
> ---
>
> Key: CASSANDRA-13818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13818
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> It should be possible to run incremental repair with nodes down; we just 
> shouldn't promote the data to repaired afterwards






cassandra git commit: Add incremental repair support for --hosts, --force, and subrange repair

2017-09-12 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk c6cd82462 -> 3cec208c4


Add incremental repair support for --hosts, --force, and subrange repair

Patch by Blake Eggleston; reviewed by Marcus Eriksson for CASSANDRA-13818


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3cec208c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3cec208c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3cec208c

Branch: refs/heads/trunk
Commit: 3cec208c40b85e1be0ff8c68fc9d9017945a1ed8
Parents: c6cd824
Author: Blake Eggleston 
Authored: Mon Aug 28 10:33:34 2017 -0700
Committer: Blake Eggleston 
Committed: Tue Sep 12 15:51:34 2017 -0700

--
 CHANGES.txt |   1 +
 .../db/compaction/CompactionManager.java|   4 +-
 .../org/apache/cassandra/repair/RepairJob.java  |  10 +-
 .../repair/RepairMessageVerbHandler.java|   6 +-
 .../apache/cassandra/repair/RepairRunnable.java | 161 ++-
 .../apache/cassandra/repair/RepairSession.java  |  12 +-
 .../org/apache/cassandra/repair/Validator.java  |  10 +-
 .../repair/consistent/ConsistentSession.java|   3 +-
 .../cassandra/repair/messages/RepairOption.java |  16 +-
 .../cassandra/service/ActiveRepairService.java  |  33 ++--
 ...pactionStrategyManagerPendingRepairTest.java |   2 +-
 .../cassandra/repair/AbstractRepairTest.java|   2 +
 .../cassandra/repair/RepairRunnableTest.java|  65 
 .../repair/consistent/LocalSessionTest.java |   1 -
 .../repair/messages/RepairOptionTest.java   |  13 --
 .../service/ActiveRepairServiceTest.java|  55 +++
 16 files changed, 289 insertions(+), 105 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cec208c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1f03ec5..55bbfa8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add incremental repair support for --hosts, --force, and subrange repair 
(CASSANDRA-13818)
  * Rework CompactionStrategyManager.getScanners synchronization 
(CASSANDRA-13786)
  * Add additional unit tests for batch behavior, TTLs, Timestamps 
(CASSANDRA-13846)
  * Add keyspace and table name in schema validation exception (CASSANDRA-13845)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cec208c/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 5619da7..06fbef2 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1330,7 +1330,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 }
 else
 {
-if (!validator.isConsistent)
+if (!validator.isIncremental)
 {
 // flush first so everyone is validating data that is as 
similar as possible
 
StorageService.instance.forceKeyspaceFlush(cfs.keyspace.getName(), cfs.name);
@@ -1447,7 +1447,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 predicate = prs.getPreviewPredicate();
 
 }
-else if (validator.isConsistent)
+else if (validator.isIncremental)
 {
 predicate = s -> 
validator.desc.parentSessionId.equals(s.getSSTableMetadata().pendingRepair);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cec208c/src/java/org/apache/cassandra/repair/RepairJob.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairJob.java 
b/src/java/org/apache/cassandra/repair/RepairJob.java
index 0615681..4bc3496 100644
--- a/src/java/org/apache/cassandra/repair/RepairJob.java
+++ b/src/java/org/apache/cassandra/repair/RepairJob.java
@@ -43,7 +43,7 @@ public class RepairJob extends AbstractFuture 
implements Runnable
 private final RepairJobDesc desc;
 private final RepairParallelism parallelismDegree;
 private final ListeningExecutorService taskExecutor;
-private final boolean isConsistent;
+private final boolean isIncremental;
 private final PreviewKind previewKind;
 
 /**
@@ -52,13 +52,13 @@ public class RepairJob extends AbstractFuture 
implements Runnable
  * @param session RepairSession that this RepairJob belongs
  * @param columnFamily name of the ColumnFamily to repair
  */
-public RepairJob(RepairSession session, 

[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13862:
---
Description: 
Currently Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra even if the request is to be served locally; 
we can enhance this and serve local requests without involving MessagingService. 
Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra, which skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a Cassandra 
lightweight transaction.

{quote}
Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 | A.B.C.D | 15045
…
REQUEST_RESPONSE message received from /A.B.C.D [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 | A.B.C.D | 20270
…
Processing response from /A.B.C.D [SharedPool-Worker-4] | 2017-09-11 21:55:18.976000 | A.B.C.D | 20372
{quote}


Same thing applies for {{Propose stage}} as well.



  was:
Currently Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra even if the request is to be served locally; 
we can enhance this and serve local requests without involving MessagingService. 
Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra, which skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a Cassandra 
lightweight transaction.

Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 | A.B.C.D | 15045
…
REQUEST_RESPONSE message received from /A.B.C.D [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 | A.B.C.D | 20270
…
Processing response from /A.B.C.D [SharedPool-Worker-4] | 2017-09-11 21:55:18.976000 | A.B.C.D | 20372



Same thing applies for {{Propose stage}} as well.




> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Currently Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra, which skip the MessagingService stage for local requests.
> This is what it currently looks like with tracing on when we run a Cassandra 
> lightweight transaction:
> {quote}
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> {quote}
> Same thing applies for {{Propose stage}} as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: 

[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13862:
---
Fix Version/s: (was: 3.0.15)
   (was: 4.0)
   4.x
   3.11.x
   3.0.x

> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra even if the request is to be served 
> locally. We can enhance this so that local requests are served without 
> involving the MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra, which skip the MessagingService stage for local requests.
> This is what it currently looks like with tracing on when we run a Cassandra 
> lightweight transaction:
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> Same thing applies for {{Propose stage}} as well.






[jira] [Commented] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163791#comment-16163791
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-13862:
---

Hi [~aweisberg]

I see you have done a similar thing for Paxos's commit phase; could you please 
review my code change?

Jaydeep

> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15, 4.0
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra even if the request is to be served 
> locally. We can enhance this so that local requests are served without 
> involving the MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra, which skip the MessagingService stage for local requests.
> This is what it currently looks like with tracing on when we run a Cassandra 
> lightweight transaction:
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> Same thing applies for {{Propose stage}} as well.






[jira] [Assigned] (CASSANDRA-12953) Index name uniqueness validation in CFMetaData is not entirely correct

2017-09-12 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-12953:
--

Assignee: (was: Ariel Weisberg)

> Index name uniqueness validation in CFMetaData is not entirely correct
> --
>
> Key: CASSANDRA-12953
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12953
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Priority: Minor
> Fix For: 3.0.x, 3.11.x
>
>
> The check in {{CFMetaData.validate()}} relies on external global state 
> ({{Schema.instance}}) to fetch all index names in the keyspace. However, in 
> many cases the validation will be performed without all table instances 
> registered in {{Schema.instance}} yet. The check should live in 
> {{KeyspaceMetadata}} instead, where no such access is required.
> Things broken by it right now: multiple tests, and Thrift's 
> {{system_add_keyspace}}, at least.
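The suggested keyspace-local check could look roughly like this (a sketch with stand-in data structures; the real {{KeyspaceMetadata}} fields differ): uniqueness is validated against only the tables collected in the keyspace's own metadata, with no lookup into global schema state.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of a keyspace-scoped index-name uniqueness check (stand-in data
// structures, not the real KeyspaceMetadata API). The point of the ticket:
// everything needed for the check is already local to the keyspace, so no
// global Schema.instance lookup is required.
public class IndexNameCheck
{
    // indexesByTable: table name -> index names declared on that table
    static List<String> duplicateIndexNames(Map<String, List<String>> indexesByTable)
    {
        Set<String> seen = new HashSet<>();
        List<String> duplicates = new ArrayList<>();
        for (List<String> names : indexesByTable.values())
            for (String name : names)
                if (!seen.add(name))      // add() returns false on a repeat
                    duplicates.add(name);
        return duplicates;
    }
}
```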






[jira] [Resolved] (CASSANDRA-13555) Thread leak during repair

2017-09-12 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston resolved CASSANDRA-13555.
-
Resolution: Duplicate

Closing as a duplicate of CASSANDRA-13797, which removes the code:

{code}
// Wait for validation to complete
Futures.getUnchecked(validations);
{code}

Blocking on the validation futures caused some other problems and doesn't seem 
to serve any purpose (validations are throttled by the validation executor). As 
far as I can tell, it just wasn't removed when repair was made async.
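The shape of the fix can be illustrated with JDK futures (Cassandra itself uses Guava's {{ListenableFuture}}, so this is only a sketch of the idea): the job's result is set by a continuation on the validation futures instead of by a thread that blocks on them.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of the non-blocking shape (JDK futures stand in for the Guava
// futures Cassandra uses): the repair job's completion is chained off the
// validation futures instead of a thread parking on them, so no executor
// thread is tied up waiting.
public class AsyncRepairSketch
{
    static CompletableFuture<Void> runJob(List<CompletableFuture<String>> validations)
    {
        return CompletableFuture
               .allOf(validations.toArray(new CompletableFuture[0]))
               .thenRun(() -> { /* kick off sync tasks here */ });
    }

    public static void main(String[] args)
    {
        CompletableFuture<String> v1 = new CompletableFuture<>();
        CompletableFuture<String> v2 = new CompletableFuture<>();
        CompletableFuture<Void> job = runJob(List.of(v1, v2));
        System.out.println(job.isDone()); // false: no thread is blocked waiting
        v1.complete("tree-1");
        v2.complete("tree-2");
        System.out.println(job.isDone()); // true: continuation ran on completion
    }
}
```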

> Thread leak during repair
> -
>
> Key: CASSANDRA-13555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13555
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
>
> The symptom is similar to what happened in [CASSANDRA-13204 | 
> https://issues.apache.org/jira/browse/CASSANDRA-13204]: a thread waits 
> forever doing nothing. This one happened during "nodetool repair -pr 
> -seq -j 1" in production, but I can easily simulate the problem with just 
> "nodetool repair" in a dev environment (CCM). I'm trying to explain what 
> happened with the 3.0.13 code base.
> 1. One node is down while doing repair. This is the error I saw in production:
> {code}
> ERROR [GossipTasks:1] 2017-05-19 15:00:10,545 RepairSession.java:334 - 
> [repair #bc9a3cd1-3ca3-11e7-a44a-e30923ac9336] session completed with the 
> following error
> java.io.IOException: Endpoint /10.185.43.15 died
> at 
> org.apache.cassandra.repair.RepairSession.convict(RepairSession.java:333) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:306) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:766) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper.access$800(Gossiper.java:66) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:181) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_121]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {code}
> 2. At this moment the repair coordinator hasn't received the response 
> (MerkleTrees) from the node that was marked down. This means RepairJob#run 
> will never return, because it waits for validations to finish:
> {code}
> // Wait for validation to complete
> Futures.getUnchecked(validations);
> {code}
> Note that all RepairJob's (as Runnable) run on a shared executor created 
> in RepairRunnable#runMayThrow, while all snapshot, validation and sync 
> tasks happen on a per-RepairSession "taskExecutor". RepairJob#run will only 
> return when it receives MerkleTrees (or null) from all endpoints for a given 
> column family and token range.
> As evidence of the thread leak, below is from the thread dump. I can also get 
> the same stack trace when simulating the same issue in dev environment.
> {code}
> "Repair#129:56" #406373 daemon prio=5 os_prio=0 tid=0x7fc495028400 
> nid=0x1a77d waiting on condition [0x7fc02153]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0002d7c00198> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> 

[jira] [Updated] (CASSANDRA-13797) RepairJob blocks on syncTasks

2017-09-12 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13797:

Fix Version/s: 3.0.15

> RepairJob blocks on syncTasks
> -
>
> Key: CASSANDRA-13797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13797
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 3.0.15, 4.0
>
>
> The thread running {{RepairJob}} blocks while it waits for the validations it 
> starts to complete ([see 
> here|https://github.com/bdeggleston/cassandra/blob/9fdec0a82851f5c35cd21d02e8c4da8fc685edb2/src/java/org/apache/cassandra/repair/RepairJob.java#L185]).
>  However, the downstream callbacks (i.e. the post-repair cleanup stuff) aren't 
> waiting for {{RepairJob#run}} to return; they're waiting for a result to be 
> set on the RepairJob future, which happens after the sync tasks have 
> completed. This post-repair cleanup stuff also immediately shuts down the 
> executor {{RepairJob#run}} is running in. So in noop repair sessions, where 
> there's nothing to stream, I'm seeing the callbacks sometimes fire before 
> {{RepairJob#run}} wakes up, causing an {{InterruptedException}} to be thrown.
> I'm pretty sure this can just be removed, but I'd like a second opinion. This 
> appears to just be a holdover from before repair coordination became async. I 
> thought it might be doing some throttling by blocking, but each repair 
> session gets its own executor, and validation is throttled by the fixed 
> size executors doing the actual work of validation, so I don't think we need 
> to keep this around.






[jira] [Updated] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-09-12 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-13717:

   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   4.0
   3.11.1
   3.0.15
   Status: Resolved  (was: Patch Available)

Committed as sha {{a08a816a6a3497046ba75a38d76d5095347dfe95}}.

Thanks!

> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
>Priority: Critical
> Fix For: 3.0.15, 3.11.1, 4.0
>
> Attachments: example_queries.cql, fix_13717
>
>
> When a column family is created and a Tuple is used as a clustering column 
> with default clustering order DESC, the INSERT statement fails.
> For example, the following table makes the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}
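The root cause is easy to model with stand-in types (the class names mirror Cassandra's but are simplified here, so treat this as a sketch): a DESC clustering column's type is wrapped in a ReversedType, so any "is this a tuple?" check must first look through the wrapper, which is what the patch's checkIfTupleType/getTupleType helpers do.

```java
// Sketch of the fix's idea with stand-in types (simplified, not Cassandra's
// real type hierarchy): a clustering column declared DESC gets its type
// wrapped in ReversedType, so a plain instanceof TupleType check fails and
// the wrapper must be unwrapped first.
public class ReversedTupleSketch
{
    static class AbstractType {}
    static class TupleType extends AbstractType {}
    static class ReversedType extends AbstractType
    {
        final AbstractType baseType;
        ReversedType(AbstractType base) { this.baseType = base; }
    }

    static boolean checkIfTupleType(AbstractType t)
    {
        return t instanceof TupleType
            || (t instanceof ReversedType && ((ReversedType) t).baseType instanceof TupleType);
    }

    static TupleType getTupleType(AbstractType t)
    {
        // Look through the DESC wrapper before casting.
        return t instanceof ReversedType
             ? (TupleType) ((ReversedType) t).baseType
             : (TupleType) t;
    }

    public static void main(String[] args)
    {
        AbstractType plain = new TupleType();
        AbstractType reversed = new ReversedType(plain);
        System.out.println(checkIfTupleType(plain));    // true
        System.out.println(checkIfTupleType(reversed)); // true: unwrapped
        System.out.println(getTupleType(reversed) == plain); // true
    }
}
```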






[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-12 Thread jasobrown
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c6cd8246
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c6cd8246
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c6cd8246

Branch: refs/heads/trunk
Commit: c6cd8246280acde5e2244d8960b2d5c17353424f
Parents: 7d4d1a3 cb2a1c8
Author: Jason Brown 
Authored: Tue Sep 12 14:14:54 2017 -0700
Committer: Jason Brown 
Committed: Tue Sep 12 14:18:00 2017 -0700

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Tuples.java  | 28 ++--
 .../cql3/validation/entities/TupleTypeTest.java | 17 +++-
 3 files changed, 37 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6cd8246/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6cd8246/src/java/org/apache/cassandra/cql3/Tuples.java
--
diff --cc src/java/org/apache/cassandra/cql3/Tuples.java
index bae756a,01f3466..317e192
--- a/src/java/org/apache/cassandra/cql3/Tuples.java
+++ b/src/java/org/apache/cassandra/cql3/Tuples.java
@@@ -68,12 -65,7 +68,12 @@@ public class Tuple
  
  public Term prepare(String keyspace, ColumnSpecification receiver) 
throws InvalidRequestException
  {
 -validateAssignableTo(keyspace, receiver);
 +// The parser cannot differentiate between a tuple with one 
element and a term between parenthesis.
 +// By consequence, we need to wait until we know the target type 
to determine which one it is.
- if (elements.size() == 1 && !(receiver.type instanceof TupleType))
++if (elements.size() == 1 && !checkIfTupleType(receiver.type))
 +return elements.get(0).prepare(keyspace, receiver);
 +
 +validateTupleAssignableTo(receiver, elements);
  
  List values = new ArrayList<>(elements.size());
  boolean allTerminal = true;
@@@ -110,14 -102,38 +110,14 @@@
  return allTerminal ? value.bind(QueryOptions.DEFAULT) : value;
  }
  
 -private void validateAssignableTo(String keyspace, 
ColumnSpecification receiver) throws InvalidRequestException
 -{
 -if (!checkIfTupleType(receiver.type))
 -throw new InvalidRequestException(String.format("Invalid 
tuple type literal for %s of type %s", receiver.name, 
receiver.type.asCQL3Type()));
 -
 -TupleType tt = getTupleType(receiver.type);
 -for (int i = 0; i < elements.size(); i++)
 -{
 -if (i >= tt.size())
 -{
 -throw new InvalidRequestException(String.format("Invalid 
tuple literal for %s: too many elements. Type %s expects %d but got %d",
 -receiver.name, tt.asCQL3Type(), tt.size(), 
elements.size()));
 -}
 -
 -Term.Raw value = elements.get(i);
 -ColumnSpecification spec = componentSpecOf(receiver, i);
 -if (!value.testAssignment(keyspace, spec).isAssignable())
 -throw new InvalidRequestException(String.format("Invalid 
tuple literal for %s: component %d is not of type %s", receiver.name, i, 
spec.type.asCQL3Type()));
 -}
 -}
 -
  public AssignmentTestable.TestResult testAssignment(String keyspace, 
ColumnSpecification receiver)
  {
 -try
 -{
 -validateAssignableTo(keyspace, receiver);
 -return AssignmentTestable.TestResult.WEAKLY_ASSIGNABLE;
 -}
 -catch (InvalidRequestException e)
 -{
 -return AssignmentTestable.TestResult.NOT_ASSIGNABLE;
 -}
 +// The parser cannot differentiate between a tuple with one 
element and a term between parenthesis.
 +// By consequence, we need to wait until we know the target type 
to determine which one it is.
- if (elements.size() == 1 && !(receiver.type instanceof TupleType))
++if (elements.size() == 1 && !checkIfTupleType(receiver.type))
 +return elements.get(0).testAssignment(keyspace, receiver);
 +
 +return testTupleAssignment(receiver, elements);
  }
  
  @Override
@@@ -420,100 -436,29 +420,112 @@@
  }
  }
  
 -public static String tupleToString(List items)
 +/**
 + * Create a String representation of the tuple containing 
the specified elements.
 + *
 + * @param elements the tuple elements
 + * @return a 

[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-12 Thread jasobrown
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb2a1c8f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb2a1c8f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb2a1c8f

Branch: refs/heads/trunk
Commit: cb2a1c8f4209ffe9aea8e40e7f0e45dc70613645
Parents: c05d98a a08a816
Author: Jason Brown 
Authored: Tue Sep 12 14:11:06 2017 -0700
Committer: Jason Brown 
Committed: Tue Sep 12 14:14:28 2017 -0700

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Tuples.java  | 24 +++-
 .../cql3/validation/entities/TupleTypeTest.java | 14 
 3 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb2a1c8f/CHANGES.txt
--
diff --cc CHANGES.txt
index 2c48ab2,3d3903e..ec9d126
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,5 +1,15 @@@
 -3.0.15
 +3.11.1
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * INSERT statement fails when Tuple type is used as clustering column with 
default DESC order (CASSANDRA-13717)
   * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
   * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
   * Range deletes in a CAS batch are ignored (CASSANDRA-13655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb2a1c8f/src/java/org/apache/cassandra/cql3/Tuples.java
--





[3/6] cassandra git commit: INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-09-12 Thread jasobrown
INSERT statement fails when Tuple type is used as clustering column with 
default DESC order

patch by Stavros Kontopoulos, reviewed by jasobrown for CASSANDRA-13717


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a08a816a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a08a816a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a08a816a

Branch: refs/heads/trunk
Commit: a08a816a6a3497046ba75a38d76d5095347dfe95
Parents: a586f6c
Author: Stavros Kontopoulos 
Authored: Thu Aug 10 04:23:26 2017 +0300
Committer: Jason Brown 
Committed: Tue Sep 12 14:10:34 2017 -0700

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Tuples.java  | 24 +++-
 .../cql3/validation/entities/TupleTypeTest.java | 14 
 3 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a08a816a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6053117..3d3903e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * INSERT statement fails when Tuple type is used as clustering column with 
default DESC order (CASSANDRA-13717)
  * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a08a816a/src/java/org/apache/cassandra/cql3/Tuples.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Tuples.java 
b/src/java/org/apache/cassandra/cql3/Tuples.java
index ee08efe..c7564d3 100644
--- a/src/java/org/apache/cassandra/cql3/Tuples.java
+++ b/src/java/org/apache/cassandra/cql3/Tuples.java
@@ -47,7 +47,7 @@ public class Tuples
 return new ColumnSpecification(column.ksName,
column.cfName,
new 
ColumnIdentifier(String.format("%s[%d]", column.name, component), true),
-   
((TupleType)column.type).type(component));
+   
(getTupleType(column.type)).type(component));
 }
 
 /**
@@ -77,7 +77,7 @@ public class Tuples
 
 values.add(value);
 }
-DelayedValue value = new DelayedValue((TupleType)receiver.type, 
values);
+DelayedValue value = new DelayedValue(getTupleType(receiver.type), 
values);
 return allTerminal ? value.bind(QueryOptions.DEFAULT) : value;
 }
 
@@ -104,10 +104,10 @@ public class Tuples
 
 private void validateAssignableTo(String keyspace, ColumnSpecification 
receiver) throws InvalidRequestException
 {
-if (!(receiver.type instanceof TupleType))
+if (!checkIfTupleType(receiver.type))
 throw new InvalidRequestException(String.format("Invalid tuple 
type literal for %s of type %s", receiver.name, receiver.type.asCQL3Type()));
 
-TupleType tt = (TupleType)receiver.type;
+TupleType tt = getTupleType(receiver.type);
 for (int i = 0; i < elements.size(); i++)
 {
 if (i >= tt.size())
@@ -256,7 +256,7 @@ public class Tuples
 List l = 
type.getSerializer().deserializeForNativeProtocol(value, 
options.getProtocolVersion());
 
 assert type.getElementsType() instanceof TupleType;
-TupleType tupleType = (TupleType) type.getElementsType();
+TupleType tupleType = 
Tuples.getTupleType(type.getElementsType());
 
 // type.split(bytes)
 List elements = new ArrayList<>(l.size());
@@ -375,7 +375,7 @@ public class Tuples
 ByteBuffer value = options.getValues().get(bindIndex);
 if (value == ByteBufferUtil.UNSET_BYTE_BUFFER)
 throw new InvalidRequestException(String.format("Invalid unset 
value for tuple %s", receiver.name));
-return value == null ? null : Value.fromSerialized(value, 
(TupleType)receiver.type);
+return value == null ? null : Value.fromSerialized(value, 
getTupleType(receiver.type));
 }
 }
 
@@ -412,4 +412,16 @@ public class Tuples
 sb.append(')');
 return sb.toString();
 }
+
+public static boolean checkIfTupleType(AbstractType tuple)
+{
+return (tuple instanceof TupleType) ||
+   (tuple instanceof ReversedType && ((ReversedType) 

[2/6] cassandra git commit: INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-09-12 Thread jasobrown
INSERT statement fails when Tuple type is used as clustering column with 
default DESC order

patch by Stavros Kontopoulos, reviewed by jasobrown for CASSANDRA-13717


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a08a816a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a08a816a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a08a816a

Branch: refs/heads/cassandra-3.11
Commit: a08a816a6a3497046ba75a38d76d5095347dfe95
Parents: a586f6c
Author: Stavros Kontopoulos 
Authored: Thu Aug 10 04:23:26 2017 +0300
Committer: Jason Brown 
Committed: Tue Sep 12 14:10:34 2017 -0700

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Tuples.java  | 24 +++-
 .../cql3/validation/entities/TupleTypeTest.java | 14 
 3 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a08a816a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6053117..3d3903e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * INSERT statement fails when Tuple type is used as clustering column with 
default DESC order (CASSANDRA-13717)
  * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a08a816a/src/java/org/apache/cassandra/cql3/Tuples.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Tuples.java 
b/src/java/org/apache/cassandra/cql3/Tuples.java
index ee08efe..c7564d3 100644
--- a/src/java/org/apache/cassandra/cql3/Tuples.java
+++ b/src/java/org/apache/cassandra/cql3/Tuples.java
@@ -47,7 +47,7 @@ public class Tuples
 return new ColumnSpecification(column.ksName,
column.cfName,
new 
ColumnIdentifier(String.format("%s[%d]", column.name, component), true),
-   
((TupleType)column.type).type(component));
+   
(getTupleType(column.type)).type(component));
 }
 
 /**
@@ -77,7 +77,7 @@ public class Tuples
 
 values.add(value);
 }
-DelayedValue value = new DelayedValue((TupleType)receiver.type, 
values);
+DelayedValue value = new DelayedValue(getTupleType(receiver.type), 
values);
 return allTerminal ? value.bind(QueryOptions.DEFAULT) : value;
 }
 
@@ -104,10 +104,10 @@ public class Tuples
 
 private void validateAssignableTo(String keyspace, ColumnSpecification 
receiver) throws InvalidRequestException
 {
-if (!(receiver.type instanceof TupleType))
+if (!checkIfTupleType(receiver.type))
 throw new InvalidRequestException(String.format("Invalid tuple 
type literal for %s of type %s", receiver.name, receiver.type.asCQL3Type()));
 
-TupleType tt = (TupleType)receiver.type;
+TupleType tt = getTupleType(receiver.type);
 for (int i = 0; i < elements.size(); i++)
 {
 if (i >= tt.size())
@@ -256,7 +256,7 @@ public class Tuples
 List l = 
type.getSerializer().deserializeForNativeProtocol(value, 
options.getProtocolVersion());
 
 assert type.getElementsType() instanceof TupleType;
-TupleType tupleType = (TupleType) type.getElementsType();
+TupleType tupleType = 
Tuples.getTupleType(type.getElementsType());
 
 // type.split(bytes)
 List elements = new ArrayList<>(l.size());
@@ -375,7 +375,7 @@ public class Tuples
 ByteBuffer value = options.getValues().get(bindIndex);
 if (value == ByteBufferUtil.UNSET_BYTE_BUFFER)
 throw new InvalidRequestException(String.format("Invalid unset 
value for tuple %s", receiver.name));
-return value == null ? null : Value.fromSerialized(value, 
(TupleType)receiver.type);
+return value == null ? null : Value.fromSerialized(value, 
getTupleType(receiver.type));
 }
 }
 
@@ -412,4 +412,16 @@ public class Tuples
 sb.append(')');
 return sb.toString();
 }
+
+public static boolean checkIfTupleType(AbstractType tuple)
+{
+return (tuple instanceof TupleType) ||
+   (tuple instanceof ReversedType && ((ReversedType) 

[1/6] cassandra git commit: INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-09-12 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 a586f6c88 -> a08a816a6
  refs/heads/cassandra-3.11 c05d98a30 -> cb2a1c8f4
  refs/heads/trunk 7d4d1a325 -> c6cd82462


INSERT statement fails when Tuple type is used as clustering column with 
default DESC order

patch by Stavros Kontopoulos, reviewed by jasobrown for CASSANDRA-13717


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a08a816a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a08a816a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a08a816a

Branch: refs/heads/cassandra-3.0
Commit: a08a816a6a3497046ba75a38d76d5095347dfe95
Parents: a586f6c
Author: Stavros Kontopoulos 
Authored: Thu Aug 10 04:23:26 2017 +0300
Committer: Jason Brown 
Committed: Tue Sep 12 14:10:34 2017 -0700

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Tuples.java  | 24 +++-
 .../cql3/validation/entities/TupleTypeTest.java | 14 
 3 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a08a816a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6053117..3d3903e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * INSERT statement fails when Tuple type is used as clustering column with 
default DESC order (CASSANDRA-13717)
  * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a08a816a/src/java/org/apache/cassandra/cql3/Tuples.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Tuples.java 
b/src/java/org/apache/cassandra/cql3/Tuples.java
index ee08efe..c7564d3 100644
--- a/src/java/org/apache/cassandra/cql3/Tuples.java
+++ b/src/java/org/apache/cassandra/cql3/Tuples.java
@@ -47,7 +47,7 @@ public class Tuples
 return new ColumnSpecification(column.ksName,
column.cfName,
new 
ColumnIdentifier(String.format("%s[%d]", column.name, component), true),
-   
((TupleType)column.type).type(component));
+   
(getTupleType(column.type)).type(component));
 }
 
 /**
@@ -77,7 +77,7 @@ public class Tuples
 
 values.add(value);
 }
-DelayedValue value = new DelayedValue((TupleType)receiver.type, 
values);
+DelayedValue value = new DelayedValue(getTupleType(receiver.type), 
values);
 return allTerminal ? value.bind(QueryOptions.DEFAULT) : value;
 }
 
@@ -104,10 +104,10 @@ public class Tuples
 
 private void validateAssignableTo(String keyspace, ColumnSpecification 
receiver) throws InvalidRequestException
 {
-if (!(receiver.type instanceof TupleType))
+if (!checkIfTupleType(receiver.type))
 throw new InvalidRequestException(String.format("Invalid tuple 
type literal for %s of type %s", receiver.name, receiver.type.asCQL3Type()));
 
-TupleType tt = (TupleType)receiver.type;
+TupleType tt = getTupleType(receiver.type);
 for (int i = 0; i < elements.size(); i++)
 {
 if (i >= tt.size())
@@ -256,7 +256,7 @@ public class Tuples
 List l = 
type.getSerializer().deserializeForNativeProtocol(value, 
options.getProtocolVersion());
 
 assert type.getElementsType() instanceof TupleType;
-TupleType tupleType = (TupleType) type.getElementsType();
+TupleType tupleType = 
Tuples.getTupleType(type.getElementsType());
 
 // type.split(bytes)
 List elements = new ArrayList<>(l.size());
@@ -375,7 +375,7 @@ public class Tuples
 ByteBuffer value = options.getValues().get(bindIndex);
 if (value == ByteBufferUtil.UNSET_BYTE_BUFFER)
 throw new InvalidRequestException(String.format("Invalid unset 
value for tuple %s", receiver.name));
-return value == null ? null : Value.fromSerialized(value, 
(TupleType)receiver.type);
+return value == null ? null : Value.fromSerialized(value, 
getTupleType(receiver.type));
 }
 }
 
@@ -412,4 +412,16 @@ public class Tuples
 sb.append(')');
 return sb.toString();
 }
+
+
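
The hunk above is cut off in this archive. Based on the call sites added earlier in the patch, the missing lines presumably add checkIfTupleType/getTupleType helpers that unwrap a DESC-ordered (ReversedType-wrapped) tuple before casting. A hypothetical, self-contained sketch of that idea, with stub classes standing in for Cassandra's type hierarchy (names and fields here are assumptions, not the committed code):

```java
// Stub types standing in for Cassandra's AbstractType hierarchy (assumed, simplified).
class AbstractType {}

class TupleType extends AbstractType {}

// A clustering column declared WITH CLUSTERING ORDER DESC wraps its type in a ReversedType.
class ReversedType extends AbstractType {
    final AbstractType baseType;
    ReversedType(AbstractType baseType) { this.baseType = baseType; }
}

public class TupleTypeHelpers {
    // True for a tuple type, whether or not it is wrapped for reversed (DESC) ordering.
    static boolean checkIfTupleType(AbstractType type) {
        return type instanceof TupleType
            || (type instanceof ReversedType
                && ((ReversedType) type).baseType instanceof TupleType);
    }

    // Unwrap the ReversedType wrapper (if any) before casting, instead of a bare (TupleType) cast.
    static TupleType getTupleType(AbstractType type) {
        return type instanceof ReversedType
             ? (TupleType) ((ReversedType) type).baseType
             : (TupleType) type;
    }

    public static void main(String[] args) {
        TupleType tuple = new TupleType();
        System.out.println(checkIfTupleType(new ReversedType(tuple)));      // true
        System.out.println(getTupleType(new ReversedType(tuple)) == tuple); // true
    }
}
```

A plain `(TupleType) column.type` cast fails exactly in the reported case, because the runtime type is ReversedType; routing every cast through a helper like the above is what makes the INSERT path tolerant of DESC clustering order.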

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-12 Thread jasobrown
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb2a1c8f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb2a1c8f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb2a1c8f

Branch: refs/heads/cassandra-3.11
Commit: cb2a1c8f4209ffe9aea8e40e7f0e45dc70613645
Parents: c05d98a a08a816
Author: Jason Brown 
Authored: Tue Sep 12 14:11:06 2017 -0700
Committer: Jason Brown 
Committed: Tue Sep 12 14:14:28 2017 -0700

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Tuples.java  | 24 +++-
 .../cql3/validation/entities/TupleTypeTest.java | 14 
 3 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb2a1c8f/CHANGES.txt
--
diff --cc CHANGES.txt
index 2c48ab2,3d3903e..ec9d126
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,5 +1,15 @@@
 -3.0.15
 +3.11.1
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * INSERT statement fails when Tuple type is used as clustering column with 
default DESC order (CASSANDRA-13717)
   * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
   * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
   * Range deletes in a CAS batch are ignored (CASSANDRA-13655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb2a1c8f/src/java/org/apache/cassandra/cql3/Tuples.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13632) Digest mismatch if row is empty

2017-09-12 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163650#comment-16163650
 ] 

Jason Brown commented on CASSANDRA-13632:
-

[~whangsf] Can you provide steps to reproduce? Also, what kind of improvement 
did you see when you applied your patch? Specific numbers would be great! 

wrt thrift, are you using CQL over thrift, or old-school thrift a la 
astyanax/hector/pycassa? I tried to dig up a thrift client but it was pretty 
painful and I stopped.



> Digest mismatch if row is empty
> ---
>
> Key: CASSANDRA-13632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13632
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Andrew Whang
>Assignee: Andrew Whang
> Fix For: 3.0.x
>
>
> This issue is similar to CASSANDRA-12090. Quorum read queries that include a 
> column selector (non-wildcard) result in digest mismatch when the row is 
> empty (key does not exist). It seems the data serialization path checks if 
> rowIterator.isEmpty() and if so ignores column names (by setting IS_EMPTY 
> flag). However, the digest serialization path does not perform this check and 
> includes column names. The digest comparison results in a mismatch. The 
> mismatch does not end up issuing a read repair mutation since the underlying 
> data is the same.
> The mismatch on the read path ends up doubling our p99 read latency. We 
> discovered this issue while testing a 2.2.5 to 3.0.13 upgrade.
> One thing to note is that we're using thrift, which ends up handling the 
> ColumnFilter differently than the CQL path. 
> As with CASSANDRA-12090, fixing the digest seems sensible.
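A minimal, self-contained sketch of the mechanism described above (illustrative only, not Cassandra's serialization code): if one path hashes the selected column names only when the row is non-empty (the IS_EMPTY behavior) while the other path always hashes them, the two digests diverge for an empty row even though the underlying data is identical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

public class DigestMismatchSketch {
    // Hash the selected column names, optionally skipping them when the row is empty.
    // skipNamesWhenEmpty=true models the data path's IS_EMPTY behavior;
    // skipNamesWhenEmpty=false models the digest path, which hashes names regardless.
    static byte[] digest(List<String> columns, boolean rowEmpty, boolean skipNamesWhenEmpty)
            throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        if (!(rowEmpty && skipNamesWhenEmpty))
            for (String c : columns)
                md.update(c.getBytes(StandardCharsets.UTF_8));
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        List<String> selected = List.of("c1", "c2");
        byte[] dataPath   = digest(selected, true, true);   // empty row: names skipped
        byte[] digestPath = digest(selected, true, false);  // empty row: names hashed anyway
        System.out.println(java.util.Arrays.equals(dataPath, digestPath)); // false -> mismatch
    }
}
```

Making both paths apply the same empty-row check (as CASSANDRA-12090 did for wildcard queries) is what "fixing the digest" amounts to here.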



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Reopened] (CASSANDRA-12014) IndexSummary > 2G causes an assertion error

2017-09-12 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reopened CASSANDRA-12014:


> IndexSummary > 2G causes an assertion error
> ---
>
> Key: CASSANDRA-12014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> {noformat}
> ERROR [CompactionExecutor:1546280] 2016-06-01 13:21:00,444  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:1546280,1,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.maybeAddEntry(IndexSummaryBuilder.java:171)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.append(SSTableWriter.java:634)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.afterAppend(SSTableWriter.java:179)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:205) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_51]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> {noformat}
> I believe this can be fixed by raising the min_index_interval, but we should 
> have a better method of coping with this than throwing the AE.






[jira] [Comment Edited] (CASSANDRA-12014) IndexSummary > 2G causes an assertion error

2017-09-12 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163632#comment-16163632
 ] 

Jeff Jirsa edited comment on CASSANDRA-12014 at 9/12/17 8:58 PM:
-

FWIW, I think this unit test breaks circleCI, where the free plan runs tests on 
4GB containers, and the new tests aren't ok with that. I realize there are 
"other" options besides circleCI, but it's the only free place where 
non-committers can easily run tests. 

Maybe we can alter it such that it skips the test if there's not enough RAM to 
run it?

Perhaps something like:

{code}
diff --git a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java 
b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
index acac719..662d36a 100644
--- a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
@@ -26,6 +26,7 @@ import java.util.*;
 import com.google.common.collect.Lists;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.Assume;

 import org.apache.cassandra.Util;
 import org.apache.cassandra.config.DatabaseDescriptor;
@@ -70,6 +71,7 @@ public class IndexSummaryTest
 @Test
 public void testIndexSummaryKeySizes() throws IOException
 {
+Assume.assumeTrue(Runtime.getRuntime().maxMemory() > 
Integer.MAX_VALUE);
 testIndexSummaryProperties(32, 100);
 testIndexSummaryProperties(64, 100);
 testIndexSummaryProperties(100, 100);
@@ -79,6 +81,7 @@ public class IndexSummaryTest

 private void testIndexSummaryProperties(int keySize, int numKeys) throws 
IOException
 {
+Assume.assumeTrue(Runtime.getRuntime().maxMemory() > 
Integer.MAX_VALUE);
 final int minIndexInterval = 1;
 final List keys = new ArrayList<>(numKeys);

@@ -114,6 +117,7 @@ public class IndexSummaryTest
 @Test
 public void tesLargeIndexSummary() throws IOException
 {
+Assume.assumeTrue(Runtime.getRuntime().maxMemory() > 
Integer.MAX_VALUE);
 final int numKeys = 100;
 final int keySize = 3000;
 final int minIndexInterval = 1;
@@ -143,8 +147,9 @@ public class IndexSummaryTest
  * the index summary should be downsampled automatically.
  */
 @Test
-public void tesLargeIndexSummaryWithExpectedSizeMatching() throws 
IOException
+public void testLargeIndexSummaryWithExpectedSizeMatching() throws 
IOException
 {
+Assume.assumeTrue(Runtime.getRuntime().maxMemory() > 
Integer.MAX_VALUE);
 final int numKeys = 100;
 final int keySize = 3000;
 final int minIndexInterval = 1;
{code}



was (Author: jjirsa):
FWIW, I think this unit test breaks circleCI, where the free plan runs tests on 
4GB containers, and the new tests aren't ok with that. I realize there are 
"other" options besides circleCI, but it's the only free place where 
non-committers can easily run tests. 

Maybe we can alter it such that it skips the test if there's not enough RAM to 
run it?


> IndexSummary > 2G causes an assertion error
> ---
>
> Key: CASSANDRA-12014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> {noformat}
> ERROR [CompactionExecutor:1546280] 2016-06-01 13:21:00,444  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:1546280,1,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.maybeAddEntry(IndexSummaryBuilder.java:171)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.append(SSTableWriter.java:634)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.afterAppend(SSTableWriter.java:179)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:205) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> 

[jira] [Commented] (CASSANDRA-12014) IndexSummary > 2G causes an assertion error

2017-09-12 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163632#comment-16163632
 ] 

Jeff Jirsa commented on CASSANDRA-12014:


FWIW, I think this unit test breaks circleCI, where the free plan runs tests on 
4GB containers, and the new tests aren't ok with that. I realize there are 
"other" options besides circleCI, but it's the only free place where 
non-committers can easily run tests. 

Maybe we can alter it such that it skips the test if there's not enough RAM to 
run it?
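
The skip condition could just compare the JVM's max heap against the roughly 2 GB the test needs to allocate. A standalone sketch of that gate (the threshold and naming here are assumptions, not a final patch):

```java
public class HeapGate {
    // The large-summary tests build an IndexSummary past the 2 GB boundary,
    // so require the configured max heap to exceed Integer.MAX_VALUE bytes
    // (~2 GB) before letting them run; otherwise the runner should skip them.
    static boolean enoughHeapForLargeSummary() {
        return Runtime.getRuntime().maxMemory() > Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(enoughHeapForLargeSummary()
            ? "running large-summary test"
            : "skipping: max heap <= 2GB");
    }
}
```

In JUnit this is exactly what an Assume.assumeTrue(enoughHeapForLargeSummary()) at the top of the test would do: a failed assumption marks the test as skipped rather than failed on small CI containers.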


> IndexSummary > 2G causes an assertion error
> ---
>
> Key: CASSANDRA-12014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> {noformat}
> ERROR [CompactionExecutor:1546280] 2016-06-01 13:21:00,444  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:1546280,1,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.maybeAddEntry(IndexSummaryBuilder.java:171)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.append(SSTableWriter.java:634)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.afterAppend(SSTableWriter.java:179)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:205) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_51]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> {noformat}
> I believe this can be fixed by raising the min_index_interval, but we should 
> have a better method of coping with this than throwing the AE.






[jira] [Assigned] (CASSANDRA-12953) Index name uniqueness validation in CFMetaData is not entirely correct

2017-09-12 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-12953:
--

Assignee: Ariel Weisberg

> Index name uniqueness validation in CFMetaData is not entirely correct
> --
>
> Key: CASSANDRA-12953
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12953
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 3.0.x, 3.11.x
>
>
> The check in {{CFMetaData.validate()}} relies on external global state 
> ({{Schema.instance}}) to fetch all index names in the keyspace. However, in 
> many cases the validation will be performed without all table instances 
> registered in {{Schema.instance}} yet. The check should live in 
> {{KeyspaceMetadata}} instead, where no such access is required.
> Things broken by it right now: multiple tests, and Thrift's 
> {{system_add_keyspace}}, at least.






[jira] [Updated] (CASSANDRA-13786) Validation compactions can cause orphan sstable warnings

2017-09-12 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13786:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Got a clean utest run locally; the dtest failures were all also failing on trunk.
Committed as {{7d4d1a32581ff40ed1049833631832054bcf2316}}

> Validation compactions can cause orphan sstable warnings
> 
>
> Key: CASSANDRA-13786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13786
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> I've seen LeveledCompactionStrategy occasionally logging: 
> {quote} from level 0 is not on corresponding level in the 
> leveled manifest. This is not a problem per se, but may indicate an orphaned 
> sstable due to a failed compaction not cleaned up properly."{quote} warnings 
> from a ValidationExecutor thread.
> What's happening here is that a compaction running concurrently with the 
> validation is promoting (or demoting) sstables as part of an incremental 
> repair, and an sstable has changed hands by the time the validation 
> compaction gets around to getting scanners for it. The sstable 
> isolation/synchronization done by validation compactions is a lot looser than 
> normal compactions, so seeing this happen isn't very surprising. Given that 
> it's harmless, and not unexpected, I think it would be best to not log these 
> during validation compactions.
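A tiny sketch of the proposal in the description (names assumed; this is not the LCS code): gate the orphan-sstable warning on the kind of operation requesting the scanners, so a validation's looser snapshot never triggers it.

```java
public class OrphanWarningSketch {
    enum OperationType { COMPACTION, VALIDATION }

    // Only warn for real compactions: validation compactions take a looser
    // snapshot of the leveled manifest, so an sstable "changing hands" while a
    // validation runs is expected and harmless.
    static String maybeWarn(OperationType op, String sstable, int level) {
        if (op == OperationType.VALIDATION)
            return null; // expected during validation; stay quiet
        return "WARN: " + sstable + " from level " + level
             + " is not on corresponding level in the leveled manifest";
    }

    public static void main(String[] args) {
        System.out.println(maybeWarn(OperationType.COMPACTION, "ma-1-big-Data.db", 0));
        System.out.println(maybeWarn(OperationType.VALIDATION, "ma-1-big-Data.db", 0)); // null
    }
}
```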






cassandra git commit: Rework CSM.getScanners synchronization

2017-09-12 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk 37771f31b -> 7d4d1a325


Rework CSM.getScanners synchronization

Patch by Blake Eggleston; Reviewed by Marcus Eriksson for CASSANDRA-13786


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d4d1a32
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d4d1a32
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d4d1a32

Branch: refs/heads/trunk
Commit: 7d4d1a32581ff40ed1049833631832054bcf2316
Parents: 37771f3
Author: Blake Eggleston 
Authored: Thu Aug 24 13:00:44 2017 -0700
Committer: Blake Eggleston 
Committed: Tue Sep 12 13:31:47 2017 -0700

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionManager.java| 21 +++-
 .../compaction/CompactionStrategyManager.java   | 56 
 .../db/compaction/PendingRepairManager.java |  9 +---
 4 files changed, 53 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d4d1a32/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a5dc68d..ebe0dc0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Rework CompactionStrategyManager.getScanners synchronization 
(CASSANDRA-13786)
  * Add additional unit tests for batch behavior, TTLs, Timestamps 
(CASSANDRA-13846)
  * Add keyspace and table name in schema validation exception (CASSANDRA-13845)
  * Emit metrics whenever we hit tombstone failures and warn thresholds 
(CASSANDRA-13771)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d4d1a32/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 722a5d0..5619da7 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -642,17 +642,12 @@ public class CompactionManager implements 
CompactionManagerMBean
 logger.info("{} Starting anticompaction for {}.{} on {}/{} 
sstables", PreviewKind.NONE.logPrefix(parentRepairSession), 
cfs.keyspace.getName(), cfs.getTableName(), validatedForRepair.size(), 
cfs.getLiveSSTables().size());
 logger.trace("{} Starting anticompaction for ranges {}", 
PreviewKind.NONE.logPrefix(parentRepairSession), ranges);
 Set<SSTableReader> sstables = new HashSet<>(validatedForRepair);
-Set<SSTableReader> mutatedRepairStatuses = new HashSet<>();
-// we should only notify that repair status changed if it actually 
did:
-Set<SSTableReader> mutatedRepairStatusToNotify = new HashSet<>();
-Map<SSTableReader, Boolean> wasRepairedBefore = new HashMap<>();
-for (SSTableReader sstable : sstables)
-wasRepairedBefore.put(sstable, sstable.isRepaired());
 
 Set<SSTableReader> nonAnticompacting = new HashSet<>();
 
 Iterator<SSTableReader> sstableIterator = sstables.iterator();
 List<Range<Token>> normalizedRanges = Range.normalize(ranges);
+Set<SSTableReader> fullyContainedSSTables = new HashSet<>();
 
 while (sstableIterator.hasNext())
 {
@@ -667,11 +662,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 if (r.contains(sstableRange))
 {
 logger.info("{} SSTable {} fully contained in range 
{}, mutating repairedAt instead of anticompacting", 
PreviewKind.NONE.logPrefix(parentRepairSession), sstable, r);
-
sstable.descriptor.getMetadataSerializer().mutateRepaired(sstable.descriptor, 
repairedAt, pendingRepair);
-sstable.reloadSSTableMetadata();
-mutatedRepairStatuses.add(sstable);
-if (!wasRepairedBefore.get(sstable))
-mutatedRepairStatusToNotify.add(sstable);
+fullyContainedSSTables.add(sstable);
 sstableIterator.remove();
 shouldAnticompact = true;
 break;
@@ -690,10 +681,10 @@ public class CompactionManager implements 
CompactionManagerMBean
 sstableIterator.remove();
 }
 }
-
cfs.metric.bytesMutatedAnticompaction.inc(SSTableReader.getTotalBytes(mutatedRepairStatuses));
-
cfs.getTracker().notifySSTableRepairedStatusChanged(mutatedRepairStatusToNotify);
-txn.cancel(Sets.union(nonAnticompacting, mutatedRepairStatuses));
-

[jira] [Commented] (CASSANDRA-13123) Draining a node might fail to delete all inactive commitlogs

2017-09-12 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163579#comment-16163579
 ] 

Jason Brown commented on CASSANDRA-13123:
-

Sorry this fell off my review radar (more than) a few months ago. For the last 
month, however, I've been trying to run this patch, rebased on 3.0/3.11/trunk, 
on circleci and the results have almost always been broken (in ways seemingly 
unrelated to this ticket). I've run it locally and everything seemed legit, and 
I've now run the utests on the apache jenkins server, and things were good (a 
few completely unrelated things failed):

||3.0||3.11||trunk||
|[apache utest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/9/]|[apache utest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/10/]|[apache utest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/11/]|

Running the 
[dtests|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/304/]
 now (only for 3.0), and if it looks good I'll commit.

> Draining a node might fail to delete all inactive commitlogs
> 
>
> Key: CASSANDRA-13123
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13123
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jan Urbański
>Assignee: Jan Urbański
> Fix For: 3.8
>
> Attachments: 13123-2.2.8.txt, 13123-3.0.10.txt, 13123-3.9.txt, 
> 13123-trunk.txt
>
>
> After issuing a drain command, it's possible that not all of the inactive 
> commitlogs are removed.
> The drain command shuts down the CommitLog instance, which in turn shuts down 
> the CommitLogSegmentManager. This has the effect of discarding any pending 
> management tasks it might have, like the removal of inactive commitlogs.
> This in turn leads to an excessive amount of commitlogs being left behind 
> after a drain and a lengthy recovery after a restart. With a fleet of dozens 
> of nodes, each of them leaving several GB of commitlogs after a drain and 
> taking up to two minutes to recover them on restart, the additional time 
> required to restart the entire fleet becomes noticeable.
> This problem is not present in 3.x or trunk because of the CLSM rewrite done 
> in CASSANDRA-8844.






[jira] [Updated] (CASSANDRA-13595) Short read protection doesn't work at the end of a partition

2017-09-12 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13595:
---
Labels: Correctness  (was: )

> Short read protection doesn't work at the end of a partition
> 
>
> Key: CASSANDRA-13595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13595
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Andrés de la Peña
>Assignee: ZhaoYang
>  Labels: Correctness
>
> It seems that short read protection doesn't work when the short read is done 
> at the end of a partition in a range query. The final assertion of this dtest 
> fails:
> {code}
> def short_read_partitions_delete_test(self):
> cluster = self.cluster
> cluster.set_configuration_options(values={'hinted_handoff_enabled': 
> False})
> cluster.set_batch_commitlog(enabled=True)
> cluster.populate(2).start(wait_other_notice=True)
> node1, node2 = self.cluster.nodelist()
> session = self.patient_cql_connection(node1)
> create_ks(session, 'ks', 2)
> session.execute("CREATE TABLE t (k int, c int, PRIMARY KEY(k, c)) 
> WITH read_repair_chance = 0.0")
> # we write 1 and 2 in a partition: all nodes get it.
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (1, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (2, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> # we delete partition 1: only node 1 gets it.
> node2.flush()
> node2.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 1"))
> node2.start(wait_other_notice=True)
> # we delete partition 2: only node 2 gets it.
> node1.flush()
> node1.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node2, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 2"))
> node1.start(wait_other_notice=True)
> # read from both nodes
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ALL)
> assert_none(session, "SELECT * FROM t LIMIT 1")
> {code}
> However, the dtest passes if we remove the {{LIMIT 1}}.
> Short read protection [uses a 
> {{SinglePartitionReadCommand}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DataResolver.java#L484],
>  maybe it should use a {{PartitionRangeReadCommand}} instead?






[jira] [Commented] (CASSANDRA-13797) RepairJob blocks on syncTasks

2017-09-12 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163095#comment-16163095
 ] 

Blake Eggleston commented on CASSANDRA-13797:
-

Adding 3.0 branch & tests since it's also affected.

[3.0|https://github.com/bdeggleston/cassandra/tree/13797-3.0]
[utest|https://circleci.com/gh/bdeggleston/cassandra/117]
[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/302/]

> RepairJob blocks on syncTasks
> -
>
> Key: CASSANDRA-13797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13797
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> The thread running {{RepairJob}} blocks while it waits for the validations it 
> starts to complete ([see 
> here|https://github.com/bdeggleston/cassandra/blob/9fdec0a82851f5c35cd21d02e8c4da8fc685edb2/src/java/org/apache/cassandra/repair/RepairJob.java#L185]).
>  However, the downstream callbacks (ie: the post-repair cleanup stuff) aren't 
> waiting for {{RepairJob#run}} to return, they're waiting for a result to be 
> set on RepairJob the future, which happens after the sync tasks have 
> completed. This post repair cleanup stuff also immediately shuts down the 
> executor {{RepairJob#run}} is running in. So in noop repair sessions, where 
> there's nothing to stream, I'm seeing the callbacks sometimes fire before 
> {{RepairJob#run}} wakes up, causing an {{InterruptedException}} to be thrown.
> I'm pretty sure this can just be removed, but I'd like a second opinion. This 
> appears to just be a holdover from before repair coordination became async. I 
> thought it might be doing some throttling by blocking, but each repair 
> session gets its own executor, and validation is throttled by the fixed 
> size executors doing the actual work of validation, so I don't think we need 
> to keep this around.
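The blocking-vs-callback distinction described above can be sketched as follows. This is a minimal analogy using {{java.util.concurrent.CompletableFuture}} (Cassandra's repair code actually uses Guava listenable futures); all names here are illustrative, not the real RepairJob API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncRepairSketch {
    // Chain the follow-up ("sync") stage onto the validation futures instead
    // of parking the coordinating thread on them. Because nothing blocks,
    // a downstream callback can shut the executor down without interrupting
    // a thread that is still waiting inside run().
    static int runValidations() {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        try {
            List<CompletableFuture<String>> validations = List.of(
                CompletableFuture.supplyAsync(() -> "tree-A", executor),
                CompletableFuture.supplyAsync(() -> "tree-B", executor));

            // Non-blocking: this stage fires only once every validation has
            // completed, rather than run() waiting on each validation.
            CompletableFuture<Integer> synced = CompletableFuture
                .allOf(validations.toArray(new CompletableFuture[0]))
                .thenApply(ignored -> validations.size());

            return synced.join();
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("validations completed: " + runValidations());
    }
}
```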



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-09-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163085#comment-16163085
 ] 

Sylvain Lebresne commented on CASSANDRA-12373:
--

The main remaining thing I'm not sure about is that it seems possible to 
have different schemas (meaning, content of the schema tables) for what is 
essentially the same table, depending on how it was created/upgraded. Namely, 
it appears a dense SCF may have 1 or 2 clusterings and may or may not have 
definitions for the so-called super column "key" and "value" columns.

This makes it hard, at least for me, to reason about things and have confidence 
it always works as expected. It also feels error-prone going forward. 
Typically, most code is written expecting that 
{{CFMetaData.primaryKeyColumns()}} will always equal 
{{CFMetaData.partitionKeyColumns() + CFMetaData.clusteringColumns()}}, but 
that's not necessarily the case here for SCF (and whether it's the case or not 
depends more on how the table was created than on the table definition). 
Note that I'm not saying this particular example is a problem today, I believe 
it's not, but I'm worried about how fragile this feels.
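The invariant in question can be sketched like this. The lists below are simplified stand-ins for {{CFMetaData}}'s column definitions, not Cassandra's actual types:

```java
import java.util.ArrayList;
import java.util.List;

public class PrimaryKeyInvariant {
    // Simplified stand-ins for CFMetaData's column lists.
    static final List<String> partitionKeyColumns = List.of("key");
    static final List<String> clusteringColumns = List.of("column1", "column2");

    // What most call sites implicitly assume primaryKeyColumns() returns:
    // the partition key columns followed by the clustering columns.
    static List<String> primaryKeyColumns() {
        List<String> pk = new ArrayList<>(partitionKeyColumns);
        pk.addAll(clusteringColumns);
        return pk;
    }

    public static void main(String[] args) {
        // For a dense SCF whose SC "key" column ended up stored as REGULAR
        // rather than CLUSTERING, clusteringColumns would be shorter and
        // this equality would no longer hold.
        List<String> expected = List.of("key", "column1", "column2");
        System.out.println(primaryKeyColumns().equals(expected));
    }
}
```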

So my preference would be to force things to be more consistent. What I mean 
is that I would make it so that:
* in the schema tables, every dense SCF table has 2 {{CLUSTERING}} definitions (the 1st 
"true" clustering, and the 2nd standing for the SC "key" column) and 2 
{{REGULAR}} definitions (the SC "map" and the SC "value" columns). Note that I 
think it's important we save the "key" column as a {{CLUSTERING}} one: 
otherwise, if both the "key" and "value" column definitions are {{REGULAR}} (as I 
think they can be in the current patch), you can't distinguish which is which 
later on (and I think that's a current bug of 
{{SuperColumnCompatibility.getSuperCfKeyColumn}}).
* but at the level of {{CFMetaData}}, we extract the "key" and "value" columns 
into their respective fields, but otherwise remove them from {{clusteringColumns}} 
and {{partitionColumns}}.


Other than that, a bunch of largely minor issues:
* In {{CFMetaData.renameColumn()}}, we appear to allow renaming every column 
of any SCF, including non-dense ones. I don't think that was allowed in 2.x 
(renaming non-PK columns of non-dense SCF through CQL) and I suggest we keep 
not supporting it. In fact, I don't think it's entirely safe in 
some complex cases of users still using thrift and doing schema changes from it.
* I don't think the change in {{CFMetaData.makeLegacyDefaultValidator}} is 
correct. That said, I don't think the previous code was correct either. If I'm 
not mistaken, what we should be returning in the SCF case is 
{{((MapType)compactValueColumn().type).valueComparator()}}.
* In {{SuperColumnCompatibility.prepareUpdateOperations}}, after the first 
loop, I think we should check that {{superColumnKey != null}} (and provide a 
meaningful error message if that's not the case). I believe otherwise we might 
NPE when handling the {{Operation}}s created.
* In {{SuperColumnCompatibility.columnNameGenerator}}, I'm not sure I fully 
understand the reason for always excluding {{"column1"}} (despite the comment). 
Not that it's really a big deal.
* In {{SuperColumnCompatibility.SuperColumnRestrictions}}, regarding the 
different javadocs:
** for the class javadoc, since things are tricky, when saying "the default 
column names are used", I think that's a good place to remind what "column1" 
and "column2" mean, both in terms of the internal representation, of 
their CQL exposure, and of the thrift correspondence. Or maybe move such an 
explanation to the {{SuperColumnCompatibility}} class javadoc and point to it?
** the example for {{mutliEQRestriction}} should be {{... AND (column1, column2) = 
('value1', 1)}}, but it currently uses a {{>}}.
** for {{keyInRestriction}}, the "This operation does _not_ have a direct 
Thrift counterpart" isn't true. And in fact, I'm not sure why we have to fetch 
everything and filter: can't we just handle this in {{getColumnFilter}} by only 
selecting the map entries we want? Note that the one operation that does not 
have a Thrift counterpart is {{mutliSliceRestriction}} (and, technically, 
any operation on strict bounds, since Thrift was always inclusive).
** for {{keyEQRestriction}}, I believe "in `getRowFilter`" is supposed to be 
"in `getColumnFilter`". Using a "\{@link\}" probably wouldn't hurt either :).
** Nit: there are a few typos in those comments ("prece*e*ding" instead of 
"preceding", "exlusive", "enitre", "... in this case since, since ...").

And a few nitpicks:
* in {{MultiColumnRelation}}, both methods have {{List 
receivers = receivers(cfm)}}, but then on the next line they call 
{{receivers(cfm)}} instead of just reusing {{receivers}}.
* In {{Relation}}, I'd extend the error message to something like 
{{"Unsupported operation (" + this + ") on super 

[jira] [Commented] (CASSANDRA-13622) Better config validation/documentation

2017-09-12 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163058#comment-16163058
 ] 

ZhaoYang commented on CASSANDRA-13622:
--

[~KurtG]  [~adelapena] thank you 

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Kurt Greaves
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> There are a number of properties in the yaml that are "in_mb", but 
> resolve to bytes when calculated in {{DatabaseDescriptor.java}} and are 
> stored in ints. This means that their maximum value is 2047, as anything 
> higher, when converted to bytes, overflows the int.
> Where possible/reasonable we should convert these to longs, and store them as 
> longs. If there is no reason for the value to ever be >2047 we should at 
> least document that as the max value, or better yet make it an error if set 
> higher than that. Note that although it's bad practice to increase a lot of 
> them to such high values, there may be cases where it is necessary, in 
> which case we should handle it appropriately rather than overflowing and 
> surprising the user. That is, causing it to break, but not in the way the user 
> expected it to :)
> Following are functions that currently could be at risk of the above:
> {code:java|title=DatabaseDescriptor.java}
> getThriftFramedTransportSize()
> getMaxValueSize()
> getCompactionLargePartitionWarningThreshold()
> getCommitLogSegmentSize()
> getNativeTransportMaxFrameSize()
> # These are in KB so max value of 2096128
> getBatchSizeWarnThreshold()
> getColumnIndexSize()
> getColumnIndexCacheSize()
> getMaxMutationSize()
> {code}
> Note we may not actually need to fix all of these, and there may be more. 
> This was just from a rough scan over the code.
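The overflow described in the report is easy to demonstrate. A minimal sketch (the 2047 ceiling follows from an {{int}} holding at most 2^31 - 1 bytes, and 2048 MB being exactly 2^31):

```java
public class MbOverflowSketch {
    public static void main(String[] args) {
        int mb = 2048; // smallest *_in_mb value whose byte count overflows

        // Computing the byte count in int arithmetic wraps around:
        // 2048 * 1024 * 1024 == 2^31, one past Integer.MAX_VALUE.
        int asInt = mb * 1024 * 1024;

        // Widening to long before multiplying gives the intended value.
        long asLong = (long) mb * 1024 * 1024;

        System.out.println(asInt);  // wrapped to a negative value
        System.out.println(asLong); // 2147483648, the correct byte count
    }
}
```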






[jira] [Updated] (CASSANDRA-13622) Better config validation/documentation

2017-09-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-13622:
--
   Resolution: Fixed
Fix Version/s: (was: 4.0)
   4.x
   3.11.x
   3.0.x
   Status: Resolved  (was: Ready to Commit)

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Kurt Greaves
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x, 4.x
>
>






[jira] [Commented] (CASSANDRA-13622) Better config validation/documentation

2017-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163035#comment-16163035
 ] 

Andrés de la Peña commented on CASSANDRA-13622:
---

Committed as 
[8fc9275d3020fa0c80ed1852726be0a5a63e487c|https://github.com/apache/cassandra/commit/8fc9275d3020fa0c80ed1852726be0a5a63e487c].

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Kurt Greaves
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
>






[2/5] cassandra git commit: Improve config validation and documentation on overflow and NPE

2017-09-12 Thread adelapena
Improve config validation and documentation on overflow and NPE

patch by Zhao Yang; reviewed by Kurt Greaves for CASSANDRA-13622


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a586f6c8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a586f6c8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a586f6c8

Branch: refs/heads/trunk
Commit: a586f6c88dab173663b765261d084ed8410efe81
Parents: 1210365
Author: Zhao Yang 
Authored: Tue Sep 12 14:31:07 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 15:06:23 2017 +0100

--
 CHANGES.txt  |  1 +
 conf/cassandra.yaml  |  5 +++--
 .../apache/cassandra/config/DatabaseDescriptor.java  | 15 +++
 src/java/org/apache/cassandra/utils/FBUtilities.java |  7 +--
 4 files changed, 24 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a586f6c8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 26b1794..6053117 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 3.0.15
  * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
+ * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
  * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
  * Better handle corrupt final commitlog segment (CASSANDRA-11995)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a586f6c8/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 22491c6..d77d27a 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -315,6 +315,7 @@ commitlog_sync_period_in_ms: 1
 # is reasonable.
 # Max mutation size is also configurable via max_mutation_size_in_kb setting in
 # cassandra.yaml. The default is half the size commitlog_segment_size_in_mb * 
1024.
+# This should be positive and less than 2048.
 #
 # NOTE: If max_mutation_size_in_kb is set explicitly then 
commitlog_segment_size_in_mb must
 # be set to at least twice the size of max_mutation_size_in_kb / 1024
@@ -517,7 +518,7 @@ native_transport_port: 9042
 #
 # The maximum size of allowed frame. Frame (requests) larger than this will
 # be rejected as invalid. The default is 256MB. If you're changing this 
parameter,
-# you may want to adjust max_value_size_in_mb accordingly.
+# you may want to adjust max_value_size_in_mb accordingly. This should be 
positive and less than 2048.
 # native_transport_max_frame_size_in_mb: 256
 
 # The maximum number of concurrent client connections.
@@ -960,7 +961,7 @@ windows_timer_interval: 1
 
 # Maximum size of any value in SSTables. Safety measure to detect SSTable 
corruption
 # early. Any value size larger than this threshold will result into marking an 
SSTable
-# as corrupted.
+# as corrupted. This should be positive and less than 2048.
 # max_value_size_in_mb: 256
 
 # Coalescing Strategies #

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a586f6c8/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index aba7617..029db89 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -442,6 +442,9 @@ public class DatabaseDescriptor
 
 if (conf.native_transport_max_frame_size_in_mb <= 0)
 throw new 
ConfigurationException("native_transport_max_frame_size_in_mb must be positive, 
but was " + conf.native_transport_max_frame_size_in_mb, false);
+else if (conf.native_transport_max_frame_size_in_mb >= 2048)
+throw new 
ConfigurationException("native_transport_max_frame_size_in_mb must be smaller 
than 2048, but was "
++ conf.native_transport_max_frame_size_in_mb, false);
 
 // fail early instead of OOMing (see CASSANDRA-8116)
 if (ThriftServer.HSHA.equals(conf.rpc_server_type) && 
conf.rpc_max_threads == Integer.MAX_VALUE)
@@ -576,6 +579,8 @@ public class DatabaseDescriptor
 /* data file and commit log directories. they get created later, when 
they're needed. */
 for (String datadir : conf.data_file_directories)
 {
+if (datadir == null)
+throw new ConfigurationException("data_file_directories must 
not 

[1/5] cassandra git commit: Improve config validation and documentation on overflow and NPE

2017-09-12 Thread adelapena
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 12841938a -> c05d98a30
  refs/heads/trunk 1a679cf5b -> 37771f31b


Improve config validation and documentation on overflow and NPE

patch by Zhao Yang; reviewed by Kurt Greaves for CASSANDRA-13622


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a586f6c8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a586f6c8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a586f6c8

Branch: refs/heads/cassandra-3.11
Commit: a586f6c88dab173663b765261d084ed8410efe81
Parents: 1210365
Author: Zhao Yang 
Authored: Tue Sep 12 14:31:07 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 15:06:23 2017 +0100

--
 CHANGES.txt  |  1 +
 conf/cassandra.yaml  |  5 +++--
 .../apache/cassandra/config/DatabaseDescriptor.java  | 15 +++
 src/java/org/apache/cassandra/utils/FBUtilities.java |  7 +--
 4 files changed, 24 insertions(+), 4 deletions(-)
--



[3/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-12 Thread adelapena
Merge branch 'cassandra-3.0' into cassandra-3.11

# Conflicts:
#   src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c05d98a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c05d98a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c05d98a3

Branch: refs/heads/trunk
Commit: c05d98a30b56fb8dd3924780150a631123ce0851
Parents: 1284193 a586f6c
Author: Andrés de la Peña 
Authored: Tue Sep 12 15:29:13 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 15:29:13 2017 +0100

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c05d98a3/CHANGES.txt
--
diff --cc CHANGES.txt
index 752d9aa,6053117..2c48ab2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,7 +1,18 @@@
 -3.0.15
 +3.11.1
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
-  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
   * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
+  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
   * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
 + * Avoid assertion error when IndexSummary > 2G (CASSANDRA-12014)
   * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
   * Better handle corrupt final commitlog segment (CASSANDRA-11995)
   * StreamingHistogram is not thread safe (CASSANDRA-13756)





[5/5] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-12 Thread adelapena
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37771f31
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37771f31
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37771f31

Branch: refs/heads/trunk
Commit: 37771f31bcc35747fff350f1ec7a4a7e312f19dd
Parents: 1a679cf c05d98a
Author: Andrés de la Peña 
Authored: Tue Sep 12 15:29:55 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 15:29:55 2017 +0100

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/37771f31/CHANGES.txt
--





[4/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-12 Thread adelapena
Merge branch 'cassandra-3.0' into cassandra-3.11

# Conflicts:
#   src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c05d98a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c05d98a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c05d98a3

Branch: refs/heads/cassandra-3.11
Commit: c05d98a30b56fb8dd3924780150a631123ce0851
Parents: 1284193 a586f6c
Author: Andrés de la Peña 
Authored: Tue Sep 12 15:29:13 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 15:29:13 2017 +0100

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c05d98a3/CHANGES.txt
--
diff --cc CHANGES.txt
index 752d9aa,6053117..2c48ab2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,7 +1,18 @@@
 -3.0.15
 +3.11.1
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
-  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
   * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
+  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
   * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
 + * Avoid assertion error when IndexSummary > 2G (CASSANDRA-12014)
   * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
   * Better handle corrupt final commitlog segment (CASSANDRA-11995)
   * StreamingHistogram is not thread safe (CASSANDRA-13756)





cassandra git commit: Improve config validation and documentation on overflow and NPE

2017-09-12 Thread adelapena
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 12103653f -> a586f6c88


Improve config validation and documentation on overflow and NPE

patch by Zhao Yang; reviewed by Kurt Greaves for CASSANDRA-13622


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a586f6c8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a586f6c8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a586f6c8

Branch: refs/heads/cassandra-3.0
Commit: a586f6c88dab173663b765261d084ed8410efe81
Parents: 1210365
Author: Zhao Yang 
Authored: Tue Sep 12 14:31:07 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 15:06:23 2017 +0100

--
 CHANGES.txt  |  1 +
 conf/cassandra.yaml  |  5 +++--
 .../apache/cassandra/config/DatabaseDescriptor.java  | 15 +++
 src/java/org/apache/cassandra/utils/FBUtilities.java |  7 +--
 4 files changed, 24 insertions(+), 4 deletions(-)
--



[1/5] cassandra git commit: Improve config validation and documentation on overflow and NPE

2017-09-12 Thread adelapena
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 eb027a1de -> 12841938a
  refs/heads/trunk 826c9f4a6 -> 1a679cf5b


Improve config validation and documentation on overflow and NPE

patch by Zhao Yang; reviewed by Kurt Greaves for CASSANDRA-13622


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8fc9275d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8fc9275d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8fc9275d

Branch: refs/heads/cassandra-3.11
Commit: 8fc9275d3020fa0c80ed1852726be0a5a63e487c
Parents: e86bef4
Author: Zhao Yang 
Authored: Tue Sep 12 14:31:07 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 14:31:07 2017 +0100

--
 CHANGES.txt  |  1 +
 conf/cassandra.yaml  |  5 +++--
 .../apache/cassandra/config/DatabaseDescriptor.java  | 15 +++
 src/java/org/apache/cassandra/utils/FBUtilities.java |  7 +--
 4 files changed, 24 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fc9275d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d155e..b00e47c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
  * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
  * Better handle corrupt final commitlog segment (CASSANDRA-11995)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fc9275d/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 22491c6..d77d27a 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -315,6 +315,7 @@ commitlog_sync_period_in_ms: 1
 # is reasonable.
 # Max mutation size is also configurable via max_mutation_size_in_kb setting in
 # cassandra.yaml. The default is half the size commitlog_segment_size_in_mb * 
1024.
+# This should be positive and less than 2048.
 #
 # NOTE: If max_mutation_size_in_kb is set explicitly then 
commitlog_segment_size_in_mb must
 # be set to at least twice the size of max_mutation_size_in_kb / 1024
@@ -517,7 +518,7 @@ native_transport_port: 9042
 #
 # The maximum size of allowed frame. Frame (requests) larger than this will
 # be rejected as invalid. The default is 256MB. If you're changing this 
parameter,
-# you may want to adjust max_value_size_in_mb accordingly.
+# you may want to adjust max_value_size_in_mb accordingly. This should be 
positive and less than 2048.
 # native_transport_max_frame_size_in_mb: 256
 
 # The maximum number of concurrent client connections.
@@ -960,7 +961,7 @@ windows_timer_interval: 1
 
 # Maximum size of any value in SSTables. Safety measure to detect SSTable 
corruption
 # early. Any value size larger than this threshold will result into marking an 
SSTable
-# as corrupted.
+# as corrupted. This should be positive and less than 2048.
 # max_value_size_in_mb: 256
 
 # Coalescing Strategies #

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fc9275d/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index aba7617..029db89 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -442,6 +442,9 @@ public class DatabaseDescriptor
 
 if (conf.native_transport_max_frame_size_in_mb <= 0)
 throw new 
ConfigurationException("native_transport_max_frame_size_in_mb must be positive, 
but was " + conf.native_transport_max_frame_size_in_mb, false);
+else if (conf.native_transport_max_frame_size_in_mb >= 2048)
+throw new 
ConfigurationException("native_transport_max_frame_size_in_mb must be smaller 
than 2048, but was "
++ conf.native_transport_max_frame_size_in_mb, false);
 
 // fail early instead of OOMing (see CASSANDRA-8116)
 if (ThriftServer.HSHA.equals(conf.rpc_server_type) && 
conf.rpc_max_threads == Integer.MAX_VALUE)
@@ -576,6 +579,8 @@ public class DatabaseDescriptor
 /* data file and commit log directories. they get created later, when 
they're needed. */
 for (String datadir : conf.data_file_directories)
 {
+if (datadir == null)
+throw new ConfigurationException("data_file_directories must 
not contain empty entry", false);

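The new {{>= 2048}} guard in DatabaseDescriptor above reflects a 32-bit overflow limit: once a size given in MB is converted to a byte count held in an int, 2048 MB equals 2^31 bytes and no longer fits. A minimal standalone sketch of that constraint — class and method names are illustrative, not Cassandra's actual API:

```java
// Sketch of why *_in_mb settings must stay in (0, 2048) when the
// resulting byte count is kept in a signed 32-bit int.
public class FrameSizeCheck {
    static int mbToBytes(int mb) {
        if (mb <= 0 || mb >= 2048)
            throw new IllegalArgumentException(
                "size_in_mb must be in (0, 2048), but was " + mb);
        return mb * 1024 * 1024; // safe: result < 2^31
    }

    public static void main(String[] args) {
        System.out.println(mbToBytes(256));     // 268435456 bytes
        // Without the guard, 2048 MB wraps in int arithmetic:
        System.out.println(2048 * 1024 * 1024); // -2147483648
    }
}
```

In Java, `2048 * 1024 * 1024` is evaluated in int arithmetic and wraps to `Integer.MIN_VALUE`, which is why the configuration check rejects the value up front rather than letting a negative buffer size surface later at runtime.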
[3/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-12 Thread adelapena
Merge branch 'cassandra-3.0' into cassandra-3.11

# Conflicts:
#   CHANGES.txt
#   src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12841938
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12841938
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12841938

Branch: refs/heads/trunk
Commit: 12841938a0cd420d626749e91f5f696f26354b03
Parents: eb027a1 8fc9275
Author: Andrés de la Peña 
Authored: Tue Sep 12 14:52:18 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 14:52:18 2017 +0100

--
 CHANGES.txt  |  1 +
 conf/cassandra.yaml  |  5 +++--
 .../apache/cassandra/config/DatabaseDescriptor.java  | 15 +++
 src/java/org/apache/cassandra/utils/FBUtilities.java |  7 +--
 4 files changed, 24 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/12841938/CHANGES.txt
--
diff --cc CHANGES.txt
index 099a869,b00e47c..752d9aa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,17 -1,6 +1,18 @@@
 -3.0.15
 +3.11.1
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
 + * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
   * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
 + * Avoid assertion error when IndexSummary > 2G (CASSANDRA-12014)
   * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
   * Better handle corrupt final commitlog segment (CASSANDRA-11995)
   * StreamingHistogram is not thread safe (CASSANDRA-13756)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12841938/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index 4bb5840,d77d27a..e847e54
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -1105,96 -959,11 +1106,96 @@@ enable_scripted_user_defined_functions
  # setting.
  windows_timer_interval: 1
  
 +
 +# Enables encrypting data at-rest (on disk). Different key providers can be 
plugged in, but the default reads from
 +# a JCE-style keystore. A single keystore can hold multiple keys, but the one 
referenced by
 +# the "key_alias" is the only key that will be used for encrypt opertaions; 
previously used keys
 +# can still (and should!) be in the keystore and will be used on decrypt 
operations
 +# (to handle the case of key rotation).
 +#
 +# It is strongly recommended to download and install Java Cryptography 
Extension (JCE)
 +# Unlimited Strength Jurisdiction Policy Files for your version of the JDK.
 +# (current link: 
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html)
 +#
 +# Currently, only the following file types are supported for transparent data 
encryption, although
 +# more are coming in future cassandra releases: commitlog, hints
 +transparent_data_encryption_options:
 +enabled: false
 +chunk_length_kb: 64
 +cipher: AES/CBC/PKCS5Padding
 +key_alias: testing:1
 +# CBC IV length for AES needs to be 16 bytes (which is also the default 
size)
 +# iv_length: 16
 +key_provider: 
 +  - class_name: org.apache.cassandra.security.JKSKeyProvider
 +parameters: 
 +  - keystore: conf/.keystore
 +keystore_password: cassandra
 +store_type: JCEKS
 +key_password: cassandra
 +
 +
 +#
 +# SAFETY THRESHOLDS #
 +#
 +
 +# When executing a scan, within or across a partition, we need to keep the
 +# tombstones seen in memory so we can return them to the coordinator, which
 +# will use them to make sure other replicas also know about the deleted rows.
 +# With workloads that generate a lot of tombstones, this can cause performance
 +# 

[5/5] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-12 Thread adelapena
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a679cf5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a679cf5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a679cf5

Branch: refs/heads/trunk
Commit: 1a679cf5b612410d72fc69918617aad258821e4c
Parents: 826c9f4 1284193
Author: Andrés de la Peña 
Authored: Tue Sep 12 14:55:31 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 14:55:31 2017 +0100

--
 CHANGES.txt  |  1 +
 conf/cassandra.yaml  |  5 +++--
 .../apache/cassandra/config/DatabaseDescriptor.java  | 15 +++
 src/java/org/apache/cassandra/utils/FBUtilities.java |  7 +--
 4 files changed, 24 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a679cf5/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a679cf5/conf/cassandra.yaml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a679cf5/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a679cf5/src/java/org/apache/cassandra/utils/FBUtilities.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/5] cassandra git commit: Improve config validation and documentation on overflow and NPE

2017-09-12 Thread adelapena
Improve config validation and documentation on overflow and NPE

patch by Zhao Yang; reviewed by Kurt Greaves for CASSANDRA-13622


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8fc9275d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8fc9275d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8fc9275d

Branch: refs/heads/trunk
Commit: 8fc9275d3020fa0c80ed1852726be0a5a63e487c
Parents: e86bef4
Author: Zhao Yang 
Authored: Tue Sep 12 14:31:07 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 14:31:07 2017 +0100

--
 CHANGES.txt  |  1 +
 conf/cassandra.yaml  |  5 +++--
 .../apache/cassandra/config/DatabaseDescriptor.java  | 15 +++
 src/java/org/apache/cassandra/utils/FBUtilities.java |  7 +--
 4 files changed, 24 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fc9275d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d155e..b00e47c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
  * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
  * Better handle corrupt final commitlog segment (CASSANDRA-11995)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fc9275d/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 22491c6..d77d27a 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -315,6 +315,7 @@ commitlog_sync_period_in_ms: 1
 # is reasonable.
 # Max mutation size is also configurable via max_mutation_size_in_kb setting in
 # cassandra.yaml. The default is half the size commitlog_segment_size_in_mb * 
1024.
+# This should be positive and less than 2048.
 #
 # NOTE: If max_mutation_size_in_kb is set explicitly then 
commitlog_segment_size_in_mb must
 # be set to at least twice the size of max_mutation_size_in_kb / 1024
@@ -517,7 +518,7 @@ native_transport_port: 9042
 #
 # The maximum size of allowed frame. Frame (requests) larger than this will
 # be rejected as invalid. The default is 256MB. If you're changing this 
parameter,
-# you may want to adjust max_value_size_in_mb accordingly.
+# you may want to adjust max_value_size_in_mb accordingly. This should be 
positive and less than 2048.
 # native_transport_max_frame_size_in_mb: 256
 
 # The maximum number of concurrent client connections.
@@ -960,7 +961,7 @@ windows_timer_interval: 1
 
 # Maximum size of any value in SSTables. Safety measure to detect SSTable 
corruption
 # early. Any value size larger than this threshold will result into marking an 
SSTable
-# as corrupted.
+# as corrupted. This should be positive and less than 2048.
 # max_value_size_in_mb: 256
 
 # Coalescing Strategies #

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8fc9275d/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index aba7617..029db89 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -442,6 +442,9 @@ public class DatabaseDescriptor
 
 if (conf.native_transport_max_frame_size_in_mb <= 0)
 throw new 
ConfigurationException("native_transport_max_frame_size_in_mb must be positive, 
but was " + conf.native_transport_max_frame_size_in_mb, false);
+else if (conf.native_transport_max_frame_size_in_mb >= 2048)
+throw new 
ConfigurationException("native_transport_max_frame_size_in_mb must be smaller 
than 2048, but was "
++ conf.native_transport_max_frame_size_in_mb, false);
 
 // fail early instead of OOMing (see CASSANDRA-8116)
 if (ThriftServer.HSHA.equals(conf.rpc_server_type) && 
conf.rpc_max_threads == Integer.MAX_VALUE)
@@ -576,6 +579,8 @@ public class DatabaseDescriptor
 /* data file and commit log directories. they get created later, when 
they're needed. */
 for (String datadir : conf.data_file_directories)
 {
+if (datadir == null)
+throw new ConfigurationException("data_file_directories must 
not contain empty entry", false);
 if (datadir.equals(conf.commitlog_directory))
 throw new 

[4/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-12 Thread adelapena
Merge branch 'cassandra-3.0' into cassandra-3.11

# Conflicts:
#   CHANGES.txt
#   src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12841938
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12841938
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12841938

Branch: refs/heads/cassandra-3.11
Commit: 12841938a0cd420d626749e91f5f696f26354b03
Parents: eb027a1 8fc9275
Author: Andrés de la Peña 
Authored: Tue Sep 12 14:52:18 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Sep 12 14:52:18 2017 +0100

--
 CHANGES.txt  |  1 +
 conf/cassandra.yaml  |  5 +++--
 .../apache/cassandra/config/DatabaseDescriptor.java  | 15 +++
 src/java/org/apache/cassandra/utils/FBUtilities.java |  7 +--
 4 files changed, 24 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/12841938/CHANGES.txt
--
diff --cc CHANGES.txt
index 099a869,b00e47c..752d9aa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,17 -1,6 +1,18 @@@
 -3.0.15
 +3.11.1
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Improve config validation and documentation on overflow and NPE 
(CASSANDRA-13622)
 + * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
   * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
 + * Avoid assertion error when IndexSummary > 2G (CASSANDRA-12014)
   * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
   * Better handle corrupt final commitlog segment (CASSANDRA-11995)
   * StreamingHistogram is not thread safe (CASSANDRA-13756)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12841938/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index 4bb5840,d77d27a..e847e54
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -1105,96 -959,11 +1106,96 @@@ enable_scripted_user_defined_functions
  # setting.
  windows_timer_interval: 1
  
 +
 +# Enables encrypting data at-rest (on disk). Different key providers can be 
plugged in, but the default reads from
 +# a JCE-style keystore. A single keystore can hold multiple keys, but the one 
referenced by
 +# the "key_alias" is the only key that will be used for encrypt opertaions; 
previously used keys
 +# can still (and should!) be in the keystore and will be used on decrypt 
operations
 +# (to handle the case of key rotation).
 +#
 +# It is strongly recommended to download and install Java Cryptography 
Extension (JCE)
 +# Unlimited Strength Jurisdiction Policy Files for your version of the JDK.
 +# (current link: 
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html)
 +#
 +# Currently, only the following file types are supported for transparent data 
encryption, although
 +# more are coming in future cassandra releases: commitlog, hints
 +transparent_data_encryption_options:
 +enabled: false
 +chunk_length_kb: 64
 +cipher: AES/CBC/PKCS5Padding
 +key_alias: testing:1
 +# CBC IV length for AES needs to be 16 bytes (which is also the default 
size)
 +# iv_length: 16
 +key_provider: 
 +  - class_name: org.apache.cassandra.security.JKSKeyProvider
 +parameters: 
 +  - keystore: conf/.keystore
 +keystore_password: cassandra
 +store_type: JCEKS
 +key_password: cassandra
 +
 +
 +#
 +# SAFETY THRESHOLDS #
 +#
 +
 +# When executing a scan, within or across a partition, we need to keep the
 +# tombstones seen in memory so we can return them to the coordinator, which
 +# will use them to make sure other replicas also know about the deleted rows.
 +# With workloads that generate a lot of tombstones, this can cause performance
 

[jira] [Updated] (CASSANDRA-13069) Local batchlog for MV may not be correctly written on node movements

2017-09-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13069:

   Resolution: Fixed
Fix Version/s: 4.0
   3.11.1
   3.0.15
   Status: Resolved  (was: Patch Available)

bq. Lgtm, +1 if CI is happy (or at least as happy as it usually is).

CI looks good, committed as {{12103653f313d6f1ef030a535986123ddcffea9c}} to 
cassandra-3.0 and merged up to cassandra-3.11 and master. Dtests merged to 
cassandra-dtest master as {{c39a85c3e7b2869ef3fffafe1380362d6469919d}}.

Thanks for the review!

> Local batchlog for MV may not be correctly written on node movements
> 
>
> Key: CASSANDRA-13069
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13069
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Sylvain Lebresne
>Assignee: Paulo Motta
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> Unless I'm really reading this wrong, I think the code 
> [here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageProxy.java#L829-L843],
>  which comes from CASSANDRA-10674, isn't working properly.
> More precisely, I believe we can have both paired and unpaired mutations, so 
> that both {{if}} can be taken, but if that's the case, the 2nd write to the 
> batchlog will basically overwrite (remove) the batchlog write of the 1st 
> {{if}} and I don't think that's the intention. In practice, this means 
> "paired" mutation won't be in the batchlog, which mean they won't be replayed 
> at all if they fail.
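The overwrite the report describes can be modeled abstractly: two batchlog writes issued under the same batch id, where the second call replaces the first, so the mutations stored by the first call are silently dropped from replay. A toy model of that failure mode — not Cassandra code:

```java
import java.util.*;

// Toy model of the CASSANDRA-13069 bug: when both branches store
// under the same batch id, the second store() replaces the first
// and the paired mutations are lost from replay.
public class BatchlogOverwriteDemo {
    static final Map<UUID, List<String>> batchlog = new HashMap<>();

    static void store(UUID id, List<String> mutations) {
        batchlog.put(id, mutations); // replaces any prior entry for id
    }

    public static void main(String[] args) {
        UUID id = UUID.randomUUID();
        store(id, List.of("paired-1", "paired-2")); // first branch taken
        store(id, List.of("unpaired-1"));           // second branch taken
        // Only the second write survives for replay:
        System.out.println(batchlog.get(id));       // [unpaired-1]
    }
}
```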



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[2/6] cassandra git commit: Fix pending view mutations handling and cleanup batchlog when there are local and remote paired mutations

2017-09-12 Thread paulo
Fix pending view mutations handling and cleanup batchlog when there are local 
and remote paired mutations

Patch by Paulo Motta; Reviewed by Sylvain Lebresne for CASSANDRA-13069


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12103653
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12103653
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12103653

Branch: refs/heads/cassandra-3.11
Commit: 12103653f313d6f1ef030a535986123ddcffea9c
Parents: e86bef4
Author: Paulo Motta 
Authored: Wed Dec 21 20:19:21 2016 -0200
Committer: Paulo Motta 
Committed: Tue Sep 12 08:30:24 2017 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/batchlog/BatchlogManager.java |  3 +-
 .../service/BatchlogResponseHandler.java|  4 +-
 .../apache/cassandra/service/StorageProxy.java  | 82 +---
 4 files changed, 41 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d155e..26b1794 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
  * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
  * Better handle corrupt final commitlog segment (CASSANDRA-11995)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java 
b/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
index b614fc5..a0b614f 100644
--- a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
@@ -67,6 +67,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
 private static final Logger logger = 
LoggerFactory.getLogger(BatchlogManager.class);
 public static final BatchlogManager instance = new BatchlogManager();
+public static final long BATCHLOG_REPLAY_TIMEOUT = 
Long.getLong("cassandra.batchlog.replay_timeout_in_ms", 
DatabaseDescriptor.getWriteRpcTimeout() * 2);
 
 private volatile long totalBatchesReplayed = 0; // no concurrency 
protection necessary as only written by replay thread.
 private volatile UUID lastReplayedUuid = UUIDGen.minTimeUUID(0);
@@ -284,7 +285,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
 public static long getBatchlogTimeout()
 {
-return DatabaseDescriptor.getWriteRpcTimeout() * 2; // enough time for 
the actual write + BM removal mutation
+return BATCHLOG_REPLAY_TIMEOUT; // enough time for the actual write + 
BM removal mutation
 }
 
 private static class ReplayingBatch

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
--
diff --git a/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java 
b/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
index ac44923..a1477e6 100644
--- a/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
+++ b/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
@@ -50,7 +50,7 @@ public class BatchlogResponseHandler extends 
AbstractWriteResponseHandler
 {
 wrapped.response(msg);
 if (requiredBeforeFinishUpdater.decrementAndGet(this) == 0)
-cleanup.run();
+cleanup.ackMutation();
 }
 
 public boolean isLatencyForSnitch()
@@ -107,7 +107,7 @@ public class BatchlogResponseHandler extends 
AbstractWriteResponseHandler
 this.callback = callback;
 }
 
-public void run()
+public void ackMutation()
 {
 if (mutationsWaitingForUpdater.decrementAndGet(this) == 0)
 callback.invoke();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 6610cf7..1ce1bc5 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -729,15 +729,16 @@ public class StorageProxy implements StorageProxyMBean
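The {{BATCHLOG_REPLAY_TIMEOUT}} change in BatchlogManager above replaces a hard-coded {{getWriteRpcTimeout() * 2}} with a system-property override via {{Long.getLong}}. The pattern can be sketched in isolation; the 2000 ms write timeout below is an assumed placeholder, not Cassandra's configured value:

```java
// Sketch of the system-property override pattern: Long.getLong(key,
// default) reads -Dcassandra.batchlog.replay_timeout_in_ms=... and
// falls back to a computed default (twice an assumed write timeout).
public class ReplayTimeoutDemo {
    static final long WRITE_RPC_TIMEOUT_MS = 2000L; // assumed placeholder

    static long batchlogReplayTimeout() {
        return Long.getLong("cassandra.batchlog.replay_timeout_in_ms",
                            WRITE_RPC_TIMEOUT_MS * 2);
    }

    public static void main(String[] args) {
        System.setProperty("cassandra.batchlog.replay_timeout_in_ms", "30000");
        System.out.println(batchlogReplayTimeout()); // 30000 once the property is set
    }
}
```

Reading the default once into a {{static final}} field, as the commit does, means the property is consulted a single time at class load rather than on every timeout lookup.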

[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-12 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb027a1d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb027a1d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb027a1d

Branch: refs/heads/trunk
Commit: eb027a1de6d97f040ef9da1552e5811539e633a0
Parents: b64a4e4 1210365
Author: Paulo Motta 
Authored: Tue Sep 12 08:31:42 2017 -0500
Committer: Paulo Motta 
Committed: Tue Sep 12 08:33:00 2017 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/batchlog/BatchlogManager.java |  3 +-
 .../service/BatchlogResponseHandler.java|  4 +-
 .../apache/cassandra/service/StorageProxy.java  | 86 +---
 4 files changed, 44 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb027a1d/CHANGES.txt
--
diff --cc CHANGES.txt
index 52775e7,26b1794..099a869
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,6 +1,17 @@@
 -3.0.15
 +3.11.1
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
   * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
 + * Avoid assertion error when IndexSummary > 2G (CASSANDRA-12014)
   * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
   * Better handle corrupt final commitlog segment (CASSANDRA-11995)
   * StreamingHistogram is not thread safe (CASSANDRA-13756)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb027a1d/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb027a1d/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb027a1d/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index b8f87d9,1ce1bc5..913c3c2
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -819,7 -756,31 +792,34 @@@ public class StorageProxy implements St
  "but this node hasn't updated its 
ring metadata yet. Adding mutation to " +
  "local batchlog to be replayed 
later.",
  mutation.key());
- nonPairedMutations.add(mutation);
+ continue;
+ }
+ 
+ // When local node is the paired endpoint just apply the 
mutation locally.
+ if 
(pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && 
StorageService.instance.isJoined())
++{
+ try
+ {
+ mutation.apply(writeCommitLog);
+ nonLocalMutations.remove(mutation);
+ cleanup.ackMutation();
+ }
+ catch (Exception exc)
+ {
+ logger.error("Error applying local view update to 
keyspace {}: {}", mutation.getKeyspaceName(), mutation);
+ throw exc;
+ }
++}
+ else
+ {
+ wrappers.add(wrapViewBatchResponseHandler(mutation,
+   
consistencyLevel,
+   
consistencyLevel,
+

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-12 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb027a1d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb027a1d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb027a1d

Branch: refs/heads/cassandra-3.11
Commit: eb027a1de6d97f040ef9da1552e5811539e633a0
Parents: b64a4e4 1210365
Author: Paulo Motta 
Authored: Tue Sep 12 08:31:42 2017 -0500
Committer: Paulo Motta 
Committed: Tue Sep 12 08:33:00 2017 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/batchlog/BatchlogManager.java |  3 +-
 .../service/BatchlogResponseHandler.java|  4 +-
 .../apache/cassandra/service/StorageProxy.java  | 86 +---
 4 files changed, 44 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb027a1d/CHANGES.txt
--
diff --cc CHANGES.txt
index 52775e7,26b1794..099a869
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,6 +1,17 @@@
 -3.0.15
 +3.11.1
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
   * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
 + * Avoid assertion error when IndexSummary > 2G (CASSANDRA-12014)
   * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
   * Better handle corrupt final commitlog segment (CASSANDRA-11995)
   * StreamingHistogram is not thread safe (CASSANDRA-13756)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb027a1d/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb027a1d/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb027a1d/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --cc src/java/org/apache/cassandra/service/StorageProxy.java
index b8f87d9,1ce1bc5..913c3c2
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@@ -819,7 -756,31 +792,34 @@@ public class StorageProxy implements St
  "but this node hasn't updated its 
ring metadata yet. Adding mutation to " +
  "local batchlog to be replayed 
later.",
  mutation.key());
- nonPairedMutations.add(mutation);
+ continue;
+ }
+ 
+ // When local node is the paired endpoint just apply the mutation locally.
+ if (pairedEndpoint.get().equals(FBUtilities.getBroadcastAddress()) && StorageService.instance.isJoined())
++{
+ try
+ {
+ mutation.apply(writeCommitLog);
+ nonLocalMutations.remove(mutation);
+ cleanup.ackMutation();
+ }
+ catch (Exception exc)
+ {
+ logger.error("Error applying local view update to keyspace {}: {}", mutation.getKeyspaceName(), mutation);
+ throw exc;
+ }
++}
+ else
+ {
+ wrappers.add(wrapViewBatchResponseHandler(mutation,
+   consistencyLevel,
+   consistencyLevel,
+   

[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-12 Thread paulo
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/826c9f4a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/826c9f4a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/826c9f4a

Branch: refs/heads/trunk
Commit: 826c9f4a6ebad8880390f8a26058d0c1f964f687
Parents: 4718358 eb027a1
Author: Paulo Motta 
Authored: Tue Sep 12 08:33:19 2017 -0500
Committer: Paulo Motta 
Committed: Tue Sep 12 08:33:19 2017 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/batchlog/BatchlogManager.java |  3 +-
 .../service/BatchlogResponseHandler.java|  4 +-
 .../apache/cassandra/service/StorageProxy.java  | 86 +---
 4 files changed, 44 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/826c9f4a/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/826c9f4a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/826c9f4a/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/826c9f4a/src/java/org/apache/cassandra/service/StorageProxy.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/6] cassandra git commit: Fix pending view mutations handling and cleanup batchlog when there are local and remote paired mutations

2017-09-12 Thread paulo
Fix pending view mutations handling and cleanup batchlog when there are local 
and remote paired mutations

Patch by Paulo Motta; Reviewed by Sylvain Lebresne for CASSANDRA-13069


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12103653
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12103653
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12103653

Branch: refs/heads/trunk
Commit: 12103653f313d6f1ef030a535986123ddcffea9c
Parents: e86bef4
Author: Paulo Motta 
Authored: Wed Dec 21 20:19:21 2016 -0200
Committer: Paulo Motta 
Committed: Tue Sep 12 08:30:24 2017 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/batchlog/BatchlogManager.java |  3 +-
 .../service/BatchlogResponseHandler.java|  4 +-
 .../apache/cassandra/service/StorageProxy.java  | 82 +---
 4 files changed, 41 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d155e..26b1794 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
  * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
  * Better handle corrupt final commitlog segment (CASSANDRA-11995)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java 
b/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
index b614fc5..a0b614f 100644
--- a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
@@ -67,6 +67,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
 private static final Logger logger = 
LoggerFactory.getLogger(BatchlogManager.class);
 public static final BatchlogManager instance = new BatchlogManager();
+public static final long BATCHLOG_REPLAY_TIMEOUT = Long.getLong("cassandra.batchlog.replay_timeout_in_ms", DatabaseDescriptor.getWriteRpcTimeout() * 2);
 
 private volatile long totalBatchesReplayed = 0; // no concurrency 
protection necessary as only written by replay thread.
 private volatile UUID lastReplayedUuid = UUIDGen.minTimeUUID(0);
@@ -284,7 +285,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
 public static long getBatchlogTimeout()
 {
-return DatabaseDescriptor.getWriteRpcTimeout() * 2; // enough time for 
the actual write + BM removal mutation
+return BATCHLOG_REPLAY_TIMEOUT; // enough time for the actual write + 
BM removal mutation
 }
 
 private static class ReplayingBatch
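As an aside on the hunk above: {{Long.getLong(key, def)}} returns the named system property parsed as a long, falling back to {{def}} when the property is unset or unparsable, which is what makes the replay timeout overridable via {{-Dcassandra.batchlog.replay_timeout_in_ms}}. A minimal sketch of that pattern (the property name comes from the patch; the 2000 ms write-RPC timeout stand-in is illustrative, not Cassandra's actual default):

```java
// Sketch of the override pattern introduced in BatchlogManager above.
// Long.getLong(key, def) falls back to 'def' when the property is unset
// or is not a valid long.
public class ReplayTimeoutDemo {
    // Illustrative stand-in for DatabaseDescriptor.getWriteRpcTimeout()
    private static final long WRITE_RPC_TIMEOUT_MS = 2000L;

    public static long batchlogReplayTimeout() {
        return Long.getLong("cassandra.batchlog.replay_timeout_in_ms",
                            WRITE_RPC_TIMEOUT_MS * 2);
    }

    public static void main(String[] args) {
        // Property unset: default of 2 * write RPC timeout.
        System.out.println(batchlogReplayTimeout()); // 4000
        // Operator override, as via -Dcassandra.batchlog.replay_timeout_in_ms=10000
        System.setProperty("cassandra.batchlog.replay_timeout_in_ms", "10000");
        System.out.println(batchlogReplayTimeout()); // 10000
    }
}
```

Note that a malformed property value silently falls back to the default rather than failing startup; that is {{Long.getLong}}'s documented behavior, not something the patch adds.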

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
--
diff --git a/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java 
b/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
index ac44923..a1477e6 100644
--- a/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
+++ b/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
@@ -50,7 +50,7 @@ public class BatchlogResponseHandler extends 
AbstractWriteResponseHandler
 {
 wrapped.response(msg);
 if (requiredBeforeFinishUpdater.decrementAndGet(this) == 0)
-cleanup.run();
+cleanup.ackMutation();
 }
 
 public boolean isLatencyForSnitch()
@@ -107,7 +107,7 @@ public class BatchlogResponseHandler extends 
AbstractWriteResponseHandler
 this.callback = callback;
 }
 
-public void run()
+public void ackMutation()
 {
 if (mutationsWaitingForUpdater.decrementAndGet(this) == 0)
 callback.invoke();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 6610cf7..1ce1bc5 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -729,15 +729,16 @@ public class StorageProxy implements StorageProxyMBean
 

[1/6] cassandra git commit: Fix pending view mutations handling and cleanup batchlog when there are local and remote paired mutations

2017-09-12 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 e86bef439 -> 12103653f
  refs/heads/cassandra-3.11 b64a4e4f6 -> eb027a1de
  refs/heads/trunk 471835815 -> 826c9f4a6


Fix pending view mutations handling and cleanup batchlog when there are local 
and remote paired mutations

Patch by Paulo Motta; Reviewed by Sylvain Lebresne for CASSANDRA-13069


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12103653
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12103653
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12103653

Branch: refs/heads/cassandra-3.0
Commit: 12103653f313d6f1ef030a535986123ddcffea9c
Parents: e86bef4
Author: Paulo Motta 
Authored: Wed Dec 21 20:19:21 2016 -0200
Committer: Paulo Motta 
Committed: Tue Sep 12 08:30:24 2017 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/batchlog/BatchlogManager.java |  3 +-
 .../service/BatchlogResponseHandler.java|  4 +-
 .../apache/cassandra/service/StorageProxy.java  | 82 +---
 4 files changed, 41 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d155e..26b1794 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Fix pending view mutations handling and cleanup batchlog when there are 
local and remote paired mutations (CASSANDRA-13069)
  * Range deletes in a CAS batch are ignored (CASSANDRA-13655)
  * Change repair midpoint logging for tiny ranges (CASSANDRA-13603)
  * Better handle corrupt final commitlog segment (CASSANDRA-11995)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java 
b/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
index b614fc5..a0b614f 100644
--- a/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/batchlog/BatchlogManager.java
@@ -67,6 +67,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
 private static final Logger logger = 
LoggerFactory.getLogger(BatchlogManager.class);
 public static final BatchlogManager instance = new BatchlogManager();
+public static final long BATCHLOG_REPLAY_TIMEOUT = Long.getLong("cassandra.batchlog.replay_timeout_in_ms", DatabaseDescriptor.getWriteRpcTimeout() * 2);
 
 private volatile long totalBatchesReplayed = 0; // no concurrency 
protection necessary as only written by replay thread.
 private volatile UUID lastReplayedUuid = UUIDGen.minTimeUUID(0);
@@ -284,7 +285,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
 public static long getBatchlogTimeout()
 {
-return DatabaseDescriptor.getWriteRpcTimeout() * 2; // enough time for 
the actual write + BM removal mutation
+return BATCHLOG_REPLAY_TIMEOUT; // enough time for the actual write + 
BM removal mutation
 }
 
 private static class ReplayingBatch

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
--
diff --git a/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java 
b/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
index ac44923..a1477e6 100644
--- a/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
+++ b/src/java/org/apache/cassandra/service/BatchlogResponseHandler.java
@@ -50,7 +50,7 @@ public class BatchlogResponseHandler extends 
AbstractWriteResponseHandler
 {
 wrapped.response(msg);
 if (requiredBeforeFinishUpdater.decrementAndGet(this) == 0)
-cleanup.run();
+cleanup.ackMutation();
 }
 
 public boolean isLatencyForSnitch()
@@ -107,7 +107,7 @@ public class BatchlogResponseHandler extends 
AbstractWriteResponseHandler
 this.callback = callback;
 }
 
-public void run()
+public void ackMutation()
 {
 if (mutationsWaitingForUpdater.decrementAndGet(this) == 0)
 callback.invoke();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/12103653/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 6610cf7..1ce1bc5 100644
--- 

cassandra-dtest git commit: Add tests for CASSANDRA-13069

2017-09-12 Thread paulo
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 40f13658c -> c39a85c3e


Add tests for CASSANDRA-13069


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/c39a85c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/c39a85c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/c39a85c3

Branch: refs/heads/master
Commit: c39a85c3e7b2869ef3fffafe1380362d6469919d
Parents: 40f1365
Author: Paulo Motta 
Authored: Thu Aug 24 00:55:12 2017 -0500
Committer: Paulo Motta 
Committed: Tue Sep 12 08:36:12 2017 -0500

--
 byteman/fail_after_view_write.btm  |  8 +++
 byteman/fail_before_view_write.btm |  8 +++
 materialized_views_test.py | 90 -
 3 files changed, 104 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/c39a85c3/byteman/fail_after_view_write.btm
--
diff --git a/byteman/fail_after_view_write.btm 
b/byteman/fail_after_view_write.btm
new file mode 100644
index 0000000..b7f68b3
--- /dev/null
+++ b/byteman/fail_after_view_write.btm
@@ -0,0 +1,8 @@
+RULE Die before applying base mutation
+CLASS org.apache.cassandra.db.view.TableViews
+METHOD pushViewReplicaUpdates
+AT EXIT
+IF callerEquals("applyInternal")
+DO
+  throw new RuntimeException("Dummy failure");
+ENDRULE

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/c39a85c3/byteman/fail_before_view_write.btm
--
diff --git a/byteman/fail_before_view_write.btm 
b/byteman/fail_before_view_write.btm
new file mode 100644
index 0000000..963fc7c
--- /dev/null
+++ b/byteman/fail_before_view_write.btm
@@ -0,0 +1,8 @@
+RULE Die before applying base mutation
+CLASS org.apache.cassandra.db.view.TableViews
+METHOD pushViewReplicaUpdates
+AT ENTRY
+IF callerEquals("applyInternal")
+DO
+  throw new RuntimeException("Dummy failure");
+ENDRULE

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/c39a85c3/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 637124d..60dea68 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -7,7 +7,7 @@ from functools import partial
 from multiprocessing import Process, Queue
 from unittest import skip, skipIf
 
-from cassandra import ConsistencyLevel
+from cassandra import ConsistencyLevel, WriteFailure
 from cassandra.cluster import NoHostAvailable
 from cassandra.concurrent import execute_concurrent_with_args
 from cassandra.cluster import Cluster
@@ -22,6 +22,7 @@ from dtest import Tester, debug, get_ip_from_node, create_ks
 from tools.assertions import (assert_all, assert_crc_check_chance_equal,
   assert_invalid, assert_none, assert_one,
   assert_unavailable)
+from tools.data import rows_to_list
 from tools.decorators import since
 from tools.misc import new_node
 from tools.jmxutils import (JolokiaAgent, make_mbean, 
remove_perf_disable_shared_mem)
@@ -124,10 +125,14 @@ class TestMaterializedViews(Tester):
 self._settle_nodes()
 
 def _replay_batchlogs(self):
-debug("Replaying batchlog on all nodes")
 for node in self.cluster.nodelist():
 if node.is_running():
+debug("Replaying batchlog on node {}".format(node.name))
 node.nodetool("replaybatchlog")
+# CASSANDRA-13069 - Ensure replayed mutations are removed from 
the batchlog
+node_session = self.patient_exclusive_cql_connection(node)
+result = list(node_session.execute("SELECT count(*) FROM 
system.batches;"))
+self.assertEqual(result[0].count, 0)
 
 def create_test(self):
 """Test the materialized view creation"""
@@ -1873,6 +1878,87 @@ class TestMaterializedViews(Tester):
 # node3 should have received and ignored the creation of the MV over 
the dropped table
 self.assertTrue(node3.grep_log('Not adding view users_by_state because 
the base table'))
 
+def base_view_consistency_on_failure_after_mv_apply_test(self):
+self._test_base_view_consistency_on_crash("after")
+
+def base_view_consistency_on_failure_before_mv_apply_test(self):
+self._test_base_view_consistency_on_crash("before")
+
+def _test_base_view_consistency_on_crash(self, fail_phase):
+"""
+ * Fails base table write before or after applying views
+ * Restart node and replay commit and batchlog
+ * Check that base and views are present
+
+ @jira_ticket 

[jira] [Comment Edited] (CASSANDRA-13619) java.nio.BufferOverflowException: null while flushing hints

2017-09-12 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162780#comment-16162780
 ] 

Marcus Eriksson edited comment on CASSANDRA-13619 at 9/12/17 10:47 AM:
---

In {{PartitionUpdate}}, {{isBuilt}} is non-volatile, and it is set once the 
{{Holder}} ref has been updated. {{build()}} is synchronized but 
{{maybeBuild()}} where we check {{isBuilt}} is not. That means this is 
basically double checked locking, but since {{isBuilt}} is not volatile, the 
assignments in {{build()}} can be reordered, making {{isBuilt}} true before 
{{holder}} is assigned.

It stops reproducing if I set {{isBuilt = this.holder != null}} instead of 
{{isBuilt = true}} to make sure that {{holder}} is set before {{isBuilt}} but 
making {{isBuilt}} volatile should be the correct solution.


was (Author: krummas):
In {{PartitionUpdate}}, {{isBuilt}} is non-volatile, and it is set once the 
{{Holder}} ref has been updated. {{build()}} is synchronized but 
{{maybeBuild()}} where we check {{isBuilt}} is not. That means this is 
basically double checked locking, but since {{isBuilt}} is not volatile, the 
assignments in {{build()}} can be reordered, making {{isBuilt}} set before 
{{holder} is assigned.

It stops reproducing if I set {{isBuilt = this.holder != null}} instead of 
{{isBuilt = true}} to make sure that {{holder}} is set before {{isBuilt}} but 
making {{isBuilt}} volatile should be the correct solution.
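The broken double-checked locking described above can be sketched as follows. This is a simplified stand-in for {{PartitionUpdate}}, with illustrative class and field names; without {{volatile}} on {{isBuilt}}, the Java memory model permits the write {{isBuilt = true}} to become visible before the write to {{holder}}, so an unsynchronized reader could see {{isBuilt == true}} while {{holder}} is still null:

```java
// Minimal stand-in for the PartitionUpdate pattern discussed above
// (names are illustrative, not the actual Cassandra code).
public class LazyBuild {
    private Object holder;            // assigned only under the lock in build()
    private volatile boolean isBuilt; // volatile: the holder write happens-before
                                      // any read that observes isBuilt == true

    public Object maybeBuild() {
        if (!isBuilt)   // unsynchronized fast-path check
            build();
        return holder;
    }

    private synchronized void build() {
        if (isBuilt)    // second check, under the lock
            return;
        holder = new Object();
        isBuilt = true; // published last; without volatile this store could
                        // be reordered ahead of the holder assignment
    }
}
```

With the {{volatile}} keyword removed, the race is real but hard to trigger deterministically, which matches the "stops reproducing" observation above: {{isBuilt = this.holder != null}} merely adds a data dependency, while {{volatile}} gives the required happens-before guarantee.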

> java.nio.BufferOverflowException: null while flushing hints
> ---
>
> Key: CASSANDRA-13619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13619
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Core
>Reporter: Milan Milosevic
>Assignee: Marcus Eriksson
>
> I'm seeing the following exception running Cassandra 3.0.11 on 21 node 
> cluster in two AWS regions when half of the nodes in one region go down, and 
> the load is high on the rest of the nodes:
> {code}
> WARN  [SharedPool-Worker-10] 2017-06-14 12:57:15,017 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-10,5,main]: {}
> java.lang.RuntimeException: java.nio.BufferOverflowException
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2549)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0-zing_17.03.1.0]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0-zing_17.03.1.0]
> Caused by: java.nio.BufferOverflowException: null
> at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:195)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:258)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.Columns$Serializer.serialize(Columns.java:405) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:407)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:120)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:625)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:305)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.Hint$Serializer.serialize(Hint.java:141) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> 

[jira] [Updated] (CASSANDRA-13619) java.nio.BufferOverflowException: null while flushing hints

2017-09-12 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13619:

Reviewer: Aleksey Yeschenko

> java.nio.BufferOverflowException: null while flushing hints
> ---
>
> Key: CASSANDRA-13619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13619
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Core
>Reporter: Milan Milosevic
>Assignee: Marcus Eriksson
>
> I'm seeing the following exception running Cassandra 3.0.11 on 21 node 
> cluster in two AWS regions when half of the nodes in one region go down, and 
> the load is high on the rest of the nodes:
> {code}
> WARN  [SharedPool-Worker-10] 2017-06-14 12:57:15,017 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-10,5,main]: {}
> java.lang.RuntimeException: java.nio.BufferOverflowException
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2549)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0-zing_17.03.1.0]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0-zing_17.03.1.0]
> Caused by: java.nio.BufferOverflowException: null
> at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:195)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:258)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.Columns$Serializer.serialize(Columns.java:405) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:407)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:120)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:625)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:305)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.Hint$Serializer.serialize(Hint.java:141) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.HintsBuffer$Allocation.write(HintsBuffer.java:251) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.HintsBuffer$Allocation.write(HintsBuffer.java:230) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.HintsBufferPool.write(HintsBufferPool.java:61) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.HintsService.write(HintsService.java:154) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.service.StorageProxy$11.runMayThrow(StorageProxy.java:2627)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2545)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> ... 5 common frames omitted
> {code}
> Relevant configurations from cassandra.yaml:
> {code}
> -cassandra_hinted_handoff_throttle_in_kb: 1024
>  cassandra_max_hints_delivery_threads: 4
> -cassandra_hints_flush_period_in_ms: 1
> -cassandra_max_hints_file_size_in_mb: 512
> {code}
> When I reduce -cassandra_hints_flush_period_in_ms: 1 to 5000, the number 
> of exceptions lowers significantly, but they are still present.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CASSANDRA-13619) java.nio.BufferOverflowException: null while flushing hints

2017-09-12 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162780#comment-16162780
 ] 

Marcus Eriksson commented on CASSANDRA-13619:
-

In {{PartitionUpdate}}, {{isBuilt}} is non-volatile, and it is set once the 
{{Holder}} ref has been updated. {{build()}} is synchronized but 
{{maybeBuild()}} where we check {{isBuilt}} is not. That means this is 
basically double checked locking, but since {{isBuilt}} is not volatile, the 
assignments in {{build()}} can be reordered, making {{isBuilt}} true before 
{{holder}} is assigned.

It stops reproducing if I set {{isBuilt = this.holder != null}} instead of 
{{isBuilt = true}} to make sure that {{holder}} is set before {{isBuilt}} but 
making {{isBuilt}} volatile should be the correct solution.

> java.nio.BufferOverflowException: null while flushing hints
> ---
>
> Key: CASSANDRA-13619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13619
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Core
>Reporter: Milan Milosevic
>Assignee: Marcus Eriksson
>
> I'm seeing the following exception running Cassandra 3.0.11 on 21 node 
> cluster in two AWS regions when half of the nodes in one region go down, and 
> the load is high on the rest of the nodes:
> {code}
> WARN  [SharedPool-Worker-10] 2017-06-14 12:57:15,017 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-10,5,main]: {}
> java.lang.RuntimeException: java.nio.BufferOverflowException
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2549)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0-zing_17.03.1.0]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0-zing_17.03.1.0]
> Caused by: java.nio.BufferOverflowException: null
> at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:195)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:258)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.Columns$Serializer.serialize(Columns.java:405) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:407)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:120)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:625)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:305)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.Hint$Serializer.serialize(Hint.java:141) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.HintsBuffer$Allocation.write(HintsBuffer.java:251) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.HintsBuffer$Allocation.write(HintsBuffer.java:230) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.HintsBufferPool.write(HintsBufferPool.java:61) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.hints.HintsService.write(HintsService.java:154) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.service.StorageProxy$11.runMayThrow(StorageProxy.java:2627)
>  ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> 

[jira] [Commented] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-09-12 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162777#comment-16162777
 ] 

Marcus Eriksson commented on CASSANDRA-13339:
-

Thanks everyone, closing as dupe of CASSANDRA-13619, same bug

> java.nio.BufferOverflowException: null
> --
>
> Key: CASSANDRA-13339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13339
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Richards
>
> I'm seeing the following exception running Cassandra 3.9 (with Netty updated 
> to 4.1.8.Final) running on a 2 node cluster.  It would have been processing 
> around 50 queries/second at the time (mixture of 
> inserts/updates/selects/deletes) : there's a collection of tables (some with 
> counters some without) and a single materialized view.
> {code}
> ERROR [MutationStage-4] 2017-03-15 22:50:33,052 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}
> and then again shortly afterwards
> {code}
> ERROR [MutationStage-3] 2017-03-15 23:27:36,198 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  

[jira] [Resolved] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-09-12 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-13339.
-
Resolution: Duplicate

> java.nio.BufferOverflowException: null
> --
>
> Key: CASSANDRA-13339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13339
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Richards
>
> I'm seeing the following exception running Cassandra 3.9 (with Netty updated 
> to 4.1.8.Final) on a 2-node cluster.  It would have been processing 
> around 50 queries/second at the time (mixture of 
> inserts/updates/selects/deletes) : there's a collection of tables (some with 
> counters some without) and a single materialized view.
> {code}
> ERROR [MutationStage-4] 2017-03-15 22:50:33,052 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}
> and then again shortly afterwards
> {code}
> ERROR [MutationStage-3] 2017-03-15 23:27:36,198 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> 

[jira] [Updated] (CASSANDRA-13847) test failure in cqlsh_tests.cqlsh_tests.CqlLoginTest.test_list_roles_after_login

2017-09-12 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13847:
-
Status: Ready to Commit  (was: Patch Available)

> test failure in 
> cqlsh_tests.cqlsh_tests.CqlLoginTest.test_list_roles_after_login
> 
>
> Key: CASSANDRA-13847
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13847
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Joel Knighton
>Assignee: Andrés de la Peña
>  Labels: test-failure
> Fix For: 2.1.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/546/testReport/cqlsh_tests.cqlsh_tests/CqlLoginTest/test_list_roles_after_login
> This test was added for [CASSANDRA-13640]. The comments seem to indicate 
> this is only a problem on 3.0+, but the added test certainly seems to 
> reproduce the problem on 2.1 and 2.2. Even if the issue does affect 2.1/2.2, 
> it seems insufficiently critical for 2.1, so we need to limit the test to run 
> on 2.2+ at the very least, possibly 3.0+ if we don't fix the cause on 2.2.
> Thoughts [~adelapena]?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13847) test failure in cqlsh_tests.cqlsh_tests.CqlLoginTest.test_list_roles_after_login

2017-09-12 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162734#comment-16162734
 ] 

ZhaoYang commented on CASSANDRA-13847:
--

+1 for the change. Sorry for the oversight last time.

> test failure in 
> cqlsh_tests.cqlsh_tests.CqlLoginTest.test_list_roles_after_login
> 
>
> Key: CASSANDRA-13847
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13847
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Joel Knighton
>Assignee: Andrés de la Peña
>  Labels: test-failure
> Fix For: 2.1.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/546/testReport/cqlsh_tests.cqlsh_tests/CqlLoginTest/test_list_roles_after_login
> This test was added for [CASSANDRA-13640]. The comments seem to indicate 
> this is only a problem on 3.0+, but the added test certainly seems to 
> reproduce the problem on 2.1 and 2.2. Even if the issue does affect 2.1/2.2, 
> it seems insufficiently critical for 2.1, so we need to limit the test to run 
> on 2.2+ at the very least, possibly 3.0+ if we don't fix the cause on 2.2.
> Thoughts [~adelapena]?






[jira] [Comment Edited] (CASSANDRA-13595) Short read protection doesn't work at the end of a partition

2017-09-12 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162720#comment-16162720
 ] 

ZhaoYang edited comment on CASSANDRA-13595 at 9/12/17 9:24 AM:
---

The cause is that no short-read protection is generated for node2 with key=2, 
because node2 returns no UnfilteredRowIterator with key=2...

{code}
For initial read:
Node1 returns:  
   PartitionIterator {
UnfilteredIterator( k=1@tombstone, back by 
short-read-protection)  
UnfilteredIterator( k=2, back by short-read-protection)  
  }
Node2 returns: 
   PartitionIterator {
UnfilteredIterator( k=1, back by short-read-protection) 
  }
{code}




was (Author: jasonstack):
The cause is that no short-read protection is generated for node2 with key=2, 
because node2 returns no UnfilteredRowIterator with key=2...

{code}
For initial read:
Node1 returns:  
   PartitionIterator {
UnfilteredIterator( k=1@tombstone, back by 
short-read-protection)  
UnfilteredIterator( k=2, back by short-read-protection)  
  }
Node2 returns: 
   PartitionIterator {
UnfilteredIterator( k=1, back by short-read-protection) 
  }
{code}

I think in this case, we should expect paging to fetch the next partition instead 
of short-read protection, which seems to work only within a partition, not across 
partitions.

> Short read protection doesn't work at the end of a partition
> 
>
> Key: CASSANDRA-13595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13595
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Andrés de la Peña
>Assignee: ZhaoYang
>
> It seems that short read protection doesn't work when the short read is done 
> at the end of a partition in a range query. The final assertion of this dtest 
> fails:
> {code}
> def short_read_partitions_delete_test(self):
> cluster = self.cluster
> cluster.set_configuration_options(values={'hinted_handoff_enabled': 
> False})
> cluster.set_batch_commitlog(enabled=True)
> cluster.populate(2).start(wait_other_notice=True)
> node1, node2 = self.cluster.nodelist()
> session = self.patient_cql_connection(node1)
> create_ks(session, 'ks', 2)
> session.execute("CREATE TABLE t (k int, c int, PRIMARY KEY(k, c)) 
> WITH read_repair_chance = 0.0")
> # we write 1 and 2 in a partition: all nodes get it.
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (1, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (2, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> # we delete partition 1: only node 1 gets it.
> node2.flush()
> node2.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 1"))
> node2.start(wait_other_notice=True)
> # we delete partition 2: only node 2 gets it.
> node1.flush()
> node1.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node2, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 2"))
> node1.start(wait_other_notice=True)
> # read from both nodes
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ALL)
> assert_none(session, "SELECT * FROM t LIMIT 1")
> {code}
> However, the dtest passes if we remove the {{LIMIT 1}}.
> Short read protection [uses a 
> {{SinglePartitionReadCommand}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DataResolver.java#L484],
>  maybe it should use a {{PartitionRangeReadCommand}} instead?






[jira] [Commented] (CASSANDRA-13595) Short read protection doesn't work at the end of a partition

2017-09-12 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162720#comment-16162720
 ] 

ZhaoYang commented on CASSANDRA-13595:
--

The cause is that no short-read protection is generated for node2 with key=2, 
because node2 returns no UnfilteredRowIterator with key=2...

{code}
For initial read:
Node1 returns:  
   PartitionIterator {
UnfilteredIterator( k=1@tombstone, back by 
short-read-protection)  
UnfilteredIterator( k=2, back by short-read-protection)  
  }
Node2 returns: 
   PartitionIterator {
UnfilteredIterator( k=1, back by short-read-protection) 
  }
{code}

I think in this case, we should expect paging to fetch the next partition instead 
of short-read protection, which seems to work only within a partition, not across 
partitions.
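The gap can be sketched as follows (illustrative only — the names and data shapes are invented, not Cassandra's actual types): short-read protection wraps each partition iterator a replica actually returned, so a replica that never returned a partition can never be asked for more rows of it.

```python
# Illustrative sketch of the short-read-protection (SRP) gap described
# above. SRP is attached per partition iterator that a replica returned,
# so a partition absent from a replica's response has no SRP wrapper there.
node_responses = {
    "node1": [1, 2],  # node1 returned iterators for k=1 (tombstone) and k=2
    "node2": [1],     # node2 returned an iterator only for k=1
}

# Partitions for which each node could ever issue an SRP follow-up read
srp_candidates = {node: set(keys) for node, keys in node_responses.items()}

# k=2 appears only in node1's response: node2 is never re-queried for it,
# so its deletion of partition 2 goes unseen under LIMIT 1.
unprotected_on_node2 = srp_candidates["node1"] - srp_candidates["node2"]
print(unprotected_on_node2)  # {2}
```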

> Short read protection doesn't work at the end of a partition
> 
>
> Key: CASSANDRA-13595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13595
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Andrés de la Peña
>Assignee: ZhaoYang
>
> It seems that short read protection doesn't work when the short read is done 
> at the end of a partition in a range query. The final assertion of this dtest 
> fails:
> {code}
> def short_read_partitions_delete_test(self):
> cluster = self.cluster
> cluster.set_configuration_options(values={'hinted_handoff_enabled': 
> False})
> cluster.set_batch_commitlog(enabled=True)
> cluster.populate(2).start(wait_other_notice=True)
> node1, node2 = self.cluster.nodelist()
> session = self.patient_cql_connection(node1)
> create_ks(session, 'ks', 2)
> session.execute("CREATE TABLE t (k int, c int, PRIMARY KEY(k, c)) 
> WITH read_repair_chance = 0.0")
> # we write 1 and 2 in a partition: all nodes get it.
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (1, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (2, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> # we delete partition 1: only node 1 gets it.
> node2.flush()
> node2.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 1"))
> node2.start(wait_other_notice=True)
> # we delete partition 2: only node 2 gets it.
> node1.flush()
> node1.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node2, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 2"))
> node1.start(wait_other_notice=True)
> # read from both nodes
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ALL)
> assert_none(session, "SELECT * FROM t LIMIT 1")
> {code}
> However, the dtest passes if we remove the {{LIMIT 1}}.
> Short read protection [uses a 
> {{SinglePartitionReadCommand}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DataResolver.java#L484],
>  maybe it should use a {{PartitionRangeReadCommand}} instead?






[jira] [Commented] (CASSANDRA-10496) Make DTCS/TWCS split partitions based on time during compaction

2017-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162610#comment-16162610
 ] 

ASF GitHub Bot commented on CASSANDRA-10496:


Github user iksaif commented on the issue:

https://github.com/apache/cassandra/pull/147
  
* `switchCompactionLocation(..)`: got it, will update the code
* `Marcus' idea was to only create two sstables per bucket`: ok, I missed 
that. I'll make it work.
* `sstables generated before this patch`: I wanted to think about the 
upgrade strategy only once the other questions had been answered. If 
this gets shipped before 4.0 is released, this could be a non-issue.


> Make DTCS/TWCS split partitions based on time during compaction
> ---
>
> Key: CASSANDRA-10496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10496
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>  Labels: dtcs
> Fix For: 4.x
>
>
> To avoid getting old data in new time windows with DTCS (or related, like 
> [TWCS|CASSANDRA-9666]), we need to split out old data into its own sstable 
> during compaction.
> My initial idea is to just create two sstables, when we create the compaction 
> task we state the start and end times for the window, and any data older than 
> the window will be put in its own sstable.
> By creating a single sstable with old data, we will incrementally get the 
> windows correct - say we have an sstable with these timestamps:
> {{[100, 99, 98, 97, 75, 50, 10]}}
> and we are compacting in window {{[100, 80]}} - we would create two sstables:
> {{[100, 99, 98, 97]}}, {{[75, 50, 10]}}, and the first window is now 
> 'correct'. The next compaction would compact in window {{[80, 60]}} and 
> create sstables {{[75]}}, {{[50, 10]}} etc.
> We will probably also want to base the windows on the newest data in the 
> sstables so that we actually have older data than the window.
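The window split described in the ticket can be sketched as follows (a hedged, minimal illustration — function name and window convention are assumptions, not the proposed implementation):

```python
# Sketch of the two-sstable split: given a compaction window (newest, oldest],
# timestamps inside the window go to one sstable and anything older goes to
# its own sstable, so windows become incrementally "correct".
def split_by_window(timestamps, window):
    newest, oldest = window
    inside = [t for t in timestamps if oldest < t <= newest]
    older = [t for t in timestamps if t <= oldest]
    return inside, older

# The example from the ticket: timestamps [100, 99, 98, 97, 75, 50, 10]
# compacted in window (100, 80] ...
inside, older = split_by_window([100, 99, 98, 97, 75, 50, 10], (100, 80))
print(inside, older)    # [100, 99, 98, 97] [75, 50, 10]

# ... and the next compaction in window (80, 60] splits the old sstable again.
inside2, older2 = split_by_window(older, (80, 60))
print(inside2, older2)  # [75] [50, 10]
```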






[jira] [Comment Edited] (CASSANDRA-13835) Thrift get_slice responds slower on Cassandra 3

2017-09-12 Thread Pawel Szlendak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162606#comment-16162606
 ] 

Pawel Szlendak edited comment on CASSANDRA-13835 at 9/12/17 7:34 AM:
-

I was able to identify that the slower response times were introduced in 
Cassandra version 2.1 by the implementation of CASSANDRA-4718.

With the identified change, Cassandra 2.1 responds even slower than 3.10:
{noformat}
PS C:\Users\pszlendak\Documents\TRAX-6984> python attack.py
get_slice count: 788
get_slice total response time: 12.4009997845
get_slice average response time: 0.0157373093712
{noformat}

Fortunately, in Cassandra 2.2 setting the Windows clock frequency to 1 ms was 
introduced (CASSANDRA-9634), which partially mitigated the influence of 
CASSANDRA-4718, giving us the times originally reported in this ticket for 
Cassandra 3.10 (still slower than 1.2).






was (Author: pszlendak):
I was able to identify that the slower response times were introduced in 
Cassandra version 2.1 by the implementation of CASSANDRA-4718.

With the identified change, Cassandra 2.1 responds even slower than 3.10:
{noformat}
PS C:\Users\pszlendak\Documents\TRAX-6984> python attack.py
get_slice count: 788
get_slice total response time: 12.4009997845
get_slice average response time: 0.0157373093712
{noformat}

Fortunately, in Cassandra 2.2 setting the Windows clock frequency to 1 ms was 
introduced (CASSANDRA-9634), which partially mitigated the influence of 
CASSANDRA-4718, giving us the times originally reported in this ticket for 
Cassandra 3.10.





> Thrift get_slice responds slower on Cassandra 3
> ---
>
> Key: CASSANDRA-13835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13835
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pawel Szlendak
> Attachments: attack.py, cassandra120_get_slice_reply_time.png, 
> cassandra310_get_slice_reply_time.png
>
>
> I have recently upgraded from Cassandra 1.2.18 to Cassandra 3.10 and was 
> surprised to notice performance degradation of my server application.
> I dug down through my application stack only to find out that the cause of 
> the performance issue was slower response time of Cassandra 3.10 get_slice as 
> compared to Cassandra 1.2.18 (almost x3 times slower on average).
> I am attaching a python script (attack.py) here that can be used to reproduce 
> this issue on a Windows platform. The script uses the pycassa python library 
> that can easily be installed using pip.
> REPRODUCTION STEPS:
> 1. Install Cassandra 1.2.18 from 
> https://archive.apache.org/dist/cassandra/1.2.18/apache-cassandra-1.2.18-bin.tar.gz
> 2. Run Cassandra 1.2.18 from cmd console using cassandra.bat
> 3. Create a test keyspace and an empty CF using attack.py script
>
> {noformat}
> python attack.py create
> {noformat}
> 4. Run some get_slice queries to an empty CF and note down the average 
> response time (in seconds)
>
> {noformat}
> python attack.py
> {noformat}
>get_slice count: 788
>get_slice total response time: 0.3126376
>*get_slice average response time: 0.000397208075838*
> 5. Stop Cassandra 1.2.18 and install Cassandra 3.10 from 
> https://archive.apache.org/dist/cassandra/3.10/apache-cassandra-3.10-bin.tar.gz
> 6. Tweak cassandra.yaml to run thrift service (start_rpc=true) and run 
> Cassandra from an elevated cmd console using cassandra.bat
> 7. Create a test keyspace and an empty CF using attack.py script
>
> {noformat}
> python attack.py create
> {noformat}
> 8. Run some get_slice queries to an empty CF using attack.py and note down 
> the average response time (in seconds)
> {noformat}
> python attack.py
> {noformat}
>get_slice count: 788
>get_slice total response time: 1.1646185
>*get_slice average response time: 0.00147842634753*
> 9. Compare the average response times
> EXPECTED:
>get_slice response time of Cassandra 3.10 is not worse than on Cassandra 
> 1.2.18
> ACTUAL:
>get_slice response time of Cassandra 3.10 is x3 worse than that of 
> Cassandra 1.2.18
> REMARKS:
> - this seems to happen only on Windows platform (tested on Windows 10 and 
> Windows Server 2008 R2)
> - running the very same procedure on Linux (Ubuntu) renders roughly the same 
> response times
> - I sniffed the traffic to/from Cassandra 1.2.18 and Cassandra 3.10 and it 
> can be seen that Cassandra 3.10 responds slower (Wireshark dumps attached)
> - when attacking the server with concurrent get_slice queries I can see lower 
> CPU usage for Cassandra 3.10 than for Cassandra 1.2.18
> - get_slice in attack.py queries the column family for a non-existing key (the 
> column family is empty)
> I am willing to work on this on my own if you give me some tips on where 
> to look. I am also aware that this might be more Windows/Java related, 
> nevertheless, any help from 

[jira] [Commented] (CASSANDRA-13835) Thrift get_slice responds slower on Cassandra 3

2017-09-12 Thread Pawel Szlendak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162606#comment-16162606
 ] 

Pawel Szlendak commented on CASSANDRA-13835:


I was able to identify that the slower response times were introduced in 
Cassandra version 2.1 by the implementation of CASSANDRA-4718.

With the identified change, Cassandra 2.1 responds even slower than 3.10:
{noformat}
PS C:\Users\pszlendak\Documents\TRAX-6984> python attack.py
get_slice count: 788
get_slice total response time: 12.4009997845
get_slice average response time: 0.0157373093712
{noformat}

Fortunately, in Cassandra 2.2 setting the Windows clock frequency to 1 ms was 
introduced (CASSANDRA-9634), which partially mitigated the influence of 
CASSANDRA-4718, giving us the times originally reported in this ticket for 
Cassandra 3.10.
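The numbers above can be reproduced with a generic timing loop of the kind attack.py presumably runs (this is an illustrative harness, not the attached script; the measured callable and counts are placeholders):

```python
import time

def measure(fn, count):
    """Run fn `count` times and return (total_seconds, average_seconds)."""
    total = 0.0
    for _ in range(count):
        start = time.perf_counter()   # monotonic clock, suitable for benchmarks
        fn()                          # e.g. one get_slice call against an empty CF
        total += time.perf_counter() - start
    return total, total / count

# Placeholder workload; attack.py would issue 788 get_slice requests here.
total, avg = measure(lambda: None, 788)
```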





> Thrift get_slice responds slower on Cassandra 3
> ---
>
> Key: CASSANDRA-13835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13835
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pawel Szlendak
> Attachments: attack.py, cassandra120_get_slice_reply_time.png, 
> cassandra310_get_slice_reply_time.png
>
>
> I have recently upgraded from Cassandra 1.2.18 to Cassandra 3.10 and was 
> surprised to notice performance degradation of my server application.
> I dug down through my application stack only to find out that the cause of 
> the performance issue was slower response time of Cassandra 3.10 get_slice as 
> compared to Cassandra 1.2.18 (almost x3 times slower on average).
> I am attaching a python script (attack.py) here that can be used to reproduce 
> this issue on a Windows platform. The script uses the pycassa python library 
> that can easily be installed using pip.
> REPRODUCTION STEPS:
> 1. Install Cassandra 1.2.18 from 
> https://archive.apache.org/dist/cassandra/1.2.18/apache-cassandra-1.2.18-bin.tar.gz
> 2. Run Cassandra 1.2.18 from cmd console using cassandra.bat
> 3. Create a test keyspace and an empty CF using attack.py script
>
> {noformat}
> python attack.py create
> {noformat}
> 4. Run some get_slice queries to an empty CF and note down the average 
> response time (in seconds)
>
> {noformat}
> python attack.py
> {noformat}
>get_slice count: 788
>get_slice total response time: 0.3126376
>*get_slice average response time: 0.000397208075838*
> 5. Stop Cassandra 1.2.18 and install Cassandra 3.10 from 
> https://archive.apache.org/dist/cassandra/3.10/apache-cassandra-3.10-bin.tar.gz
> 6. Tweak cassandra.yaml to run thrift service (start_rpc=true) and run 
> Cassandra from an elevated cmd console using cassandra.bat
> 7. Create a test keyspace and an empty CF using attack.py script
>
> {noformat}
> python attack.py create
> {noformat}
> 8. Run some get_slice queries to an empty CF using attack.py and note down 
> the average response time (in seconds)
> {noformat}
> python attack.py
> {noformat}
>get_slice count: 788
>get_slice total response time: 1.1646185
>*get_slice average response time: 0.00147842634753*
> 9. Compare the average response times
> EXPECTED:
>get_slice response time of Cassandra 3.10 is not worse than on Cassandra 
> 1.2.18
> ACTUAL:
>get_slice response time of Cassandra 3.10 is x3 worse than that of 
> Cassandra 1.2.18
> REMARKS:
> - this seems to happen only on Windows platform (tested on Windows 10 and 
> Windows Server 2008 R2)
> - running the very same procedure on Linux (Ubuntu) renders roughly the same 
> response times
> - I sniffed the traffic to/from Cassandra 1.2.18 and Cassandra 3.10 and it 
> can be seen that Cassandra 3.10 responds slower (Wireshark dumps attached)
> - when attacking the server with concurrent get_slice queries I can see lower 
> CPU usage for Cassandra 3.10 than for Cassandra 1.2.18
> - get_slice in attack.py queries the column family for a non-existing key (the 
> column family is empty)
> I am willing to work on this on my own if you give me some tips on where 
> to look. I am also aware that this might be more Windows/Java related, 
> nevertheless, any help from your side would be much appreciated.






[jira] [Assigned] (CASSANDRA-13595) Short read protection doesn't work at the end of a partition

2017-09-12 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-13595:


Assignee: ZhaoYang

> Short read protection doesn't work at the end of a partition
> 
>
> Key: CASSANDRA-13595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13595
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Andrés de la Peña
>Assignee: ZhaoYang
>
> It seems that short read protection doesn't work when the short read is done 
> at the end of a partition in a range query. The final assertion of this dtest 
> fails:
> {code}
> def short_read_partitions_delete_test(self):
> cluster = self.cluster
> cluster.set_configuration_options(values={'hinted_handoff_enabled': 
> False})
> cluster.set_batch_commitlog(enabled=True)
> cluster.populate(2).start(wait_other_notice=True)
> node1, node2 = self.cluster.nodelist()
> session = self.patient_cql_connection(node1)
> create_ks(session, 'ks', 2)
> session.execute("CREATE TABLE t (k int, c int, PRIMARY KEY(k, c)) 
> WITH read_repair_chance = 0.0")
> # we write 1 and 2 in a partition: all nodes get it.
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (1, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (2, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> # we delete partition 1: only node 1 gets it.
> node2.flush()
> node2.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 1"))
> node2.start(wait_other_notice=True)
> # we delete partition 2: only node 2 gets it.
> node1.flush()
> node1.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node2, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 2"))
> node1.start(wait_other_notice=True)
> # read from both nodes
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ALL)
> assert_none(session, "SELECT * FROM t LIMIT 1")
> {code}
> However, the dtest passes if we remove the {{LIMIT 1}}.
> Short read protection [uses a 
> {{SinglePartitionReadCommand}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DataResolver.java#L484],
>  maybe it should use a {{PartitionRangeReadCommand}} instead?
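To see why a retry scoped to a single partition cannot recover the missing tombstone, here is a minimal self-contained model of the failing scenario (plain Python with hypothetical helper names, not Cassandra's actual resolver code): each replica stops sending after {{LIMIT}} live rows, so node2 never ships its partition-2 tombstone, and a follow-up read restricted to the partition where node2's response ended can never fetch it.

```python
# Replica state after the dtest's writes and partition deletes:
#   node1 saw: DELETE partition 1; row (2, 1) is live
#   node2 saw: row (1, 1) is live; DELETE partition 2
node1 = {"tombstones": {1}, "rows": [(2, 1)]}
node2 = {"tombstones": {2}, "rows": [(1, 1)]}

def replica_page(node, limit):
    """A replica returns the partition tombstones it reaches plus up to
    `limit` live rows, in partition order -- it stops at the limit, so
    later partitions (and their tombstones) are never sent."""
    out, live = [], 0
    partitions = sorted(node["tombstones"] | {r[0] for r in node["rows"]})
    for k in partitions:
        if k in node["tombstones"]:
            out.append(("tombstone", k))
        for row in (r for r in node["rows"] if r[0] == k):
            out.append(("row", row))
            live += 1
            if live >= limit:
                return out
    return out

def merge(pages, limit):
    """Coordinator merge: a row survives only if no replica sent a
    tombstone covering its partition."""
    dead = {k for page in pages for kind, k in page if kind == "tombstone"}
    rows = sorted({r for page in pages for kind, r in page if kind == "row"})
    return [r for r in rows if r[0] not in dead][:limit]

pages = [replica_page(node1, 1), replica_page(node2, 1)]
print(merge(pages, 1))  # -> [(2, 1)]: node2's partition-2 tombstone was cut
                        # off by the limit, so a ghost row is returned
```

With the limit effectively removed (e.g. limit=10), node2's partition-2 tombstone does reach the coordinator and the merged result is empty, matching the observation that the dtest passes without the {{LIMIT 1}}; a short-read retry that could span the whole range would achieve the same.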



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13069) Local batchlog for MV may not be correctly written on node movements

2017-09-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162589#comment-16162589
 ] 

Sylvain Lebresne commented on CASSANDRA-13069:
--

bq. Resubmitted CI with the above change. Please let me know what you think.

Lgtm, +1 if CI is happy (or at least as happy as it usually is).

> Local batchlog for MV may not be correctly written on node movements
> 
>
> Key: CASSANDRA-13069
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13069
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Sylvain Lebresne
>Assignee: Paulo Motta
>
> Unless I'm really reading this wrong, I think the code 
> [here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageProxy.java#L829-L843],
>  which comes from CASSANDRA-10674, isn't working properly.
> More precisely, I believe we can have both paired and unpaired mutations, so 
> that both {{if}}s can be taken, but if that's the case, the 2nd write to the 
> batchlog will basically overwrite (remove) the batchlog write of the 1st 
> {{if}}, and I don't think that's the intention. In practice, this means 
> "paired" mutations won't be in the batchlog, which means they won't be 
> replayed at all if they fail.
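A toy model of that overwrite pattern (illustrative Python with hypothetical names, not the actual StorageProxy code): both {{if}} branches store under the same batch id, so the second store replaces the first batch's mutations rather than adding to them.

```python
import uuid

batchlog = {}  # batch_id -> list of mutations; keyed by batch id, last write wins

def store_batch(batch_id, mutations):
    # Stand-in for a batchlog write: a second write with the same id
    # replaces the earlier entry instead of appending to it.
    batchlog[batch_id] = list(mutations)

def mutate_mv(paired, unpaired):
    batch_id = uuid.uuid4()
    if paired:       # first `if`: batchlog write for paired-replica mutations
        store_batch(batch_id, paired)
    if unpaired:     # second `if`: batchlog write for unpaired mutations --
        store_batch(batch_id, unpaired)  # overwrites the paired entry!
    return batch_id

bid = mutate_mv(paired=["m1", "m2"], unpaired=["m3"])
print(batchlog[bid])  # -> ['m3']: paired mutations m1, m2 are no longer in
                      # the batchlog, so they would never be replayed on failure
```

The fix direction implied by the comment is to write all mutations in a single batchlog entry (or use distinct ids) so that neither branch clobbers the other.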






[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Status: Patch Available  (was: Open)

> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15, 4.0
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra, even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra that skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a 
> Cassandra lightweight transaction.
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> Same thing applies for {{Propose stage}} as well.
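The proposed fast path can be sketched as follows (a hypothetical Python model, not the actual StorageProxy code): when the target endpoint is the local node, invoke the verb handler directly instead of going through the serialize/send/receive round trip visible in the trace above.

```python
LOCAL = "A.B.C.D"  # placeholder for the local broadcast address

def handle_paxos_prepare(commit):
    # Stand-in for the real PAXOS_PREPARE verb handler.
    return ("promised", commit)

def send_prepare(target, commit, messaging_send):
    if target == LOCAL:
        # Local fast path: skip serialization, the outbound/inbound queues,
        # and the REQUEST_RESPONSE round trip entirely.
        return handle_paxos_prepare(commit)
    # Remote: the normal MessagingService path.
    return messaging_send(target, commit)
```

The same dispatch check would apply to the propose stage; the existing local-mutation path in StorageProxy (linked above) follows this pattern.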






[jira] [Comment Edited] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162572#comment-16162572
 ] 

Jaydeepkumar Chovatia edited comment on CASSANDRA-13862 at 9/12/17 6:40 AM:


Please find patch details here:

||Branch||uTest||
|[3.0 | 
https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:13862-3.0]|[circleci
 |https://circleci.com/gh/jaydeepkumar1984/cassandra/14]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:13862-trunk]|[circleci
 |https://circleci.com/gh/jaydeepkumar1984/cassandra/13]|



was (Author: chovatia.jayd...@gmail.com):
Please find patch details here:

||Branch||uTest||
|[3.0 | 
https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:13862-3.0]|[https://circleci.com/gh/jaydeepkumar1984/cassandra/14]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:13862-trunk]|[https://circleci.com/gh/jaydeepkumar1984/cassandra/13]|


> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15, 4.0
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra, even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra that skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a 
> Cassandra lightweight transaction.
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> Same thing applies for {{Propose stage}} as well.






[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Fix Version/s: 4.0

> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15, 4.0
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra, even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra that skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a 
> Cassandra lightweight transaction.
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> Same thing applies for {{Propose stage}} as well.






[jira] [Commented] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162572#comment-16162572
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-13862:
---

Please find patch details here:

||Branch||uTest||
|[3.0 | 
https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:13862-3.0]|[https://circleci.com/gh/jaydeepkumar1984/cassandra/14]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:13862-trunk]|[https://circleci.com/gh/jaydeepkumar1984/cassandra/13]|


> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra, even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra that skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a 
> Cassandra lightweight transaction.
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> Same thing applies for {{Propose stage}} as well.






[jira] [Commented] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-09-12 Thread Valera V. Kharseko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162570#comment-16162570
 ] 

Valera V. Kharseko commented on CASSANDRA-13339:


1.8 OpenJDK







> java.nio.BufferOverflowException: null
> --
>
> Key: CASSANDRA-13339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13339
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Richards
>
> I'm seeing the following exception running Cassandra 3.9 (with Netty updated 
> to 4.1.8.Final) on a 2-node cluster. It would have been processing 
> around 50 queries/second at the time (a mixture of 
> inserts/updates/selects/deletes): there's a collection of tables (some with 
> counters, some without) and a single materialized view.
> {code}
> ERROR [MutationStage-4] 2017-03-15 22:50:33,052 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}
> and then again shortly afterwards
> {code}
> ERROR [MutationStage-3] 2017-03-15 23:27:36,198 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> 

[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Description: 
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

{noformat}

Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] 
| 2017-09-11 21:55:18.971000 |  A.B.C.D |  15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372

{noformat}


Same thing applies for {{Propose stage}} as well.



  was:
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

{noformat}
Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] 
| 2017-09-11 21:55:18.971000 |  A.B.C.D |  15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
{noformat}


Same thing applies for Propose stage as well.




> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra, even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra that skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a 
> Cassandra lightweight transaction.
> {noformat}
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> {noformat}
> Same thing applies for {{Propose stage}} as well.





[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Description: 
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] 
| 2017-09-11 21:55:18.971000 |  A.B.C.D |  15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372



Same thing applies for {{Propose stage}} as well.



  was:
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

{noformat}

Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] 
| 2017-09-11 21:55:18.971000 |  A.B.C.D |  15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372

{noformat}


Same thing applies for {{Propose stage}} as well.




> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra, even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra that skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a 
> Cassandra lightweight transaction.
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> Same thing applies for {{Propose stage}} as well.




[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Description: 
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

{noformat}
Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] 
| 2017-09-11 21:55:18.971000 |  A.B.C.D |  15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
{noformat}


Same thing applies for Propose stage as well.



  was:
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

{noformat}
{{Sending PAXOS_PREPARE message to /A.B.C.D 
[MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D |  
15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}
{noformat}


Same thing applies for Propose stage as well.




> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra, even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra that skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a 
> Cassandra lightweight transaction.
> {noformat}
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
> {noformat}
> Same thing applies for Propose stage as well.




[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Description: 
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

{noformat}
{{Sending PAXOS_PREPARE message to /A.B.C.D 
[MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D |  
15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}
{noformat}


Same thing applies for Propose stage as well.



  was:
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

{{Sending}} PAXOS_PREPARE message to /A.B.C.D 
[MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D |  
15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}

Same thing applies for Propose stage as well.




> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15
>
>
> Currently, Paxos prepare and propose messages always go through the entire 
> MessagingService stack in Cassandra, even if the request is to be served 
> locally; we can enhance this and serve local requests without involving 
> MessagingService. Similar things are done in many [other places | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
>  in Cassandra that skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a 
> Cassandra lightweight transaction.
> {noformat}
> {{Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}
> {noformat}
> Same thing applies for Propose stage as well.




[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Description: 
Currently, Paxos prepare and propose messages always go through the entire 
MessagingService stack in Cassandra, even if the request is to be served 
locally; we can enhance this and serve local requests without involving 
MessagingService. Similar things are done in many [other places | 
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
 in Cassandra that skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a 
Cassandra lightweight transaction.

{{Sending}} PAXOS_PREPARE message to /A.B.C.D 
[MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D |  
15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}

Same thing applies for Propose stage as well.



  was:
Currently, Paxos prepare and propose messages always go through the entire
MessagingService stack in Cassandra even if the request is to be served
locally. We can enhance this and serve local requests without involving
MessagingService. Similar optimizations are made in many [other places|
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
in Cassandra, which skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a
Cassandra lightweight transaction.

{{Sending PAXOS_PREPARE message to /A.B.C.D 
[MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D |  
15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}

Same thing applies for Propose stage as well.




> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15
>
>
> Currently, Paxos prepare and propose messages always go through the entire
> MessagingService stack in Cassandra even if the request is to be served
> locally. We can enhance this and serve local requests without involving
> MessagingService. Similar optimizations are made in many [other places|
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
> in Cassandra, which skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a
> Cassandra lightweight transaction.
> {{Sending}} PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}
> Same thing applies for Propose stage as well.




[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Description: 
Currently, Paxos prepare and propose messages always go through the entire
MessagingService stack in Cassandra even if the request is to be served
locally. We can enhance this and serve local requests without involving
MessagingService. Similar optimizations are made in many [other places|
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
in Cassandra, which skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a
Cassandra lightweight transaction.

{{Sending PAXOS_PREPARE message to /A.B.C.D 
[MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D |  
15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}

Same thing applies for Propose stage as well.



  was:
Currently, Paxos prepare and propose messages always go through the entire
MessagingService stack in Cassandra even if the request is to be served
locally. We can enhance this and serve local requests without involving
MessagingService. Similar optimizations are made in many [other places|
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
in Cassandra, which skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a
Cassandra lightweight transaction.

{{… 
   
Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] 
| 2017-09-11 21:55:18.971000 |  A.B.C.D |  15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
…}}

Same thing applies for Propose stage as well.




> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15
>
>
> Currently, Paxos prepare and propose messages always go through the entire
> MessagingService stack in Cassandra even if the request is to be served
> locally. We can enhance this and serve local requests without involving
> MessagingService. Similar optimizations are made in many [other places|
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
> in Cassandra, which skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a
> Cassandra lightweight transaction.
> {{Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> … 
>   
>  Processing response from /A.B.C.D [SharedPool-Worker-4] 
> | 2017-09-11 21:55:18.976000 |  A.B.C.D |  20372}}
> Same thing applies for Propose stage as well.




[jira] [Updated] (CASSANDRA-13862) Optimize Paxos prepare and propose stage for local requests

2017-09-12 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13862:
--
Description: 
Currently, Paxos prepare and propose messages always go through the entire
MessagingService stack in Cassandra even if the request is to be served
locally. We can enhance this and serve local requests without involving
MessagingService. Similar optimizations are made in many [other places|
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
in Cassandra, which skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a
Cassandra lightweight transaction.

{{… 
   
Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] 
| 2017-09-11 21:55:18.971000 |  A.B.C.D |  15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
…}}

Same thing applies for Propose stage as well.



  was:
Currently, Paxos prepare and propose messages always go through the entire
MessagingService stack in Cassandra even if the request is to be served
locally. We can enhance this and serve local requests without involving
MessagingService. Similar optimizations are made in many [other places|
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
in Cassandra, which skip the MessagingService stage for local requests.


This is what it looks like currently if we have tracing on and run a
Cassandra lightweight transaction.

{{
…   
 
Sending PAXOS_PREPARE message to /A.B.C.D [MessagingService-Outgoing-/A.B.C.D] 
| 2017-09-11 21:55:18.971000 |  A.B.C.D |  15045
…   
   
REQUEST_RESPONSE message received from /A.B.C.D 
[MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D |  
20270
…   

 Processing response from /A.B.C.D [SharedPool-Worker-4] | 
2017-09-11 21:55:18.976000 |  A.B.C.D |  20372
…
}}

Same thing applies for Propose stage as well.




> Optimize Paxos prepare and propose stage for local requests 
> 
>
> Key: CASSANDRA-13862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13862
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.15
>
>
> Currently, Paxos prepare and propose messages always go through the entire
> MessagingService stack in Cassandra even if the request is to be served
> locally. We can enhance this and serve local requests without involving
> MessagingService. Similar optimizations are made in many [other places|
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1244]
> in Cassandra, which skip the MessagingService stage for local requests.
> This is what it looks like currently if we have tracing on and run a
> Cassandra lightweight transaction.
> {{…   
>  
> Sending PAXOS_PREPARE message to /A.B.C.D 
> [MessagingService-Outgoing-/A.B.C.D] | 2017-09-11 21:55:18.971000 |  A.B.C.D 
> |  15045
> … 
>  
> REQUEST_RESPONSE message received from /A.B.C.D 
> [MessagingService-Incoming-/A.B.C.D] | 2017-09-11 21:55:18.976000 |  A.B.C.D 
> |  20270
> …

[jira] [Commented] (CASSANDRA-13754) BTree.Builder memory leak

2017-09-12 Thread Markus Dlugi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162565#comment-16162565
 ] 

Markus Dlugi commented on CASSANDRA-13754:
--

[~urandom], I think it might still be the same issue. The threads you mentioned 
are all created by the {{SEPWorker}} as well, as you can also see in your 
screenshot, where your {{FastThreadLocalThread}} has a reference to an instance 
of that class. I'm not sure whether the actual contents of your 
{{ThreadLocalMap}} instances are the same as in my heap dump - in my case, the 
maps mostly held instances of {{BTree$Builder}}, which in turn had references 
to many {{byte[]}} arrays. Maybe you can check whether this is the case for you 
as well?

Other than that, you could also try and see if the patches created by [~snazy] 
alleviate your issue.
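The retention pattern described above can be illustrated with a minimal, self-contained sketch. This is a hypothetical stand-in, not Cassandra's actual {{BTree.Builder}} or {{SEPWorker}} code: a {{ThreadLocal}} cache on a long-lived worker thread keeps every buffer the cached builder has accumulated reachable for the thread's entire lifetime, which is how the heap fills with {{byte[]}} arrays pinned by {{ThreadLocalMap}} entries.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the leak pattern: a long-lived worker thread keeps
// a ThreadLocal-cached builder alive, so every byte[] the builder has ever
// accumulated stays reachable until the thread itself dies.
public class ThreadLocalRetention {
    // Stand-in for a recycled builder: it grows but is never cleared.
    static class Builder {
        final List<byte[]> buffers = new ArrayList<>();
        void add(byte[] b) { buffers.add(b); }
    }

    // Per-thread cache, analogous to the builder recycling under discussion.
    static final ThreadLocal<Builder> CACHE = ThreadLocal.withInitial(Builder::new);

    public static void main(String[] args) {
        // Each "request" handled on the same pooled thread reuses the cached
        // builder...
        for (int i = 0; i < 3; i++) {
            CACHE.get().add(new byte[1024]);
        }
        // ...so the thread's ThreadLocalMap now pins all three buffers
        // indefinitely: the list grows across requests and never shrinks.
        System.out.println(CACHE.get().buffers.size()); // prints 3
    }
}
```

The fix direction in such cases is to clear or bound the cached state between uses (or call {{ThreadLocal.remove()}}) so the pooled thread does not pin the accumulated buffers.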

> BTree.Builder memory leak
> -
>
> Key: CASSANDRA-13754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13754
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 3.11.0, Netty 4.0.44.Final, OpenJDK 8u141-b15
>Reporter: Eric Evans
>Assignee: Robert Stupp
> Fix For: 3.11.1
>
> Attachments: Screenshot from 2017-09-11 16-54-43.png
>
>
> After a chronic bout of {{OutOfMemoryError}} in our development environment, 
> a heap analysis is showing that more than 10G of our 12G heaps are consumed 
> by the {{threadLocals}} members (instances of {{java.lang.ThreadLocalMap}}) 
> of various {{io.netty.util.concurrent.FastThreadLocalThread}} instances.  
> Reverting 
> [cecbe17|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=cecbe17e3eafc052acc13950494f7dddf026aa54]
>  fixes the issue.


