[jira] [Updated] (CASSANDRA-12526) For LCS, single SSTable up-level is handled inefficiently

2017-05-25 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12526:
---
Fix Version/s: 4.x

> For LCS, single SSTable up-level is handled inefficiently
> -
>
> Key: CASSANDRA-12526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12526
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>  Labels: compaction, lcs, performance
> Fix For: 4.x
>
>
> I'm using the latest trunk (as of August 2016, which probably is going to be 
> 3.10) to run some experiments on LeveledCompactionStrategy and noticed this 
> inefficiency.
> The test data is generated using cassandra-stress default parameters 
> (keyspace1.standard1), so as you can imagine, it consists of a ton of newly 
> inserted partitions that will never merge in compactions, which is probably 
> the worst kind of workload for LCS (however, I'll detail later why this 
> scenario should not be ignored as a corner case; for now, let's just assume 
> we still want to handle this scenario efficiently).
> After the compaction test is done, I searched debug.log for lines matching 
> the "Compacted" summary so that I can see how long each individual 
> compaction took and how many bytes it processed. The search pattern is like 
> the following:
> {noformat}
> grep 'Compacted.*standard1' debug.log
> {noformat}
> Interestingly, I noticed that a lot of the finished compactions are marked as 
> having *only one* SSTable involved. With the workload mentioned above, these 
> "single SSTable" compactions actually make up the majority of all 
> compactions (as shown below), so their efficiency can affect the overall 
> compaction throughput quite a bit.
> {noformat}
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | wc -l
> 243
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | grep ") 1 sstable" | wc -l
> 218
> {noformat}
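The ratio implied by those counts is easy to check (the numbers are taken directly from the grep output above):

```python
# Counts taken directly from the grep output above.
total_compactions = 243
single_sstable_compactions = 218

share = single_sstable_compactions / total_compactions
print('%.1f%% of compactions involved a single SSTable' % (share * 100))
```

So roughly nine out of ten compactions in this run touched only one SSTable.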
> By looking at the code, it appears that there's a way to directly edit the 
> level of a particular SSTable like the following:
> {code}
> sstable.descriptor.getMetadataSerializer().mutateLevel(sstable.descriptor, 
> targetLevel);
> sstable.reloadSSTableMetadata();
> {code}
> To be exact, I summed up the time spent on these single-SSTable compactions 
> (the total data size is 60GB) and found that if each compaction only needed to 
> spend 100ms on the metadata change alone (instead of the 10+ seconds it takes 
> now), we would already achieve a 22.75% saving on total compaction time.
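To illustrate how such a savings estimate is derived (the per-compaction timings below are hypothetical placeholders, not the actual measurements from this test; the real inputs come from summing the "Compacted" lines in debug.log):

```python
# Hypothetical illustration of the savings calculation; all timing
# values here are placeholders, not measured numbers.
n_single = 218                  # single-SSTable compactions (from the grep above)
avg_single_secs = 10.0          # placeholder: ~10s per single-SSTable compaction
total_compaction_secs = 9582.0  # placeholder: total time across all compactions

time_now = n_single * avg_single_secs  # what these compactions cost today
time_metadata_only = n_single * 0.1    # ~100ms each if only metadata is mutated
saving = (time_now - time_metadata_only) / total_compaction_secs
print('estimated saving: %.1f%%' % (saving * 100))
```

With these placeholder inputs the estimate lands in the low twenties of percent, the same ballpark as the 22.75% figure above.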
> Compared to what we have now (reading the whole single SSTable from the old 
> level and writing out the same SSTable at the new level), the only difference 
> I can think of with this approach is that the new SSTable will have the same 
> file name (sequence number) as the old one, which could break assumptions in 
> some other parts of the code. However, given that we avoid the full read/write 
> IO, the overhead of cleaning up the old file and creating the new file, and 
> the extra churn in the heap and file buffers, the benefits seem to outweigh 
> the inconvenience. So I'd argue this JIRA is low-hanging fruit and should be 
> made available in 3.0.x as well.
> As mentioned in the 2nd paragraph, I'm also going to address why this kind of 
> all-new-partition workload should not be ignored as a corner case. Basically, 
> for the main use case of LCS, where you need to frequently merge partitions to 
> optimize reads and eliminate tombstones and expired data sooner, LCS can be 
> perfectly happy and perform the partition merges and tombstone elimination 
> efficiently for a long time. However, as soon as the node becomes a bit 
> unhealthy for various reasons (it could be a bad disk, so the node is missing 
> a whole bunch of mutations and needs repair; the user could choose to ingest 
> far more data than the node can handle; or, God forbid, some DBA could run the 
> offline sstablelevelreset), you will have to handle this kind of 
> "all-new-partition with a lot of SSTables in L0" scenario, and once all L0 
> SSTables finally get up-leveled to L1, you will likely see a lot of such 
> single-SSTable compactions, which is the situation this JIRA is intended to 
> address.
> Actually, the more I think about this, making this kind of single-SSTable 
> up-level more efficient will not only help the all-new-partition scenario, but 
> will also help any time there is a big backlog of L0 SSTables due to too many 
> flushes or excessive repair streaming with vnodes. In those situations, by 
> default STCS_in_L0 will be triggered, and you will end up 
> getting a bunch of much bigger L0 

[jira] [Commented] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-25 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025735#comment-16025735
 ] 

Stefania commented on CASSANDRA-13120:
--

The approach looks good. 

Some nits 
[here|https://github.com/stef1927/cassandra/commit/4ff0ccf5e290749b60204509b7d4b7d5d469de59].
 

I have two suggestions:

* Pass a boolean or a new enum to the {{SSTableReadMetricsCollector}} 
constructor that indicates the query type (single or range). This way we know 
for sure if we need to increment {{mergedSSTables}}. At the moment we rely on 
knowing which methods get called where, which is a bit brittle.

* Make the listener symmetric w.r.t. sstables skipped and selected. At the 
moment we have {{skippingSSTable}} with a reason to indicate that an sstable 
was skipped and different methods to indicate that an sstable was selected. I 
would personally prefer to only have two methods, something like 
{{onSSTableSkipped}} and {{onSSTableSelected}}, with two parameters: the 
sstable and the reason. The reason enums should probably be two distinct enums. 
We lose the RIE parameter but it is not really used at the moment. WDYT?
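To make the suggestion concrete, here is a rough Python sketch of the shape I have in mind (the real interface would of course be Java; all names and reason values here are suggestions, not existing Cassandra API):

```python
from enum import Enum

# Hypothetical reason enums, one per outcome, as suggested above.
class SkipReason(Enum):
    BLOOM_FILTER = 1
    MIN_MAX_TIMESTAMP = 2

class SelectReason(Enum):
    KEY_CACHE_HIT = 1
    INDEX_ENTRY_FOUND = 2
    TIMESTAMP_ORDER = 3

class SSTableReadsListener:
    """Symmetric listener: one callback per outcome, each taking the
    sstable and a reason from the matching enum."""

    def on_sstable_skipped(self, sstable, reason):
        pass

    def on_sstable_selected(self, sstable, reason):
        pass

class SSTableReadMetricsCollector(SSTableReadsListener):
    """With symmetric callbacks, counting merged sstables needs no
    knowledge of which read-path methods get called where."""

    def __init__(self):
        self.merged_sstables = 0

    def on_sstable_selected(self, sstable, reason):
        self.merged_sstables += 1

collector = SSTableReadMetricsCollector()
collector.on_sstable_selected('mc-1-big', SelectReason.KEY_CACHE_HIT)
collector.on_sstable_skipped('mc-2-big', SkipReason.BLOOM_FILTER)
collector.on_sstable_selected('mc-3-big', SelectReason.INDEX_ENTRY_FOUND)
print(collector.merged_sstables)  # counts only selected sstables
```

The point is that the collector only overrides one method and never needs to know which reader invoked it.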

Regarding the unit tests:
* The javadoc of {{SSTablesIteratedTest}} needs updating, since it is no longer 
only limited to CASSANDRA-8180.
* I'm not sure the new test covers 
{{queryMemtableAndSSTablesInTimestampOrder()}}.
* It may be useful to also have a range query in the test, checking that the 
metric is not updated, with a comment explaining why that is the case.


> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity | timestamp | source | source_elapsed
> ---+---+---+---
>  Execute CQL3 query | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 0
>  Parsing SELECT * FROM keyspace.table WHERE id = 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 104
>  Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 220
>  Executing single-partition query on table [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 450
>  Acquiring sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 477
>  Bloom filter allows skipping sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 496
>  Bloom filter allows skipping sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 503
>  Key cache hit for sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 513
>  Bloom filter allows skipping sstable 648135 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 520
> 

[jira] [Updated] (CASSANDRA-13549) Cqlsh throws an error when querying a duration data type

2017-05-25 Thread Akhil Mehra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil Mehra updated CASSANDRA-13549:

Reviewer: Benjamin Lerer

> Cqlsh throws an error when querying a duration data type
> -
>
> Key: CASSANDRA-13549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13549
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.10 dev environment running on a MacOS Sierra
>Reporter: Akhil Mehra
>Assignee: Benjamin Lerer
>
> h3. Overview
> Querying duration related data from the cqlsh prompt results in an error.
> Consider the following create table and insert statement.
> {code:title=Table and insert statement with duration data 
> type|borderStyle=solid}
> CREATE TABLE duration_test (
>   primary_key text,
>   col20 duration,
>   PRIMARY KEY (primary_key)
> );
> INSERT INTO duration_test (primary_key, col20) VALUES ('primary_key_example', 
> 1y5mo89h4m48s);
> {code}
> On executing a select query on col20 in cqlsh I get an error "Failed to 
> format value '"\x00\xfe\x02GS\xfc\xa5\xc0\x00' : 'ascii' codec can't decode 
> byte 0xfe in position 2: ordinal not in range(128)"
> {code:title=Duration Query|borderStyle=solid}
> Select  col20 from duration_test;
> {code}
> h3. Investigation
> On investigating this further, I found that the current Python Cassandra 
> driver bundled in 
> lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip does not seem to 
> support the duration data type. Support was added in January this year: 
> https://github.com/datastax/python-driver/pull/689.
> So I downloaded the latest driver release 
> https://github.com/datastax/python-driver/releases/tag/3.9.0. I embedded the 
> latest driver into cassandra-driver-internal-only-3.7.0.post0-2481531.zip. 
> This fixed the driver related issue but there was still a formatting issue. 
> I then went on to modify the format_value_duration method in 
> pylib/cqlshlib/formatting.py. Diff posted below:
> {code}
>  @formatter_for('Duration')
>  def format_value_duration(val, colormap, **_):
> -buf = six.iterbytes(val)
> -months = decode_vint(buf)
> -days = decode_vint(buf)
> -nanoseconds = decode_vint(buf)
> -return format_python_formatted_type(duration_as_str(months, days, 
> nanoseconds), colormap, 'duration')
> +return format_python_formatted_type(duration_as_str(val.months, 
> val.days, val.nanoseconds), colormap, 'duration')
> {code}
> This fixed the issue, and duration values are now displayed correctly.
> Happy to fix the issue if I can get some guidance on:
> # whether this is a valid issue (I tried searching JIRA but did not find 
> anything reported);
> # whether my assumptions are correct, i.e. this is actually a bug;
> # how to package the new driver into the source code. 
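For reference, the attribute-based formatting in the patch can be exercised standalone. The sketch below is a hypothetical re-implementation of what duration_as_str presumably produces; the real helper lives in pylib/cqlshlib/formatting.py:

```python
# Hypothetical re-implementation of cqlsh's duration rendering, for
# illustration only; the real duration_as_str lives in
# pylib/cqlshlib/formatting.py.
def duration_as_str(months, days, nanoseconds):
    out = []
    years, months = divmod(months, 12)
    if years:
        out.append('%dy' % years)
    if months:
        out.append('%dmo' % months)
    if days:
        out.append('%dd' % days)
    seconds, nanoseconds = divmod(nanoseconds, 10 ** 9)
    hours, seconds = divmod(seconds, 3600)
    minutes, seconds = divmod(seconds, 60)
    if hours:
        out.append('%dh' % hours)
    if minutes:
        out.append('%dm' % minutes)
    if seconds:
        out.append('%ds' % seconds)
    if nanoseconds:
        out.append('%dns' % nanoseconds)
    return ''.join(out) or '0s'

# The INSERT above used 1y5mo89h4m48s, i.e. months=17, days=0, and
# 89h4m48s worth of nanoseconds:
print(duration_as_str(17, 0, (89 * 3600 + 4 * 60 + 48) * 10 ** 9))
```

With those attribute values the sketch prints 1y5mo89h4m48s, matching the literal in the INSERT statement above.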



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13549) Cqlsh throws an error when querying a duration data type

2017-05-25 Thread Akhil Mehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025706#comment-16025706
 ] 

Akhil Mehra commented on CASSANDRA-13549:
-

I have committed the changes to the following two branches.

[13549-trunk|https://github.com/amehra/cassandra/tree/13549-trunk]
[13549-3.11|https://github.com/amehra/cassandra/tree/13549-3.11]

Both branches have the 3.10.0 driver in them and the required changes in 
formatting.py. If you want me to drop back to the 3.9.0 Python driver, please 
let me know. 

Is there any way I can run the dtests on these two branches on 
http://cassci.datastax.com/ ?

Thanks 







[jira] [Commented] (CASSANDRA-13549) Cqlsh throws an error when querying a duration data type

2017-05-25 Thread Akhil Mehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025638#comment-16025638
 ] 

Akhil Mehra commented on CASSANDRA-13549:
-

[~blerer] I just noticed that version 3.10.0 of the python driver was released 
yesterday (https://github.com/datastax/python-driver/releases/tag/3.10.0). 
Tested my changes against 3.10.0 and everything worked as expected.

Do you want me to stick with the 3.9.0 driver or switch to the 3.10.0 driver as 
it is the latest?

Thanks for your input. 









[jira] [Commented] (CASSANDRA-13182) test failure in sstableutil_test.SSTableUtilTest.compaction_test

2017-05-25 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025586#comment-16025586
 ] 

Lerh Chuan Low commented on CASSANDRA-13182:


The ticket for marking those methods as deprecated is here: 
https://issues.apache.org/jira/browse/CASSANDRA-13541

> test failure in sstableutil_test.SSTableUtilTest.compaction_test
> 
>
> Key: CASSANDRA-13182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Lerh Chuan Low
>  Labels: dtest, test-failure, test-failure-fresh
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/506/testReport/sstableutil_test/SSTableUtilTest/compaction_test
> {noformat}
> Error Message
> Lists differ: ['/tmp/dtest-Rk_3Cs/test/node1... != 
> ['/tmp/dtest-Rk_3Cs/test/node1...
> First differing element 8:
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db'
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db'
> First list contains 7 additional elements.
> First extra element 24:
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db'
>   
> ['/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-TOC.txt',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Digest.crc32',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Filter.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Index.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Statistics.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Summary.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Statistics.db',
>

[jira] [Commented] (CASSANDRA-13544) Exceptions encountered for concurrent range deletes with mixed cluster keys

2017-05-25 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025446#comment-16025446
 ] 

Eric Evans commented on CASSANDRA-13544:


I can confirm this is still present in 3.10 (see exception text below); I'll 
give 3.11 a try and report back.

{noformat}
WARN  [MutationStage-5] 2017-05-25 19:30:41,605 AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread Thread[MutationStage-5,5,main]: {}
java.lang.AssertionError: null
    at org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:536) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.RangeTombstoneList.addAll(RangeTombstoneList.java:217) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.MutableDeletionInfo.add(MutableDeletionInfo.java:141) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:143) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.Memtable.put(Memtable.java:284) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1316) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:618) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.Keyspace.applyFuture(Keyspace.java:425) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:222) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.db.MutationVerbHandler.doVerb(MutationVerbHandler.java:68) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) ~[apache-cassandra-3.10.jar:3.10]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_131]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) [apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.10.jar:3.10]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
{noformat}

> Exceptions encountered for concurrent range deletes with mixed cluster keys
> ---
>
> Key: CASSANDRA-13544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13544
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Local Write-Read Paths
> Environment: Cassandra 3.7, Debian Linux
>Reporter: Eric Evans
>
> Using a schema that looks something like...
> {code:sql}
> CREATE TABLE data (
> key text,
> rev int,
> tid timeuuid,
> value blob,
> PRIMARY KEY (key, rev, tid)
> ) WITH CLUSTERING ORDER BY (rev DESC, tid DESC)
> {code}
> ...we are performing range deletes using inequality operators on both {{rev}} 
> and {{tid}} ({{WHERE key = ? AND rev < ?}} and {{WHERE key = ? AND rev = ? 
> AND  tid < ?}}).  These range deletes are interleaved with normal writes 
> probabilistically, and (apparently) when two such range deletes occur 
> concurrently, the following exceptions result.
> {noformat}
> ERROR [SharedPool-Worker-18] 2017-05-19 17:30:22,426 Message.java:611 - Unexpected exception during request; channel = [id: 0x793a853b, L:/10.64.0.36:9042 - R:/10.64.32.112:55048]
> java.lang.AssertionError: null
>     at org.apache.cassandra.db.ClusteringBoundOrBoundary.<init>(ClusteringBoundOrBoundary.java:31) ~[apache-cassandra-3.7.3.jar:3.7.3]
>     at org.apache.cassandra.db.ClusteringBoundary.<init>(ClusteringBoundary.java:15) ~[apache-cassandra-3.7.3.jar:3.7.3]
>     at org.apache.cassandra.db.ClusteringBoundOrBoundary.inclusiveCloseExclusiveOpen(ClusteringBoundOrBoundary.java:78) ~[apache-cassandra-3.7.3.jar:3.7.3]
>     at org.apache.cassandra.db.rows.RangeTombstoneBoundaryMarker.inclusiveCloseExclusiveOpen(RangeTombstoneBoundaryMarker.java:54) ~[apache-cassandra-3.7.3.jar:3.7.3]
>     at org.apache.cassandra.db.rows.RangeTombstoneMarker$Merger.merge(RangeTombstoneMarker.java:139) ~[apache-cassandra-3.7.3.jar:3.7.3]
>     at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:521) ~[apache-cassandra-3.7.3.jar:3.7.3]
>     at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:478) ~[apache-cassandra-3.7.3.jar:3.7.3]
> 

[jira] [Created] (CASSANDRA-13555) Thread leak during repair

2017-05-25 Thread Simon Zhou (JIRA)
Simon Zhou created CASSANDRA-13555:
--

 Summary: Thread leak during repair
 Key: CASSANDRA-13555
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13555
 Project: Cassandra
  Issue Type: Bug
Reporter: Simon Zhou
Assignee: Simon Zhou


The symptom is similar to what happened in [CASSANDRA-13204 | 
https://issues.apache.org/jira/browse/CASSANDRA-13204]: a thread waits 
forever doing nothing. This one happened during "nodetool repair -pr -seq -j 1" 
in production, but I can easily reproduce the problem with just "nodetool 
repair" in a dev environment (CCM). I'll try to explain what happened, based on 
the 3.0.13 code base.

1. One node is down while doing repair. This is the error I saw in production:

{code}
ERROR [GossipTasks:1] 2017-05-19 15:00:10,545 RepairSession.java:334 - [repair #bc9a3cd1-3ca3-11e7-a44a-e30923ac9336] session completed with the following error
java.io.IOException: Endpoint /10.185.43.15 died
    at org.apache.cassandra.repair.RepairSession.convict(RepairSession.java:333) ~[apache-cassandra-3.0.11.jar:3.0.11]
    at org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:306) [apache-cassandra-3.0.11.jar:3.0.11]
    at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:766) [apache-cassandra-3.0.11.jar:3.0.11]
    at org.apache.cassandra.gms.Gossiper.access$800(Gossiper.java:66) [apache-cassandra-3.0.11.jar:3.0.11]
    at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:181) [apache-cassandra-3.0.11.jar:3.0.11]
    at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118) [apache-cassandra-3.0.11.jar:3.0.11]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_121]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_121]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_121]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
    at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) [apache-cassandra-3.0.11.jar:3.0.11]
    at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
{code}

2. At this moment the repair coordinator hasn't received the response 
(MerkleTrees) from the node that was marked down. This means RepairJob#run will 
never return, because it waits for validations to finish:

{code}
// Wait for validation to complete
Futures.getUnchecked(validations);
{code}

Note that all RepairJobs (as Runnables) run on a shared executor created in 
RepairRunnable#runMayThrow, while all snapshotting, validation, and syncing 
happen on a per-RepairSession "taskExecutor". RepairJob#run only returns once it 
has received MerkleTrees (or null) from all endpoints for a given column family 
and token range.
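The unbounded wait is the crux: Futures.getUnchecked blocks forever if the 
MerkleTree response never arrives. A minimal sketch (hypothetical names, using 
java.util.concurrent rather than Guava) of how a bounded wait would let the 
repair thread surface a failure instead of leaking:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ValidationWait {
    // Hypothetical sketch: bound the wait on a validation response so the
    // repair thread returns with a failure instead of parking forever when
    // the replica died before replying.
    static String awaitValidation(CompletableFuture<String> validation, long timeoutMs) {
        try {
            return validation.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "TIMED_OUT"; // caller could fail the RepairSession here
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A response that never arrives, as when the endpoint is marked down.
        CompletableFuture<String> lost = new CompletableFuture<>();
        System.out.println(awaitValidation(lost, 100));      // prints TIMED_OUT

        // A normal response.
        System.out.println(awaitValidation(
                CompletableFuture.completedFuture("MERKLE_TREES"), 100));
    }
}
```

This is only an illustration of the failure mode; the actual fix would need to 
hook into failure detection rather than a fixed timeout.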

As evidence of the thread leak, below is an excerpt from the thread dump. I can 
reproduce the same stack trace when simulating the issue in a dev environment.

{code}
"Repair#129:56" #406373 daemon prio=5 os_prio=0 tid=0x7fc495028400 
nid=0x1a77d waiting on condition [0x7fc02153]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0002d7c00198> (a 
com.google.common.util.concurrent.AbstractFuture$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:285)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at 
com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:137)
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1509)
at org.apache.cassandra.repair.RepairJob.run(RepairJob.java:160)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 

[jira] [Commented] (CASSANDRA-13433) RPM distribution improvements and known issues

2017-05-25 Thread Dennis Noordzij (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025094#comment-16025094
 ] 

Dennis Noordzij commented on CASSANDRA-13433:
-

Pretty cool, and very convenient to install.
Unfortunately, 3.10 is still missing from the RPM packages.

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages.  While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the 
> first time and still require some attention. 
> Feel free to discuss RPM-related issues in this ticket and to open a sub-task 
> to file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148 and is not strictly an RPM-specific issue. We still intend to 
> offer non-systemd support based on the already working init scripts that we 
> ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13523) StreamReceiveTask: java.lang.OutOfMemoryError: Map failed

2017-05-25 Thread Matthew O'Riordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024951#comment-16024951
 ] 

Matthew O'Riordan commented on CASSANDRA-13523:
---

Chris, apologies for my delay on this.

The GC details are as follows:

{code}
INFO  13:51:35  G1 Young Generation GC in 310ms.  G1 Eden Space: 367001600 -> 
0; G1 Old Gen: 184442160 -> 244320056;
INFO  13:51:35  Pool NameActive   Pending  Completed   
Blocked  All Time Blocked
Total time for which application threads were stopped: 0.0008716 seconds, 
Stopping threads took: 0.604 seconds
INFO  13:51:35  MutationStage 1 0  72774
 0 0
INFO  13:51:35  RequestResponseStage  0 0  0
 0 0
INFO  13:51:35  ReadRepairStage   0 0  0
 0 0
INFO  13:51:35  CounterMutationStage  0 0  0
 0 0
INFO  13:51:35  ReadStage 0 0  0
 0 0
INFO  13:51:35  MiscStage 0 0  0
 0 0
INFO  13:51:35  GossipStage   0 0  0
 0 0
INFO  13:51:35  CacheCleanupExecutor  0 0  0
 0 0
INFO  13:51:35  InternalResponseStage 0 0  0
 0 0
INFO  13:51:35  CommitLogArchiver 0 0  0
 0 0
INFO  13:51:35  CompactionExecutor0 0  1
 0 0
INFO  13:51:35  ValidationExecutor0 0  0
 0 0
INFO  13:51:35  MigrationStage0 0  0
 0 0
INFO  13:51:35  AntiEntropyStage  0 0  0
 0 0
INFO  13:51:35  Sampler   0 0  0
 0 0
INFO  13:51:35  MemtableFlushWriter   0 0  1
 0 0
INFO  13:51:35  MemtablePostFlush 0 0  3
 0 0
INFO  13:51:35  MemtableReclaimMemory 0 0  1
 0 0
{code}

Note that the container does not have any explicit memory limits and thus has 
access to the entire system memory.  The instance has 16 GB of RAM, and the 
Cassandra JVM is launched with the following settings:

{code}
/usr/bin/java -Ddse.system_memory_in_mb=16048 
-Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader -ea 
-javaagent:/usr/share/dse/cassandra/lib/jamm-0.3.0.jar 
-XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms8024M -Xmx8024M -XX:+HeapDumpOnOutOfMemoryError 
-Xss256k -XX:StringTableSize=103 -XX:+UseG1GC -XX:MaxGCPauseMillis=500 
-XX:G1RSetUpdatingPauseTimePercent=5 -XX:+UseCondCardMark 
-XX:+PrintGCApplicationStoppedTime -Djava.net.preferIPv4Stack=true 
-Dcom.sun.management.jmxremote.port=7199 
-Dcom.sun.management.jmxremote.rmi.port=7199 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=true 
-Dcom.sun.management.jmxremote.password.file=/etc/dse/cassandra/conf/jmx.password
 -Dcom.sun.management.jmxremote.access.file=/etc/dse/cassandra/conf/jmx.access 
-Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
-Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra.pid 
-Dcassandra-foreground=yes -cp 

[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-05-25 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024948#comment-16024948
 ] 

Aleksey Yeschenko commented on CASSANDRA-12373:
---

Tests LGTM, feel free to commit them separately - I think it'd be cleaner this 
way, anyway.

Still thinking through some potential edge cases for the main path on 3.X (to 
be applied to 3.0.X as well).

Thanks.

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335, but the crux of the issue is that super column families show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibility.






[jira] [Commented] (CASSANDRA-13549) Cqlsh throws an error when querying a duration data type

2017-05-25 Thread Akhil Mehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024534#comment-16024534
 ] 

Akhil Mehra commented on CASSANDRA-13549:
-

[~blerer] Thanks for the feedback. I will complete the fix. I do not have 
permission to assign the ticket to myself; can you please assign it to me or 
grant me that permission? Thanks



> Cqlsh throws an error when querying a duration data type
> -
>
> Key: CASSANDRA-13549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13549
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.10 dev environment running on macOS Sierra
>Reporter: Akhil Mehra
>Assignee: Benjamin Lerer
>
> h3. Overview
> Querying duration related data from the cqlsh prompt results in an error.
> Consider the following create table and insert statement.
> {code:title=Table and insert statement with duration data 
> type|borderStyle=solid}
> CREATE TABLE duration_test (
>   primary_key text,
>   col20 duration,
>   PRIMARY KEY (primary_key)
> );
> INSERT INTO duration_test (primary_key, col20) VALUES ('primary_key_example', 
> 1y5mo89h4m48s);
> {code}
> On executing a select query on col20 in cqlsh I get an error "Failed to 
> format value '"\x00\xfe\x02GS\xfc\xa5\xc0\x00' : 'ascii' codec can't decode 
> byte 0xfe in position 2: ordinal not in range(128)"
> {code:title=Duration Query|borderStyle=solid}
> Select  col20 from duration_test;
> {code}
> h3. Investigation
> On investigating this further, I found that the current Python Cassandra 
> driver bundled in lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip 
> does not seem to support the duration data type. Support was added in January 
> this year: https://github.com/datastax/python-driver/pull/689.
> So I downloaded the latest driver release, 
> https://github.com/datastax/python-driver/releases/tag/3.9.0, and embedded it 
> into cassandra-driver-internal-only-3.7.0.post0-2481531.zip. This fixed the 
> driver-related issue, but there was still a formatting issue. 
> I then modified the format_value_duration method in 
> pylib/cqlshlib/formatting.py. Diff posted below:
> {code}
>  @formatter_for('Duration')
>  def format_value_duration(val, colormap, **_):
> -buf = six.iterbytes(val)
> -months = decode_vint(buf)
> -days = decode_vint(buf)
> -nanoseconds = decode_vint(buf)
> -return format_python_formatted_type(duration_as_str(months, days, 
> nanoseconds), colormap, 'duration')
> +return format_python_formatted_type(duration_as_str(val.months, 
> val.days, val.nanoseconds), colormap, 'duration')
> {code}
> This resulted in the issue being fixed, and duration types are now correctly 
> displayed.
> Happy to fix the issue properly if I can get some guidance on:
> # whether this is a valid issue (I tried searching JIRA but did not find 
> anything reported);
> # whether my assumption is correct, i.e. that this is actually a bug;
> # how to package the new driver into the source code.
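For reference, the conversion the patched formatter performs — a Cassandra 
duration's (months, days, nanoseconds) triple into the compact 1y5mo89h4m48s 
notation — can be sketched as follows (a hypothetical Java port of the 
duration_as_str helper; note that hours are not normalized into days, so 89h 
stays 89h):

```java
public class DurationFormat {
    // Sketch of the duration-to-string conversion: a Cassandra duration is
    // stored as (months, days, nanoseconds). Months split into years and
    // months; nanoseconds split into hours, minutes, and whole seconds.
    static String durationAsStr(int months, int days, long nanos) {
        StringBuilder sb = new StringBuilder();
        if (months / 12 > 0) sb.append(months / 12).append("y");
        if (months % 12 > 0) sb.append(months % 12).append("mo");
        if (days > 0) sb.append(days).append("d");
        long seconds = nanos / 1_000_000_000L;
        if (seconds / 3600 > 0) sb.append(seconds / 3600).append("h");
        if ((seconds % 3600) / 60 > 0) sb.append((seconds % 3600) / 60).append("m");
        if (seconds % 60 > 0) sb.append(seconds % 60).append("s");
        return sb.toString();
    }

    public static void main(String[] args) {
        // The value from the INSERT statement above: 17 months, 89h4m48s.
        long nanos = (89L * 3600 + 4 * 60 + 48) * 1_000_000_000L;
        System.out.println(durationAsStr(17, 0, nanos)); // prints 1y5mo89h4m48s
    }
}
```

Sub-second remainders are ignored in this sketch for brevity.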






[jira] [Commented] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-05-25 Thread Valera V. Kharseko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024327#comment-16024327
 ] 

Valera V. Kharseko commented on CASSANDRA-13339:


-Dcassandra.nio_data_output_stream_plus_buffer_size=327680

> java.nio.BufferOverflowException: null
> --
>
> Key: CASSANDRA-13339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13339
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Richards
>
> I'm seeing the following exception running Cassandra 3.9 (with Netty updated 
> to 4.1.8.Final) running on a 2 node cluster.  It would have been processing 
> around 50 queries/second at the time (mixture of 
> inserts/updates/selects/deletes) : there's a collection of tables (some with 
> counters some without) and a single materialized view.
> ERROR [MutationStage-4] 2017-03-15 22:50:33,052 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> and then again shortly afterwards
> ERROR [MutationStage-3] 2017-03-15 23:27:36,198 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  

[jira] [Commented] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-05-25 Thread Valera V. Kharseko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024318#comment-16024318
 ] 

Valera V. Kharseko commented on CASSANDRA-13339:


 grep commit /etc/cassandra/conf/cassandra.yaml | grep -v "#"
{noformat}
commitlog_directory: /var/lib/cassandra/commitlog
commit_failure_policy: stop
commitlog_sync: periodic
commitlog_sync_period_in_ms: 1
commitlog_segment_size_in_mb: 128 
{noformat}
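The segment size above is relevant because the commit log rejects mutations 
that do not fit its fixed-size buffer: in recent Cassandra versions the 
maximum mutation size defaults to half of commitlog_segment_size_in_mb (64 MB 
with the 128 MB segments configured here), and overshooting it can surface as 
the BufferOverflowException in the trace. A rough sketch of that bound:

```java
public class MutationBound {
    // Sketch of the commit-log constraint: a single mutation must fit in
    // half a segment, or CommitLog.add fails (here surfacing as a
    // BufferOverflowException from the fixed-size output buffer).
    static boolean fitsInSegment(long mutationBytes, int segmentSizeMb) {
        long maxMutationBytes = (long) segmentSizeMb * 1024 * 1024 / 2;
        return mutationBytes <= maxMutationBytes;
    }

    public static void main(String[] args) {
        int segmentMb = 128; // from the cassandra.yaml above
        System.out.println(fitsInSegment(60L * 1024 * 1024, segmentMb)); // true
        System.out.println(fitsInSegment(70L * 1024 * 1024, segmentMb)); // false
    }
}
```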

> java.nio.BufferOverflowException: null
> --
>
> Key: CASSANDRA-13339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13339
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Richards
>
> I'm seeing the following exception running Cassandra 3.9 (with Netty updated 
> to 4.1.8.Final) running on a 2 node cluster.  It would have been processing 
> around 50 queries/second at the time (mixture of 
> inserts/updates/selects/deletes) : there's a collection of tables (some with 
> counters some without) and a single materialized view.
> ERROR [MutationStage-4] 2017-03-15 22:50:33,052 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> and then again shortly afterwards
> ERROR [MutationStage-3] 2017-03-15 23:27:36,198 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> 

[jira] [Updated] (CASSANDRA-13513) Getting java.lang.AssertionError after upgrade from Cassandra 2.1.17.1428 to 3.0.8

2017-05-25 Thread Anuja Mandlecha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Mandlecha updated CASSANDRA-13513:

Environment: DSE 5.0.8, cqlsh 5.0.1 , Cassandra 3.0.12.1656 , Ubuntu 14.04  
(was: DSE 5.0.2 ,Cassandra 3.0.8  Ubuntu 14.04)

> Getting java.lang.AssertionError after upgrade from Cassandra 2.1.17.1428 to 
> 3.0.8
> --
>
> Key: CASSANDRA-13513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13513
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: DSE 5.0.8, cqlsh 5.0.1 , Cassandra 3.0.12.1656 , Ubuntu 
> 14.04
>Reporter: Anuja Mandlecha
>
> Hi,
> While querying a Cassandra table using DBeaver or the DataStax Node.js 
> driver, I get the error below. 
> {code}
> WARN  [SharedPool-Worker-2] 2017-05-09 12:55:18,654  
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:228)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:218)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:137) 
> ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:300)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:134)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
> ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:320) 
> ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1796)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2466)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_101]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> {code}
> The query used is 
> {code}
> select * from dynocloud.user_info where company_name='DS' allow filtering;
> {code} 
> This query returns data when run in the cql shell. 
> Also, if we add LIMIT 100 to the same query or change the value of 
> company_name, the query returns results. The index definition is 
> {code} 
> CREATE INDEX company_name_userindex ON dynocloud.user_info (company_name);
> {code} 
> Thanks,
> Anuja Mandlecha






[jira] [Comment Edited] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-05-25 Thread Valera V. Kharseko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024293#comment-16024293
 ] 

Valera V. Kharseko edited comment on CASSANDRA-13339 at 5/25/17 6:44 AM:
-

{noformat}
ERROR [MutationStage-16] 2017-05-25 09:35:38,140 StorageProxy.java:1353 - 
Failed to apply mutation locally : {}
java.nio.BufferOverflowException: null
at 
org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
~[apache-cassandra-3.9.0.jar:3.9.0]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
~[apache-cassandra-3.9.0.jar:3.9.0]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
~[apache-cassandra-3.9.0.jar:3.9.0]
at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
~[apache-cassandra-3.9.0.jar:3.9.0]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
~[apache-cassandra-3.9.0.jar:3.9.0]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347) 
~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
 [apache-cassandra-3.9.0.jar:3.9.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 [apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.9.0.jar:3.9.0]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
[apache-cassandra-3.9.0.jar:3.9.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
ERROR [MutationStage-6] 2017-05-25 09:36:24,351 StorageProxy.java:1353 - Failed 
to apply mutation locally : {}
java.nio.BufferOverflowException: null
at 
org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 

[jira] [Commented] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-05-25 Thread Valera V. Kharseko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024293#comment-16024293
 ] 

Valera V. Kharseko commented on CASSANDRA-13339:


{noformat}
ERROR [MutationStage-16] 2017-05-25 09:35:38,140 StorageProxy.java:1353 - Failed to apply mutation locally : {}
java.nio.BufferOverflowException: null
    at org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539) [apache-cassandra-3.9.0.jar:3.9.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) [apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) [apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.9.0.jar:3.9.0]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
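For context on the exception itself: java.nio.BufferOverflowException is the standard JDK exception thrown when a relative write would exceed a ByteBuffer's fixed capacity, which is consistent with DataOutputBufferFixed flushing into a buffer that cannot grow. A minimal sketch of that generic failure mode (plain JDK code, not Cassandra's implementation):

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

// Sketch only: a fixed-capacity ByteBuffer throws BufferOverflowException
// when a write would exceed its capacity, rather than growing the buffer.
public class FixedBufferOverflow {
    static boolean overflows(int capacity, int bytesToWrite) {
        ByteBuffer buf = ByteBuffer.allocate(capacity); // capacity never grows
        try {
            for (int i = 0; i < bytesToWrite; i++) {
                buf.put((byte) 0); // put() past capacity throws
            }
            return false; // all bytes fit
        } catch (BufferOverflowException e) {
            return true; // write exceeded the fixed capacity
        }
    }

    public static void main(String[] args) {
        System.out.println(overflows(4, 4)); // false: fits exactly
        System.out.println(overflows(4, 5)); // true: one byte too many
    }
}
```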
ERROR [MutationStage-6] 2017-05-25 09:36:24,351 StorageProxy.java:1353 - Failed to apply mutation locally : {}
java.nio.BufferOverflowException: null
    at org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at