[jira] [Commented] (CASSANDRA-13848) Allow sstabledump to do a json object per partition to better handle large sstables

2017-09-06 Thread Nate Sanders (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156494#comment-16156494
 ] 

Nate Sanders commented on CASSANDRA-13848:
--

http://jsonlines.org/
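
In that format each partition would be one self-contained JSON object on its own line, so the file can be streamed a line at a time instead of parsed as a whole. For illustration only (invented keys and values, field names modeled loosely on sstabledump's existing JSON):

{code:none}
{"partition": {"key": ["k1"]}, "rows": [{"type": "row", "cells": [{"name": "v", "value": 1}]}]}
{"partition": {"key": ["k2"]}, "rows": [{"type": "row", "cells": [{"name": "v", "value": 2}]}]}
{code}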

> Allow sstabledump to do a json object per partition to better handle large 
> sstables
> ---
>
> Key: CASSANDRA-13848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13848
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jeff Jirsa
>Priority: Trivial
>  Labels: lhf
>
> sstable2json / sstabledump emit one huge JSON document for the whole file. For 
> very large sstables this makes it impossible to load the output into memory to 
> do anything with it. Allowing users to break it into small JSON objects, one 
> per partition, would be useful.




[jira] [Created] (CASSANDRA-13848) Allow sstabledump to do a json object per partition to better handle large sstables

2017-09-06 Thread Jeff Jirsa (JIRA)
Jeff Jirsa created CASSANDRA-13848:
--

 Summary: Allow sstabledump to do a json object per partition to 
better handle large sstables
 Key: CASSANDRA-13848
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13848
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeff Jirsa
Priority: Trivial


sstable2json / sstabledump emit one huge JSON document for the whole file. For 
very large sstables this makes it impossible to load the output into memory to 
do anything with it. Allowing users to break it into small JSON objects, one 
per partition, would be useful.




[jira] [Updated] (CASSANDRA-13845) Add keyspace and table name in schema validation exception

2017-09-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13845:
---
Fix Version/s: 4.x
   Status: Patch Available  (was: Open)

> Add keyspace and table name in schema validation exception
> --
>
> Key: CASSANDRA-13845
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13845
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
>
> We're seeing the following exception from time to time; it would be better to 
> include the keyspace and table name, so we know which table update is causing 
> the issue:
> {noformat}
> ERROR [InternalResponseStage:391] 2017-09-06 00:29:05,361 
> MigrationTask.java:96 - Configuration exception merging remote schema
> org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
> mismatch (found af1f4650-9279-11e7-9df0-399587d0a542; expected 
> a094fe70-89e3-11e7-b4d5-eb8faf28be34)
> at 
> org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:785)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:747) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at org.apache.cassandra.config.Schema.updateTable(Schema.java:661) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1391)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1347)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1297)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:92) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53)
>  [apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> [apache-cassandra-3.0.14.jar:3.0.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.14.jar:3.0.14]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}




[jira] [Commented] (CASSANDRA-13845) Add keyspace and table name in schema validation exception

2017-09-06 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156229#comment-16156229
 ] 

Jay Zhuang commented on CASSANDRA-13845:


Added the keyspace and table name to TableMetadata validation exceptions; please 
review:
| Branch | uTest |
| [13845-trunk|https://github.com/cooldoger/cassandra/tree/13845-trunk] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13845-trunk.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13845-trunk]
 |
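
For context, the shape of the change is roughly as follows (an illustrative sketch only, not the actual patch):

{code}
// Illustrative sketch of the change described above, not the patch itself.
// The point: name the keyspace and table in the ConfigurationException so a
// failed schema merge identifies which table update caused it.
void validateCompatibility(TableMetadata previous)
{
    if (!previous.id.equals(id))
        throw new ConfigurationException(String.format(
            "Column family ID mismatch for %s.%s (found %s; expected %s)",
            keyspace, name, id, previous.id));
}
{code}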

> Add keyspace and table name in schema validation exception
> --
>
> Key: CASSANDRA-13845
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13845
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
>
> We're seeing the following exception from time to time; it would be better to 
> include the keyspace and table name, so we know which table update is causing 
> the issue:
> {noformat}
> ERROR [InternalResponseStage:391] 2017-09-06 00:29:05,361 
> MigrationTask.java:96 - Configuration exception merging remote schema
> org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
> mismatch (found af1f4650-9279-11e7-9df0-399587d0a542; expected 
> a094fe70-89e3-11e7-b4d5-eb8faf28be34)
> at 
> org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:785)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:747) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at org.apache.cassandra.config.Schema.updateTable(Schema.java:661) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1391)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1347)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1297)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:92) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53)
>  [apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> [apache-cassandra-3.0.14.jar:3.0.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.14.jar:3.0.14]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}




[jira] [Created] (CASSANDRA-13847) test failure in cqlsh_tests.cqlsh_tests.CqlLoginTest.test_list_roles_after_login

2017-09-06 Thread Joel Knighton (JIRA)
Joel Knighton created CASSANDRA-13847:
-

 Summary: test failure in 
cqlsh_tests.cqlsh_tests.CqlLoginTest.test_list_roles_after_login
 Key: CASSANDRA-13847
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13847
 Project: Cassandra
  Issue Type: Bug
  Components: Testing, Tools
Reporter: Joel Knighton


example failure:

http://cassci.datastax.com/job/cassandra-2.1_dtest/546/testReport/cqlsh_tests.cqlsh_tests/CqlLoginTest/test_list_roles_after_login

This test was added for [CASSANDRA-13640]. The comments seem to indicate this 
is only a problem on 3.0+, but the added test certainly seems to reproduce the 
problem on 2.1 and 2.2. Even if the issue does affect 2.1/2.2, it seems 
insufficiently critical for 2.1, so we need to limit the test to run on 2.2+ at 
the very least, possibly 3.0+ if we don't fix the cause on 2.2.

Thoughts [~adelapena]?




[jira] [Created] (CASSANDRA-13846) Add additional unit tests for batch behavior

2017-09-06 Thread Jeff Jirsa (JIRA)
Jeff Jirsa created CASSANDRA-13846:
--

 Summary: Add additional unit tests for batch behavior
 Key: CASSANDRA-13846
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13846
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Jirsa
Assignee: Jeff Jirsa
Priority: Minor
 Fix For: 4.x


There are some combinations of batch behavior for which there are no unit 
tests. An example is CASSANDRA-13655, which adds some tests, but not for every 
combination. This ticket tracks additional unit tests around batches / counter 
batches / batches with TTLs / etc.
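
As a rough illustration of the kind of coverage meant here, a CQLTester-style test for a logged batch with per-statement TTLs might look like this (a sketch only, not from an actual patch; the test name is made up):

{code}
// Sketch of the kind of unit test this ticket asks for.
@Test
public void testLoggedBatchAppliesTTL() throws Throwable
{
    createTable("CREATE TABLE %s (k int, c int, v int, PRIMARY KEY (k, c))");
    String table = KEYSPACE + "." + currentTable();
    execute("BEGIN BATCH " +
            "INSERT INTO " + table + " (k, c, v) VALUES (1, 0, 0) USING TTL 100; " +
            "INSERT INTO " + table + " (k, c, v) VALUES (1, 1, 1) USING TTL 100; " +
            "APPLY BATCH");
    // Both rows from the batch were written; a fuller test would also
    // assert that ttl(v) propagated to each cell.
    assertRows(execute("SELECT k, c, v FROM %s"), row(1, 0, 0), row(1, 1, 1));
}
{code}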





cassandra-dtest git commit: Update ccm to 2.8.4

2017-09-06 Thread mshuler
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 6d77ace53 -> 926903858


Update ccm to 2.8.4


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/92690385
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/92690385
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/92690385

Branch: refs/heads/master
Commit: 92690385868525e1c1f76b33afa0453214ac731b
Parents: 6d77ace
Author: Philip Thompson 
Authored: Wed Sep 6 19:48:38 2017 +0200
Committer: Michael Shuler 
Committed: Wed Sep 6 12:57:12 2017 -0500

--
 requirements.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/92690385/requirements.txt
--
diff --git a/requirements.txt b/requirements.txt
index 058ea38..a939dcd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,7 +4,7 @@
 futures
 six
 -e 
git+https://github.com/datastax/python-driver.git@cassandra-test#egg=cassandra-driver
-ccm==2.8.1
+ccm==2.8.4
 cql
 decorator
 docopt



[jira] [Commented] (CASSANDRA-13004) Corruption while adding/removing a column to/from the table

2017-09-06 Thread Abhishek Darak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155748#comment-16155748
 ] 

Abhishek Darak commented on CASSANDRA-13004:


I mean that upgrading to the fixed version will not result in corruption of any 
new data, correct? I am not asking about the already existing corrupted data.

> Corruption while adding/removing a column to/from the table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stanislav Vishnevskiy
>Assignee: Alex Petrov
>Priority: Blocker
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
> id bigint,
> guild_id bigint,
> type tinyint,
> name text,
> topic text,
> position int,
> owner_id bigint,
> icon_hash text,
> recipients map<bigint, frozen<channel_recipient>>,
> permission_overwrites map<bigint, frozen<channel_permission_overwrite>>,
> bitrate int,
> user_limit int,
> last_pin_timestamp timestamp,
> last_message_id bigint,
> PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (as far as we can tell) got corrupted at the same time and could 
> no longer be read from the Python driver. 
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra. 
> ver(4); flags(); stream(27); op(8); offset(9); len(887); buffer: 
> '\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02\x00\x07bitrate\x00\t\x00\x08guild_id\x00\x02\x00\ticon_hash\x00\r\x00\x0flast_message_id\x00\x02\x00\x12last_pin_timestamp\x00\x0b\x00\x04name\x00\r\x00\x08owner_id\x00\x02\x00\x15permission_overwrites\x00!\x00\x02\x000\x00\x10discord_channels\x00\x1cchannel_permission_overwrite\x00\x04\x00\x02id\x00\x02\x00\x04type\x00\t\x00\x06allow_\x00\t\x00\x04deny\x00\t\x00\x08position\x00\t\x00\nrecipients\x00!\x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01\x00\x04nick\x00\r\x00\x05topic\x00\r\x00\x04type\x00\x14\x00\nuser_limit\x00\t\x00\x00\x00\x01\x00\x00\x00\x08\x03\x8a\x19\x8e\xf8\x82\x00\x01\xff\xff\xff\xff\x00\x00\x00\x04\x00\x00\xfa\x00\x00\x00\x00\x08\x00\x00\xfa\x00\x00\xf8G\xc5\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8b\xc0\xb5nB\x00\x02\x00\x00\x00\x08G\xc5\xffI\x98\xc4\xb4(\x00\x00\x00\x03\x8b\xc0\xa8\xff\xff\xff\xff\x00\x00\x01<\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
>  
> \x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00'
> {code}
> And then in cqlsh when trying to read the row we got this. 
> {code:none}
> /usr/bin/cqlsh.py:632: DateOverFlowWarning: Some timestamps are larger than 
> Python datetime can represent. Timestamps are displayed in milliseconds from 
> epoch.
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1301, in perform_simple_statement
> result = future.result()
>   File 
> 

[jira] [Updated] (CASSANDRA-12992) When MapReduce creates sstables and loads them into a Cassandra cluster, then dropping the table leaves many data files not moved to the snapshot

2017-09-06 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12992:
---
Description: 
When MapReduce creates sstables and loads them into the Cassandra cluster, then 
dropping the table leaves many data files that are not moved to the snapshot.

nodetool clearsnapshot cannot free the disk space;

we must delete the files manually.


cassandra table schema:
{code}

CREATE TABLE test.st_platform_api_restaurant_export (
id_date text PRIMARY KEY,
dt text,
eleme_order_total double,
order_amt bigint,
order_date text,
restaurant_id int,
total double
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'restaurant'
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 2592000
AND gc_grace_seconds = 1800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
{code}


mapreduce job:
{code}

CREATE EXTERNAL TABLE st_platform_api_restaurant_export_h2c_sstable
(
id_date string,
order_amt bigint,
total double,
eleme_order_total double,
order_date string,
restaurant_id int,
dt string)  STORED BY 
'org.apache.hadoop.hive.cassandra.bulkload.CqlBulkStorageHandler'
TBLPROPERTIES (
'cassandra.output.keyspace.username' = 'cassandra',
'cassandra.output.keyspace'='test',
'cassandra.output.partitioner.class'='org.apache.cassandra.dht.Murmur3Partitioner',
'cassandra.output.keyspace.passwd'='cassandra',
'mapreduce.output.basename'='st_platform_api_restaurant_export',
'cassandra.output.thrift.address'='cassandra cluster ips',
'cassandra.output.delete.source'='true',
'cassandra.columnfamily.insert.st_platform_api_restaurant_export'='insert into 
test.st_platform_api_restaurant_export(id_date,order_amt,total,eleme_order_total,order_date,restaurant_id,dt)values(?,?,?,?,?,?,?)',
'cassandra.columnfamily.schema.st_platform_api_restaurant_export'='CREATE TABLE 
test.st_platform_api_restaurant_export (id_date text PRIMARY KEY,dt 
text,eleme_order_total double,order_amt bigint,order_date text,restaurant_id 
int,total double)');
{code}

  was:
{code}
When MapReduce creates sstables and loads them into the Cassandra cluster, then 
dropping the table leaves many data files that are not moved to the snapshot.

nodetool clearsnapshot cannot free the disk space;

we must delete the files manually.


cassandra table schema:

CREATE TABLE test.st_platform_api_restaurant_export (
id_date text PRIMARY KEY,
dt text,
eleme_order_total double,
order_amt bigint,
order_date text,
restaurant_id int,
total double
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'restaurant'
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 2592000
AND gc_grace_seconds = 1800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


mapreduce job:
CREATE EXTERNAL TABLE st_platform_api_restaurant_export_h2c_sstable
(
id_date string,
order_amt bigint,
total double,
eleme_order_total double,
order_date string,
restaurant_id int,
dt string)  STORED BY 
'org.apache.hadoop.hive.cassandra.bulkload.CqlBulkStorageHandler'
TBLPROPERTIES (
'cassandra.output.keyspace.username' = 'cassandra',
'cassandra.output.keyspace'='test',
'cassandra.output.partitioner.class'='org.apache.cassandra.dht.Murmur3Partitioner',
'cassandra.output.keyspace.passwd'='cassandra',
'mapreduce.output.basename'='st_platform_api_restaurant_export',
'cassandra.output.thrift.address'='cassandra cluster ips',
'cassandra.output.delete.source'='true',
'cassandra.columnfamily.insert.st_platform_api_restaurant_export'='insert into 
test.st_platform_api_restaurant_export(id_date,order_amt,total,eleme_order_total,order_date,restaurant_id,dt)values(?,?,?,?,?,?,?)',
'cassandra.columnfamily.schema.st_platform_api_restaurant_export'='CREATE TABLE 
test.st_platform_api_restaurant_export (id_date text PRIMARY KEY,dt 
text,eleme_order_total double,order_amt bigint,order_date text,restaurant_id 
int,total double)');
{code}


> When MapReduce creates sstables and loads them into a Cassandra cluster, then 
> dropping the table leaves many data files not moved to the snapshot
> 

[jira] [Commented] (CASSANDRA-13004) Corruption while adding/removing a column to/from the table

2017-09-06 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155689#comment-16155689
 ] 

Jeff Jirsa commented on CASSANDRA-13004:


There's no way of knowing if the corrupt sstable you're seeing is the result of 
13004 or some other issue (like bad hardware / RAM / etc). The best explanation 
to date is https://gist.github.com/ifesdjeen/9cacb1ccd934374f707125d78f2fbcb6 - 
you should consult that if you're unsure of what to do (the short answer is 
that those sstables may be permanently damaged, upgrading will not fix the 
already corrupt data on disk).


> Corruption while adding/removing a column to/from the table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stanislav Vishnevskiy
>Assignee: Alex Petrov
>Priority: Blocker
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
> id bigint,
> guild_id bigint,
> type tinyint,
> name text,
> topic text,
> position int,
> owner_id bigint,
> icon_hash text,
> recipients map<bigint, frozen<channel_recipient>>,
> permission_overwrites map<bigint, frozen<channel_permission_overwrite>>,
> bitrate int,
> user_limit int,
> last_pin_timestamp timestamp,
> last_message_id bigint,
> PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (as far as we can tell) got corrupted at the same time and could 
> no longer be read from the Python driver. 
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra. 
> ver(4); flags(); stream(27); op(8); offset(9); len(887); buffer: 
> '\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02\x00\x07bitrate\x00\t\x00\x08guild_id\x00\x02\x00\ticon_hash\x00\r\x00\x0flast_message_id\x00\x02\x00\x12last_pin_timestamp\x00\x0b\x00\x04name\x00\r\x00\x08owner_id\x00\x02\x00\x15permission_overwrites\x00!\x00\x02\x000\x00\x10discord_channels\x00\x1cchannel_permission_overwrite\x00\x04\x00\x02id\x00\x02\x00\x04type\x00\t\x00\x06allow_\x00\t\x00\x04deny\x00\t\x00\x08position\x00\t\x00\nrecipients\x00!\x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01\x00\x04nick\x00\r\x00\x05topic\x00\r\x00\x04type\x00\x14\x00\nuser_limit\x00\t\x00\x00\x00\x01\x00\x00\x00\x08\x03\x8a\x19\x8e\xf8\x82\x00\x01\xff\xff\xff\xff\x00\x00\x00\x04\x00\x00\xfa\x00\x00\x00\x00\x08\x00\x00\xfa\x00\x00\xf8G\xc5\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8b\xc0\xb5nB\x00\x02\x00\x00\x00\x08G\xc5\xffI\x98\xc4\xb4(\x00\x00\x00\x03\x8b\xc0\xa8\xff\xff\xff\xff\x00\x00\x01<\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
>  
> \x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00'
> {code}
> And then in cqlsh when trying to read the row we got this. 
> {code:none}
> /usr/bin/cqlsh.py:632: 

[jira] [Commented] (CASSANDRA-13703) Using min_compress_ratio <= 1 causes corruption

2017-09-06 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155665#comment-16155665
 ] 

Branimir Lambov commented on CASSANDRA-13703:
-

Rebased and updated the branch, reducing the visibility of {{shouldCheckCrc}} 
and adding a comment and {{VisibleForTesting}} annotations to make it clear why 
the shorthand methods use inconsistent parameters.
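
For readers following along, the shape of that cleanup is along these lines (an illustrative sketch only; the actual field and class live in the compressed-chunk reading code, not necessarily under these names):

{code}
// Illustrative sketch of the visibility cleanup described above, not the patch.
import com.google.common.annotations.VisibleForTesting;

class ChunkReader
{
    // Reduced from public to package-private: production code derives this
    // from the compression parameters; only tests need to set it directly.
    @VisibleForTesting
    boolean shouldCheckCrc;

    // Shorthand used by tests, hence the otherwise inconsistent parameters.
    @VisibleForTesting
    ChunkReader withCrcChecks(boolean check)
    {
        shouldCheckCrc = check;
        return this;
    }
}
{code}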

> Using min_compress_ratio <= 1 causes corruption
> ---
>
> Key: CASSANDRA-13703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13703
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: patch
>
>
> This is because chunks written uncompressed end up below the compressed size 
> threshold. Demonstrated by applying the attached patch meant to improve the 
> testing of the 10520 changes, and running 
> {{CompressedSequentialWriterTest.testLZ4Writer}}.
> The default {{min_compress_ratio: 0}} is not affected as it never writes 
> uncompressed.




[jira] [Commented] (CASSANDRA-11363) High Blocked NTR When Connecting

2017-09-06 Thread sadagopan kalyanaraman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155610#comment-16155610
 ] 

sadagopan kalyanaraman commented on CASSANDRA-11363:


Thanks, it worked!

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: T Jake Luciani
> Fix For: 2.1.16, 2.2.8, 3.0.10, 3.10
>
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack, 
> max_queued_ntr_property.txt, thread-queue-2.1.txt
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there are between 600 and 800 client connections on each machine, 
> and each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.




[jira] [Updated] (CASSANDRA-13692) CompactionAwareWriter_getWriteDirectory throws incompatible exceptions

2017-09-06 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-13692:
--
Reviewer: Joel Knighton

> CompactionAwareWriter_getWriteDirectory throws incompatible exceptions
> --
>
> Key: CASSANDRA-13692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13692
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Hao Zhong
>Assignee: Dimitar Dimitrov
>  Labels: lhf
> Attachments: c13692-2.2-dtest-results.PNG, 
> c13692-2.2-testall-results.PNG, c13692-3.0-dtest-results.PNG, 
> c13692-3.0-dtest-results-updated.PNG, c13692-3.0-testall-results.PNG, 
> c13692-3.11-dtest-results.PNG, c13692-3.11-dtest-results-updated.PNG, 
> c13692-3.11-testall-results.PNG, c13692-dtest-results.PNG, 
> c13692-testall-results.PNG
>
>
> The CompactionAwareWriter_getWriteDirectory throws RuntimeException:
> {code}
> public Directories.DataDirectory getWriteDirectory(Iterable<SSTableReader> 
> sstables, long estimatedWriteSize)
> {
> File directory = null;
> for (SSTableReader sstable : sstables)
> {
> if (directory == null)
> directory = sstable.descriptor.directory;
> if (!directory.equals(sstable.descriptor.directory))
> {
> logger.trace("All sstables not from the same disk - putting 
> results in {}", directory);
> break;
> }
> }
> Directories.DataDirectory d = 
> getDirectories().getDataDirectoryForFile(directory);
> if (d != null)
> {
> long availableSpace = d.getAvailableSpace();
> if (availableSpace < estimatedWriteSize)
> throw new RuntimeException(String.format("Not enough space to 
> write %s to %s (%s available)",
>  
> FBUtilities.prettyPrintMemory(estimatedWriteSize),
>  d.location,
>  
> FBUtilities.prettyPrintMemory(availableSpace)));
> logger.trace("putting compaction results in {}", directory);
> return d;
> }
> d = getDirectories().getWriteableLocation(estimatedWriteSize);
> if (d == null)
> throw new RuntimeException(String.format("Not enough disk space 
> to store %s",
>  
> FBUtilities.prettyPrintMemory(estimatedWriteSize)));
> return d;
> }
> {code}
> However, the thrown exception does not trigger the failure policy. 
> CASSANDRA-11448 fixed a similar problem. The buggy code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
> Directories.DataDirectory directory = 
> getDirectories().getWriteableLocation(writeSize);
> if (directory == null)
> throw new RuntimeException("Insufficient disk space to write " + 
> writeSize + " bytes");
> return directory;
> }
> {code}
> The fixed code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
> Directories.DataDirectory directory = 
> getDirectories().getWriteableLocation(writeSize);
> if (directory == null)
> throw new FSWriteError(new IOException("Insufficient disk space 
> to write " + writeSize + " bytes"), "");
> return directory;
> }
> {code}
> The fixed code throws an FSWriteError and triggers the failure policy.




[jira] [Commented] (CASSANDRA-13004) Corruption while adding/removing a column to/from the table

2017-09-06 Thread Abhishek Darak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155584#comment-16155584
 ] 

Abhishek Darak commented on CASSANDRA-13004:


We are on Cassandra 3.0.7 and we are hitting an issue related to data 
corruption; not the one listed in the description, I believe, but similar to 
the one commented by Nimi Wariboko Jr on Feb 7, 2017 at 03:46.

Here is one of the errors we got related to data corruption. Any insights into 
when this error arises? Also, would upgrading to the fix versions resolve this 
issue? Please let us know.

AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-4,5,main]: {} 
{color:red}java.lang.RuntimeException: 
org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
/apps/cassandra/data/wfs_edm_locations/location-6c589dd1b73711e6879d497285de6740/mb-48-big-Data.db{color} 
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2449)
 ~[cassandra-all-3.0.7.1159.jar:3.0.7.1159] 
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_74] 
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[cassandra-all-3.0.7.1159.jar:3.0.7.1159] 
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [cassandra-all-3.0.7.1159.jar:3.0.7.1159] 
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-3.0.7.1159.jar:3.0.7.1159] 
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74] 
Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
/apps/cassandra/data/wfs_edm_locations/location-6c589dd1b73711e6879d497285de6740/mb-48-big-Data.db
 
at 
org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:353)
 ~[cassandra-all-3.0.7.1159.jar:3.0.7.1159] 
at 
org.apache.cassandra.db.filter.ClusteringIndexNamesFilter$1.hasNext(ClusteringIndexNamesFilter.java:145)
 ~[cassandra-all-3.0.7.1159.jar:3.0.7.1159] 
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator

> Corruption while adding/removing a column to/from the table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stanislav Vishnevskiy
>Assignee: Alex Petrov
>Priority: Blocker
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
> id bigint,
> guild_id bigint,
> type tinyint,
> name text,
> topic text,
> position int,
> owner_id bigint,
> icon_hash text,
> recipients map<bigint, frozen<channel_recipient>>,
> permission_overwrites map<bigint, frozen<channel_permission_overwrite>>,
> bitrate int,
> user_limit int,
> last_pin_timestamp timestamp,
> last_message_id bigint,
> PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (as far as we can tell) got corrupted at the same time and could 
> no longer be read from the Python driver. 
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra. 
> ver(4); flags(); stream(27); op(8); offset(9); len(887); buffer: 
> 

[jira] [Commented] (CASSANDRA-13787) RangeTombstoneMarker and ParitionDeletion is not properly included in MV

2017-09-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155526#comment-16155526
 ] 

ZhaoYang commented on CASSANDRA-13787:
--

| source |  test  |   dtest |
| [trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-trunk] 
| [passed|https://circleci.com/gh/jasonstack/cassandra/571] | (off) trunk dtest 
hangs due to netty messaging... |
| [3.11|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.11] | 
[passed|https://circleci.com/gh/jasonstack/cassandra/573] | 
batch_test.TestBatch.batchlog_replay_compatibility_1_test
batch_test.TestBatch.batchlog_replay_compatibility_4_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test |
| [3.0|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.0] | 
[passed|https://circleci.com/gh/jasonstack/cassandra/572] | these tests failed 
on cassandra-3.0 as well:
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
global_row_key_cache_test.TestGlobalRowKeyCache.functional_test
batch_test.TestBatch.batchlog_replay_compatibility_1_test
batch_test.TestBatch.batchlog_replay_compatibility_4_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test|

> RangeTombstoneMarker and ParitionDeletion is not properly included in MV
> 
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones. 
> 1. A Range-Tombstone-Marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was applied as a deleted row to 
> shadow new updates, which works correctly.
> After the range tombstone was flushed, it was applied as a 
> RangeTombstoneMarker and skipped after shadowing the first update. The bound 
> of the RangeTombstoneMarker seems wrong: it contained a full clustering, but 
> it should contain a range, or there should be multiple RangeTombstoneMarkers 
> for the multiple slices (aka. the new updates).
> -2. The partition tombstone is not used when there is no existing live data, 
> so it will resurrect deleted cells. It was found in 11500 and included in 
> that patch.- (Merged in CASSANDRA-11500)
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> 

[jira] [Commented] (CASSANDRA-13530) GroupCommitLogService

2017-09-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155485#comment-16155485
 ] 

Ariel Weisberg commented on CASSANDRA-13530:


{quote}I'm sorry from outside.
What do you mean `That documentation in the YAML looks wrong to me.` ?
In the apache doc, it also states 2ms is the default value.
http://cassandra.apache.org/doc/latest/configuration/cassandra_config_file.html
I'm not really sure what you are trying to say here, 
but the current batch is not working as expected as I described below, and it 
is not very useful.
https://issues.apache.org/jira/browse/CASSANDRA-12864{quote}
I am agreeing with Benedict that the documentation everywhere is wrong.

{quote}
Even if `commitlog_sync_batch_window_in_ms` is set a big number, 
it is the maximum length of time that queries may be batched together for, not 
the minimum,
so, it is pretty nondeterministic and the behavior is not predictable.{quote}
Predictability isn't the goal, though; the goal is the lowest average latency 
and lowest P99 (or whatever). More variability but a lower average and P99 is 
still better.

{quote}You can't really balance between latency and throughput.{quote}
Is this true? Fsync latency has a fixed cost as well as a variable cost that is 
linear with the amount of data being written. So calling fsync as often as 
possible with whatever data is available seems like a reasonable strategy if 
you have a dedicated device that is doing nothing but waiting for the commit 
log to flush.
 
This should balance latency and throughput at any load level. The more 
throughput you have, the more latency you have, as the fsyncs take a little 
longer. The less throughput you have, the less latency you have, as the fsyncs 
are a little faster. But either way the latency is determined by the 
willingness of the device to sync data and not by a hard-coded configuration 
which may or may not be optimal. Devices also don't have predictable fsync 
latency over time. As they fill up, run out of erase blocks, or are contended 
by other IO, the optimal batch size may change.

We see this effect at low concurrency, where I expect it to be pronounced. 
What's surprising is that we also see worse throughput as the op rate 
increases; I would expect the batches to grow as fsync latency increases until 
they converge on the optimal batch size for the device.


> GroupCommitLogService
> -
>
> Key: CASSANDRA-13530
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13530
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuji Ito
>Assignee: Yuji Ito
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
> Attachments: groupCommit22.patch, groupCommit30.patch, 
> groupCommit3x.patch, groupCommitLog_noSerial_result.xlsx, 
> groupCommitLog_result.xlsx, GuavaRequestThread.java, MicroRequestThread.java
>
>
> I propose a new CommitLogService, GroupCommitLogService, to improve 
> throughput when lots of requests are received.
> It improved throughput by a maximum of 94%.
> I'd like to discuss this CommitLogService.
> Currently, we can select either of 2 CommitLog services: Periodic and Batch.
> In Periodic, we might lose some commit log data which hasn't been written to 
> the disk.
> In Batch, the commit log is written to the disk every time. The size of each 
> commit log write is very small (< 4KB). Under high concurrency, these writes 
> are gathered and persisted to the disk at once. But under insufficient 
> concurrency, many small writes are issued and performance decreases due to 
> the latency of the disk. Even on an SSD, processing many IO commands 
> decreases performance.
> GroupCommitLogService writes several commit log entries to the disk at once.
> The patch adds GroupCommitLogService (it is enabled by setting 
> `commitlog_sync` and `commitlog_sync_group_window_in_ms` in cassandra.yaml).
> The only difference from Batch is waiting for the semaphore.
> By waiting for the semaphore, several commit log writes are executed at the 
> same time.
> In GroupCommitLogService, the latency becomes worse if there is no 
> concurrency.
> I measured the performance with my microbenchmark (MicroRequestThread.java) 
> by increasing the number of threads. The cluster has 3 nodes (replication 
> factor: 3). Each node is an AWS EC2 m4.large instance + a 200 IOPS io1 
> volume.
> The result is below. The GroupCommitLogService with a 10ms window improved 
> updates with Paxos by 94% and selects with Paxos by 76%.
> h6. SELECT / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|192|103|
> |2|163|212|
> |4|264|416|
> |8|454|800|
> |16|744|1311|
> |32|1151|1481|
> |64|1767|1844|
> |128|2949|3011|
> |256|4723|5000|
> h6. UPDATE / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|45|26|
> |2|39|51|
> |4|58|102|
> |8|102|198|
> |16|167|213|
> 

[jira] [Commented] (CASSANDRA-13530) GroupCommitLogService

2017-09-06 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155467#comment-16155467
 ] 

Ariel Weisberg commented on CASSANDRA-13530:


One problem is that all the measurements with batch were taken with a small 
sync window, which could be introducing extra syncs that hurt performance. I 
don't think it's likely, but I want to make sure.

The second issue is that it's not clear why one implementation is faster than 
the other, and it's kind of unexpected that batch is slower than group. I want 
to find out why batch is slower than group so we know it's not just a bug in 
the batch implementation.

The third issue is that group isn't faster under every scenario. If it were 
clearly always faster I wouldn't dig deeper, but I don't want people who use 
group to have to make the tradeoff if we can fix the performance of batch. It's 
also not about the amount of code so much as it is adding another tuning and 
configuration choice for users.

Both approaches batch together several writes before calling sync. The 
difference is how they construct the batches. One is based on fixed time 
interval and the other is based on fsync latency. Fsync latency should yield 
the optimal batch size that is as low latency as possible under every level of 
throughput with zero configuration tuning.
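
To make the comparison concrete, the two approaches reduce to roughly the following loops (a minimal sketch with invented helper names; not the actual CommitLogService implementations):

{code}
// Sketch contrasting the two batching strategies discussed above.
abstract class SyncStrategy
{
    abstract void run() throws InterruptedException;
    void awaitWrites() throws InterruptedException {}   // block until a write is pending
    void awaitWrites(long ms) throws InterruptedException { Thread.sleep(ms); }
    void fsync() {}                                     // sync everything accumulated so far
    void signalWaitingWriters() {}
}

class GroupSync extends SyncStrategy    // fixed time interval: one fsync per window
{
    long windowMs = 10;                 // e.g. commitlog_sync_group_window_in_ms
    void run() throws InterruptedException
    {
        while (true)
        {
            awaitWrites(windowMs);      // let writes accumulate for the whole window
            fsync();
            signalWaitingWriters();
        }
    }
}

class BatchSync extends SyncStrategy    // fsync-latency driven: sync as soon as possible
{
    void run() throws InterruptedException
    {
        while (true)
        {
            awaitWrites();              // return as soon as anything is pending
            fsync();                    // batch = whatever arrived during the last fsync
            signalWaitingWriters();
        }
    }
}
{code}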

The only way for group to be better is if batch is calling fsync when it 
shouldn't or with a small number of operations (why?), or if calling fsync as 
soon as possible is somehow killing the throughput of the device.

This is dragging on a bit just because of coordination issues over what and how 
to measure. [~yuji], thank you for doing the work; I really do appreciate you 
being so thorough.

> GroupCommitLogService
> -
>
> Key: CASSANDRA-13530
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13530
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuji Ito
>Assignee: Yuji Ito
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
> Attachments: groupCommit22.patch, groupCommit30.patch, 
> groupCommit3x.patch, groupCommitLog_noSerial_result.xlsx, 
> groupCommitLog_result.xlsx, GuavaRequestThread.java, MicroRequestThread.java
>
>
> I propose a new CommitLogService, GroupCommitLogService, to improve 
> throughput when lots of requests are received.
> It improved throughput by a maximum of 94%.
> I'd like to discuss this CommitLogService.
> Currently, we can select either of 2 CommitLog services: Periodic and Batch.
> In Periodic, we might lose some commit log data which hasn't been written to 
> the disk.
> In Batch, the commit log is written to the disk every time. The size of each 
> commit log write is very small (< 4KB). Under high concurrency, these writes 
> are gathered and persisted to the disk at once. But under insufficient 
> concurrency, many small writes are issued and performance decreases due to 
> the latency of the disk. Even on an SSD, processing many IO commands 
> decreases performance.
> GroupCommitLogService writes several commit log entries to the disk at once.
> The patch adds GroupCommitLogService (it is enabled by setting 
> `commitlog_sync` and `commitlog_sync_group_window_in_ms` in cassandra.yaml).
> The only difference from Batch is waiting for the semaphore.
> By waiting for the semaphore, several commit log writes are executed at the 
> same time.
> In GroupCommitLogService, the latency becomes worse if there is no 
> concurrency.
> I measured the performance with my microbenchmark (MicroRequestThread.java) 
> by increasing the number of threads. The cluster has 3 nodes (replication 
> factor: 3). Each node is an AWS EC2 m4.large instance + a 200 IOPS io1 
> volume.
> The result is below. The GroupCommitLogService with a 10ms window improved 
> updates with Paxos by 94% and selects with Paxos by 76%.
> h6. SELECT / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|192|103|
> |2|163|212|
> |4|264|416|
> |8|454|800|
> |16|744|1311|
> |32|1151|1481|
> |64|1767|1844|
> |128|2949|3011|
> |256|4723|5000|
> h6. UPDATE / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|45|26|
> |2|39|51|
> |4|58|102|
> |8|102|198|
> |16|167|213|
> |32|289|295|
> |64|544|548|
> |128|1046|1058|
> |256|2020|2061|




[jira] [Updated] (CASSANDRA-12992) When MapReduce creates sstables and loads them into a Cassandra cluster, then dropping the table leaves many data files not moved to the snapshot

2017-09-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

翟玉勇 updated CASSANDRA-12992:

Description: 
{code}
When MapReduce creates sstables and loads them into the Cassandra cluster, then 
dropping the table leaves many data files that are not moved to the snapshot.

nodetool clearsnapshot cannot free the disk space;

we must delete the files manually.


cassandra table schema:

CREATE TABLE test.st_platform_api_restaurant_export (
id_date text PRIMARY KEY,
dt text,
eleme_order_total double,
order_amt bigint,
order_date text,
restaurant_id int,
total double
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'restaurant'
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 2592000
AND gc_grace_seconds = 1800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


mapreduce job:
CREATE EXTERNAL TABLE st_platform_api_restaurant_export_h2c_sstable
(
id_date string,
order_amt bigint,
total double,
eleme_order_total double,
order_date string,
restaurant_id int,
dt string)  STORED BY 
'org.apache.hadoop.hive.cassandra.bulkload.CqlBulkStorageHandler'
TBLPROPERTIES (
'cassandra.output.keyspace.username' = 'cassandra',
'cassandra.output.keyspace'='test',
'cassandra.output.partitioner.class'='org.apache.cassandra.dht.Murmur3Partitioner',
'cassandra.output.keyspace.passwd'='cassandra',
'mapreduce.output.basename'='st_platform_api_restaurant_export',
'cassandra.output.thrift.address'='cassandra cluster ips',
'cassandra.output.delete.source'='true',
'cassandra.columnfamily.insert.st_platform_api_restaurant_export'='insert into 
test.st_platform_api_restaurant_export(id_date,order_amt,total,eleme_order_total,order_date,restaurant_id,dt)values(?,?,?,?,?,?,?)',
'cassandra.columnfamily.schema.st_platform_api_restaurant_export'='CREATE TABLE 
test.st_platform_api_restaurant_export (id_date text PRIMARY KEY,dt 
text,eleme_order_total double,order_amt bigint,order_date text,restaurant_id 
int,total double)');
{code}

  was:
When MapReduce creates sstables and loads them into the Cassandra cluster, then 
dropping the table leaves many data files that are not moved to the snapshot.

nodetool clearsnapshot cannot free the disk space;

we must delete the files manually.


cassandra table schema:

CREATE TABLE test.st_platform_api_restaurant_export (
id_date text PRIMARY KEY,
dt text,
eleme_order_total double,
order_amt bigint,
order_date text,
restaurant_id int,
total double
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'restaurant'
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 2592000
AND gc_grace_seconds = 1800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


mapreduce job:
CREATE EXTERNAL TABLE st_platform_api_restaurant_export_h2c_sstable
(
id_date string,
order_amt bigint,
total double,
eleme_order_total double,
order_date string,
restaurant_id int,
dt string)  STORED BY 
'org.apache.hadoop.hive.cassandra.bulkload.CqlBulkStorageHandler'
TBLPROPERTIES (
'cassandra.output.keyspace.username' = 'cassandra',
'cassandra.output.keyspace'='test',
'cassandra.output.partitioner.class'='org.apache.cassandra.dht.Murmur3Partitioner',
'cassandra.output.keyspace.passwd'='cassandra',
'mapreduce.output.basename'='st_platform_api_restaurant_export',
'cassandra.output.thrift.address'='cassandra cluster ips',
'cassandra.output.delete.source'='true',
'cassandra.columnfamily.insert.st_platform_api_restaurant_export'='insert into 
test.st_platform_api_restaurant_export(id_date,order_amt,total,eleme_order_total,order_date,restaurant_id,dt)values(?,?,?,?,?,?,?)',
'cassandra.columnfamily.schema.st_platform_api_restaurant_export'='CREATE TABLE 
test.st_platform_api_restaurant_export (id_date text PRIMARY KEY,dt 
text,eleme_order_total double,order_amt bigint,order_date text,restaurant_id 
int,total double)');



> When MapReduce creates sstables and loads them into a Cassandra cluster, then 
> dropping the table leaves many data files not moved to the snapshot
> ---

[jira] [Commented] (CASSANDRA-13782) Cassandra RPM has wrong owner for /usr/share directories

2017-09-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155445#comment-16155445
 ] 

Michael Shuler commented on CASSANDRA-13782:


Yes, for packaging-related fixes like this permissions one, we should push to 
all active branches. {{9369db1dfd}} looks like it was committed to the 
{{cassandra-2.2+}} branches, so I backported those to 2.1. There may be 
version-specific packaging commits, but generally, yes.

> Cassandra RPM has wrong owner for /usr/share directories
> 
>
> Key: CASSANDRA-13782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13782
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: Hannu Kröger
>Assignee: Sasatani Takenobu
>  Labels: lhf
>
> Some Cassandra RPM directories are owned by the cassandra user, against the 
> Fedora packaging guidelines.
> Offending lines: 
> https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L135-L136
> "Permissions on files MUST be set properly. Inside of /usr, files should be 
> owned by root:root unless a more specific user or group is needed for 
> security."
> - 
> https://fedoraproject.org/wiki/Packaging:Guidelines?rd=Packaging/Guidelines#File_Permissions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13782) Cassandra RPM has wrong owner for /usr/share directories

2017-09-06 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155419#comment-16155419
 ] 

Stefan Podkowinski commented on CASSANDRA-13782:


[~mshuler], looking at 9369db1dfd I assume we should commit any 
packaging-related changes to all active branches, regardless of whether they 
are critical or not.

> Cassandra RPM has wrong owner for /usr/share directories
> 
>
> Key: CASSANDRA-13782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13782
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: Hannu Kröger
>Assignee: Sasatani Takenobu
>  Labels: lhf
>
> Some Cassandra RPM directories are owned by the cassandra user, against the 
> Fedora packaging guidelines.
> Offending lines: 
> https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L135-L136
> "Permissions on files MUST be set properly. Inside of /usr, files should be 
> owned by root:root unless a more specific user or group is needed for 
> security."
> - 
> https://fedoraproject.org/wiki/Packaging:Guidelines?rd=Packaging/Guidelines#File_Permissions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13782) Cassandra RPM has wrong owner for /usr/share directories

2017-09-06 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13782:
---
Status: Ready to Commit  (was: Patch Available)

> Cassandra RPM has wrong owner for /usr/share directories
> 
>
> Key: CASSANDRA-13782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13782
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: Hannu Kröger
>Assignee: Sasatani Takenobu
>  Labels: lhf
>
> Some Cassandra RPM directories are owned by the cassandra user, against the 
> Fedora packaging guidelines.
> Offending lines: 
> https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L135-L136
> "Permissions on files MUST be set properly. Inside of /usr, files should be 
> owned by root:root unless a more specific user or group is needed for 
> security."
> - 
> https://fedoraproject.org/wiki/Packaging:Guidelines?rd=Packaging/Guidelines#File_Permissions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13782) Cassandra RPM has wrong owner for /usr/share directories

2017-09-06 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13782:
---
Reviewer: Stefan Podkowinski

> Cassandra RPM has wrong owner for /usr/share directories
> 
>
> Key: CASSANDRA-13782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13782
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Packaging
>Reporter: Hannu Kröger
>Assignee: Sasatani Takenobu
>  Labels: lhf
>
> Some Cassandra RPM directories are owned by the cassandra user, against the 
> Fedora packaging guidelines.
> Offending lines: 
> https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L135-L136
> "Permissions on files MUST be set properly. Inside of /usr, files should be 
> owned by root:root unless a more specific user or group is needed for 
> security."
> - 
> https://fedoraproject.org/wiki/Packaging:Guidelines?rd=Packaging/Guidelines#File_Permissions



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12014) IndexSummary > 2G causes an assertion error

2017-09-06 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12014:

Reviewer: Marcus Eriksson

> IndexSummary > 2G causes an assertion error
> ---
>
> Key: CASSANDRA-12014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0.x, 3.11.x
>
>
> {noformat}
> ERROR [CompactionExecutor:1546280] 2016-06-01 13:21:00,444  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[CompactionExecutor:1546280,1,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.maybeAddEntry(IndexSummaryBuilder.java:171)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.append(SSTableWriter.java:634)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.afterAppend(SSTableWriter.java:179)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:205) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_51]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> {noformat}
> I believe this can be fixed by raising the min_index_interval, but we should 
> have a better method of coping with this than throwing the AE.
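
For context, one plausible mechanism for an assertion like this is a 32-bit offset wrapping once the summary grows past Integer.MAX_VALUE (~2GB) of entries. The demo below illustrates that failure mode under that assumption; it is an illustration, not a confirmed diagnosis of IndexSummaryBuilder:

{code}
// Illustrative only (run with "java -ea"): accumulating byte offsets in a
// Java int wraps negative once the total passes Integer.MAX_VALUE (~2GB),
// the kind of condition an "offset must be >= 0" assertion then catches.
public class IntOffsetOverflowDemo
{
    public static void main(String[] args)
    {
        final int entrySize = 64 * 1024 * 1024; // 64MB per entry
        int offset = 0;                         // int, as a summary offset might be
        long written = 0;                       // long tracks the true byte count
        for (int i = 0; i < 40; i++)
        {
            offset += entrySize;                // int arithmetic silently wraps
            written += entrySize;
            assert offset >= 0 : "int offset wrapped after " + written + " bytes";
        }
    }
}
{code}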



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13530) GroupCommitLogService

2017-09-06 Thread Hiroyuki Yamada (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155319#comment-16155319
 ] 

Hiroyuki Yamada commented on CASSANDRA-13530:
-

Sorry to chime in from the outside.

What do you mean by `That documentation in the YAML looks wrong to me.`?
The Apache doc also states that 2ms is the default value:
http://cassandra.apache.org/doc/latest/configuration/cassandra_config_file.html

I'm not really sure what you are trying to say here, but the current batch 
mode is not working as expected, as described below, and it is not very useful:
https://issues.apache.org/jira/browse/CASSANDRA-12864

Even if `commitlog_sync_batch_window_in_ms` is set to a big number, it is the 
maximum length of time that queries may be batched together for, not the 
minimum, so it is pretty nondeterministic and the behavior is not predictable.
You can't really balance latency against throughput.

On the other hand, GroupCommitLogService makes more sense, actually makes a 
big difference from a performance perspective, and seems to behave quite 
predictably.
Also, it only changes a few lines of code without much complication.

Sorry again for commenting from the outside, but this discussion seems 
unnecessarily long and without clear direction, even though the proposal looks 
pretty good.
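
To make the proposal concrete, here is a minimal sketch of the group-commit idea in Java. It is illustrative only, not the attached patch; all names are made up, and it deliberately ignores the race between an append and an fsync already in flight:

{code}
// Minimal sketch of group commit: writers block on a shared monitor while a
// single syncer thread always sleeps the full window, fsyncs once, and then
// wakes every waiting writer, so many small appends share one disk flush.
class GroupCommitSketch
{
    private final Object signal = new Object();
    private long syncedGeneration = 0;
    private final long windowMs;

    GroupCommitSketch(long windowMs) { this.windowMs = windowMs; }

    // Called by a write thread after appending its mutation to the log
    // buffer: block until the next group sync has completed.
    void awaitSync() throws InterruptedException
    {
        synchronized (signal)
        {
            long target = syncedGeneration + 1;
            while (syncedGeneration < target)
                signal.wait();
        }
    }

    // Run by one background thread for the life of the process.
    void syncLoop() throws InterruptedException
    {
        while (true)
        {
            Thread.sleep(windowMs); // e.g. commitlog_sync_group_window_in_ms: 10
            fsyncCommitLog();       // one flush covers the whole group
            synchronized (signal)
            {
                syncedGeneration++;
                signal.notifyAll();
            }
        }
    }

    private void fsyncCommitLog() { /* stand-in for the real segment sync */ }
}
{code}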

> GroupCommitLogService
> -
>
> Key: CASSANDRA-13530
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13530
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuji Ito
>Assignee: Yuji Ito
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
> Attachments: groupCommit22.patch, groupCommit30.patch, 
> groupCommit3x.patch, groupCommitLog_noSerial_result.xlsx, 
> groupCommitLog_result.xlsx, GuavaRequestThread.java, MicroRequestThread.java
>
>
> I propose a new CommitLogService, GroupCommitLogService, to improve the 
> throughput when lots of requests are received.
> It improved the throughput by up to 94%.
> I'd like to discuss this CommitLogService.
> Currently, we can select either of 2 CommitLog services: Periodic and Batch.
> In Periodic, we might lose some commit log which hasn't been written to the 
> disk.
> In Batch, we can write the commit log to the disk every time. The size of 
> commit log to write is very small (< 4KB). Under high concurrency, these 
> writes are gathered and persisted to the disk at once. But under insufficient 
> concurrency, many small writes are issued and the performance decreases due 
> to the latency of the disk. Even if you use an SSD, processing many IO 
> commands decreases the performance.
> GroupCommitLogService writes several commit logs to the disk at once.
> The patch adds GroupCommitLogService (it is enabled by setting 
> `commitlog_sync` and `commitlog_sync_group_window_in_ms` in cassandra.yaml).
> The only difference from Batch is waiting for the semaphore.
> By waiting for the semaphore, several commit log writes are executed at the 
> same time.
> In GroupCommitLogService, the latency becomes worse if there is no 
> concurrency.
> I measured the performance with my microbench (MicroRequestThread.java) by 
> increasing the number of threads. The cluster has 3 nodes (replication 
> factor: 3). Each node is an AWS EC2 m4.large instance + a 200 IOPS io1 volume.
> The result is as below. The GroupCommitLogService with a 10ms window improved 
> update with Paxos by 94% and select with Paxos by 76%.
> h6. SELECT / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|192|103|
> |2|163|212|
> |4|264|416|
> |8|454|800|
> |16|744|1311|
> |32|1151|1481|
> |64|1767|1844|
> |128|2949|3011|
> |256|4723|5000|
> h6. UPDATE / sec
> ||\# of threads||Batch 2ms||Group 10ms||
> |1|45|26|
> |2|39|51|
> |4|58|102|
> |8|102|198|
> |16|167|213|
> |32|289|295|
> |64|544|548|
> |128|1046|1058|
> |256|2020|2061|



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13471) [PerDiskMemtableFlushWriter_0:1312] 2017-04-25 09:48:14,818 CassandraDaemon.java:226 - Exception in thread Thread[PerDiskMemtableFlushWriter_0:1312,5,main]

2017-09-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155286#comment-16155286
 ] 

Marcus Eriksson commented on CASSANDRA-13471:
-

[~cany123] could you provide some more details on your setup? Partitioner? 
Doing any schema changes when this happens?
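
For anyone triaging: the assertion in the quoted trace below comes from the writer's requirement that partitions arrive in strictly increasing decorated-key order. A toy illustration of that invariant (not the actual BigTableWriter code):

{code}
import java.util.Comparator;

// Toy model of the ordering check in BigTableWriter.beforeAppend: sstables
// are written sequentially, so every appended partition key must sort
// strictly after the previous one; an out-of-order key means whatever feeds
// the flush handed over a mis-sorted stream.
public class SortedAppendCheck
{
    private final Comparator<String> order = Comparator.naturalOrder(); // stand-in for token order
    private String lastWrittenKey = null;                               // stand-in for DecoratedKey

    public void append(String currentKey)
    {
        if (lastWrittenKey != null && order.compare(lastWrittenKey, currentKey) >= 0)
            throw new RuntimeException("Last written key " + lastWrittenKey
                                       + " >= current key " + currentKey);
        lastWrittenKey = currentKey;
        // ... write the partition ...
    }
}
{code}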

> [PerDiskMemtableFlushWriter_0:1312] 2017-04-25 09:48:14,818 
> CassandraDaemon.java:226 - Exception in thread 
> Thread[PerDiskMemtableFlushWriter_0:1312,5,main]
> ---
>
> Key: CASSANDRA-13471
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13471
> Project: Cassandra
>  Issue Type: Bug
> Environment: Centos7.3 x86_64, cassandra 3.9
>Reporter: Chernishev Aleksandr
> Fix For: 3.9
>
>
> With a SASI index on table test.object: 
> {code}
>  CREATE TABLE test.object (
> bname text,
> name text,
> acl text,
> checksum text,
> chunksize bigint,
> contenttype text,
> creationdate timestamp,
> inode uuid,
> lastmodified timestamp,
> metadata map,
> parts map,
> size bigint,
> storageclass text,
> version uuid,
> PRIMARY KEY (bname, name)
> ) WITH CLUSTERING ORDER BY (name ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '128', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 0.5
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> CREATE INDEX inode_index ON test.object (inode);
> {code}
> I get this error (on random servers in the cluster, at random times):
> {code}
> ERROR [PerDiskMemtableFlushWriter_0:1312] 2017-04-25 09:48:14,818 
> CassandraDaemon.java:226 - Exception in thread 
> Thread[PerDiskMemtableFlushWriter_0:1312,5,main]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(d3f60675-e56e-4551-b468-d4e31c8ee82b, 
> d3f60675e56e4551b468d4e31c8ee82b) >= current key 
> DecoratedKey(6a473364-2f43-3876-574d-693461546d73, 
> d3f6355a5bd8415290668f987a15594c) writing into 
> /data3/test/object-f40120c028d111e78e26c57aefc93bac/.inode_index/mc-26-big-Data.db
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.beforeAppend(BigTableWriter.java:122)
>  ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:161)
>  ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.append(SimpleSSTableMultiWriter.java:48)
>  ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:458)
>  ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:493) 
> ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:380) 
> ~[apache-cassandra-3.9.0.jar:3.9.0]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}
> After that, this message repeats in the debug log:
> {code}
>  DEBUG [MemtablePostFlush:405] 2017-04-25 09:48:15,944 
> ColumnFamilyStore.java:936 - forceFlush requested but everything is clean in 
> object
> {code}
> The commitlog is not truncated and keeps growing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13396) Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager

2017-09-06 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155255#comment-16155255
 ] 

Stefano Ortolani commented on CASSANDRA-13396:
--

I had the very same issue when using the {{EmbeddedCassandra}} from 
{{spark-cassandra-connector}}. Moving to {{logback}} fixed the issue. This was 
a bit annoying because I had to exclude all other slf4j implementations 
(there's no way afaik to force one implementation over another when multiple 
ones are loaded). Anyway, anything but a ClassCastException is a better option 
imho.

> Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager
> 
>
> Key: CASSANDRA-13396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13396
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Eugene Fedotov
>Priority: Minor
>
> https://www.mail-archive.com/user@cassandra.apache.org/msg51603.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds

2017-09-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155233#comment-16155233
 ] 

Marcus Eriksson commented on CASSANDRA-13771:
-

Running unit tests here: https://circleci.com/gh/krummas/cassandra/104
and dtests here: 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/284/

> Emit metrics whenever we hit tombstone failures and warn thresholds
> ---
>
> Key: CASSANDRA-13771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13771
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: TIRU ADDANKI
>Assignee: Sarath P S
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 
> 0001-Emit-metrics-whenever-we-hit-tombstone-failures-and-.patch, 13771.patch
>
>
> Many times we see Cassandra timeouts, but unless we check the logs we won’t 
> be able to tell whether the timeouts are the result of too many tombstones or 
> some other issue. It would be easier if we had metrics published whenever we 
> hit tombstone failure/warning thresholds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds

2017-09-06 Thread Sarath P S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155215#comment-16155215
 ] 

Sarath P S edited comment on CASSANDRA-13771 at 9/6/17 11:35 AM:
-

Moved tombstone WARN and FAILURE metrics to TableMetrics; attaching 
0001-Emit-metrics-whenever-we-hit-tombstone-failures-and-.patch


was (Author: sarath.ps):
Moved tombstone WARN and FAILURE metrics to TableMetrics

> Emit metrics whenever we hit tombstone failures and warn thresholds
> ---
>
> Key: CASSANDRA-13771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13771
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: TIRU ADDANKI
>Assignee: Sarath P S
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 
> 0001-Emit-metrics-whenever-we-hit-tombstone-failures-and-.patch, 13771.patch
>
>
> Many times we see Cassandra timeouts, but unless we check the logs we won’t 
> be able to tell whether the timeouts are the result of too many tombstones or 
> some other issue. It would be easier if we had metrics published whenever we 
> hit tombstone failure/warning thresholds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds

2017-09-06 Thread Sarath P S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarath P S updated CASSANDRA-13771:
---
Attachment: 0001-Emit-metrics-whenever-we-hit-tombstone-failures-and-.patch

> Emit metrics whenever we hit tombstone failures and warn thresholds
> ---
>
> Key: CASSANDRA-13771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13771
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: TIRU ADDANKI
>Assignee: Sarath P S
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 
> 0001-Emit-metrics-whenever-we-hit-tombstone-failures-and-.patch, 13771.patch
>
>
> Many times we see Cassandra timeouts, but unless we check the logs we won’t 
> be able to tell whether the timeouts are the result of too many tombstones or 
> some other issue. It would be easier if we had metrics published whenever we 
> hit tombstone failure/warning thresholds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds

2017-09-06 Thread Sarath P S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarath P S updated CASSANDRA-13771:
---
Status: Patch Available  (was: Open)

Moved tombstone WARN and FAILURE metrics to TableMetrics
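
For readers following along, a hypothetical sketch of what per-table counters for these thresholds could look like. The class, field, and metric names are illustrative assumptions, not the attached patch; it only assumes the codahale metrics library Cassandra already uses:

{code}
import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;

// Hypothetical sketch: per-table counters for tombstone warn/failure
// threshold hits, so monitoring can alert without scraping the logs.
public class TombstoneThresholdMetrics
{
    private final Counter tombstoneWarnings;
    private final Counter tombstoneFailures;

    public TombstoneThresholdMetrics(MetricRegistry registry, String keyspace, String table)
    {
        tombstoneWarnings = registry.counter(MetricRegistry.name(keyspace, table, "TombstoneWarnings"));
        tombstoneFailures = registry.counter(MetricRegistry.name(keyspace, table, "TombstoneFailures"));
    }

    // Called from the read path when tombstone_warn_threshold is crossed.
    public void markWarning() { tombstoneWarnings.inc(); }

    // Called just before a TombstoneOverwhelmingException is thrown.
    public void markFailure() { tombstoneFailures.inc(); }
}
{code}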

> Emit metrics whenever we hit tombstone failures and warn thresholds
> ---
>
> Key: CASSANDRA-13771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13771
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: TIRU ADDANKI
>Assignee: Sarath P S
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13771.patch
>
>
> Many times we see Cassandra timeouts, but unless we check the logs we won’t 
> be able to tell whether the timeouts are the result of too many tombstones or 
> some other issue. It would be easier if we had metrics published whenever we 
> hit tombstone failure/warning thresholds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13175) Integrate "Error Prone" Code Analyzer

2017-09-06 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155175#comment-16155175
 ] 

Stefan Podkowinski edited comment on CASSANDRA-13175 at 9/6/17 11:10 AM:
-

We should probably first try to integrate errorprone into build.xml and make it 
run in circle ci. Afterwards we can start filtering out more of the less useful 
warnings and start fixing reasonable warnings in the code until the build is 
clean (probably not in scope of this ticket). 

I tried to give this a shot by creating a separate target for errorprone 2.1.1 
in the build.xml 
[here|https://github.com/spodkowinski/cassandra/tree/WIP-13175]. I also 
configured circleci.yaml to use container 4 for that, but the 
[results|https://circleci.com/gh/spodkowinski/cassandra/121#config/containers/3]
 show a failed build due to an error that I can't reproduce locally. Running 
{{ant errorprone}} on my branch completes successfully with only warnings, 
using either oracle or openjdk. I guess I have to use the same oracle version 
as on circle ci to reproduce. I also haven't encountered the linked github #711 
issue, but it's marked as fixed in 2.1.1 already. 

Edit:
Running ant errorprone with the exact same version as circle ci 
(oracle64-1.8.0.102) still completes fine locally on my linux machine.


was (Author: spo...@gmail.com):
We should probably first try to integrate errorprone in build.xml and make it 
run in circle ci. Afterwards we can start filtering out more of the less useful 
warnings and start fixing reasonable warnings in the code until we get clean 
(probably not in scope of this ticket). 

I tried to give this a shot by creating a separate target for errorprone 2.1.1 
in the build.xml 
[here|https://github.com/spodkowinski/cassandra/tree/WIP-13175]. I also 
configured circleci.yaml to use container 4 for that, but 
[results|https://circleci.com/gh/spodkowinski/cassandra/121#config/containers/3]
 show a failed build due to an error that I can't reproduce locally. Running 
{{ant errorprone}} on my branch completes successfully with only warnings using 
both oracle or openjdk. Guess I have to use the same oracle version as on 
circle ci to reproduce. I also haven't encountered linked github #711 issue, 
but it's marked as fixed with 2.1.1 already. 

> Integrate "Error Prone" Code Analyzer
> -
>
> Key: CASSANDRA-13175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13175
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Attachments: 0001-Add-Error-Prone-code-analyzer.patch, 
> checks-2_2.out, checks-3_0.out, checks-trunk.out
>
>
> I've been playing with [Error Prone|http://errorprone.info/], integrating 
> it into the build process to see what kind of warnings it would produce. 
> So far I'm positively impressed by the coverage and usefulness of some of the 
> implemented checks. See attachments for results.
> Unfortunately there are still some issues with how the analyzer is affecting 
> generated code and the guava versions used, see 
> [#492|https://github.com/google/error-prone/issues/492]. In case those issues 
> have been solved and the resulting code isn't affected by the analyzer, I'd 
> suggest adding it to trunk with warn-only behaviour and some less useful 
> checks disabled. Alternatively a new ant target could be added, maybe with 
> build-breaking checks and CI integration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13175) Integrate "Error Prone" Code Analyzer

2017-09-06 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155175#comment-16155175
 ] 

Stefan Podkowinski commented on CASSANDRA-13175:


We should probably first try to integrate errorprone into build.xml and make it 
run in circle ci. Afterwards we can start filtering out more of the less useful 
warnings and start fixing reasonable warnings in the code until the build is 
clean (probably not in scope of this ticket). 

I tried to give this a shot by creating a separate target for errorprone 2.1.1 
in the build.xml 
[here|https://github.com/spodkowinski/cassandra/tree/WIP-13175]. I also 
configured circleci.yaml to use container 4 for that, but the 
[results|https://circleci.com/gh/spodkowinski/cassandra/121#config/containers/3]
 show a failed build due to an error that I can't reproduce locally. Running 
{{ant errorprone}} on my branch completes successfully with only warnings, 
using either oracle or openjdk. I guess I have to use the same oracle version 
as on circle ci to reproduce. I also haven't encountered the linked github #711 
issue, but it's marked as fixed in 2.1.1 already. 
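
For anyone unfamiliar with the tool, here is the kind of bug Error Prone flags at compile time (with the corresponding SelfAssignment and ReferenceEquality checks enabled) while plain javac stays silent. The class below is illustrative, not from the Cassandra codebase:

{code}
// Both methods compile cleanly under plain javac; Error Prone rejects them.
public class ErrorProneExamples
{
    private int count;

    void update(int count)
    {
        count = count;  // SelfAssignment: the parameter overwrites itself; the field is untouched
    }

    boolean sameName(String a, String b)
    {
        return a == b;  // ReferenceEquality: reference comparison where equals() is almost always meant
    }
}
{code}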

> Integrate "Error Prone" Code Analyzer
> -
>
> Key: CASSANDRA-13175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13175
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Attachments: 0001-Add-Error-Prone-code-analyzer.patch, 
> checks-2_2.out, checks-3_0.out, checks-trunk.out
>
>
> I've been playing with [Error Prone|http://errorprone.info/], integrating 
> it into the build process to see what kind of warnings it would produce. 
> So far I'm positively impressed by the coverage and usefulness of some of the 
> implemented checks. See attachments for results.
> Unfortunately there are still some issues with how the analyzer is affecting 
> generated code and the guava versions used, see 
> [#492|https://github.com/google/error-prone/issues/492]. In case those issues 
> have been solved and the resulting code isn't affected by the analyzer, I'd 
> suggest adding it to trunk with warn-only behaviour and some less useful 
> checks disabled. Alternatively a new ant target could be added, maybe with 
> build-breaking checks and CI integration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13771) Emit metrics whenever we hit tombstone failures and warn thresholds

2017-09-06 Thread Sarath P S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarath P S reassigned CASSANDRA-13771:
--

Assignee: Sarath P S  (was: TIRU ADDANKI)

> Emit metrics whenever we hit tombstone failures and warn thresholds
> ---
>
> Key: CASSANDRA-13771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13771
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: TIRU ADDANKI
>Assignee: Sarath P S
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 13771.patch
>
>
> Many times we see Cassandra timeouts, but unless we check the logs we won’t 
> be able to tell whether the timeouts are the result of too many tombstones or 
> some other issue. It would be easier if we had metrics published whenever we 
> hit tombstone failure/warning thresholds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-12743) Assertion error while running compaction

2017-09-06 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-12743.
-
Resolution: Cannot Reproduce

Ok, thanks, closing as cannot reproduce, please reopen if you see it again

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.<init>(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.<init>(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13692) CompactionAwareWriter_getWriteDirectory throws incompatible exceptions

2017-09-06 Thread Dimitar Dimitrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dimitar Dimitrov updated CASSANDRA-13692:
-
Attachment: c13692-3.0-dtest-results-updated.PNG
c13692-3.11-dtest-results-updated.PNG

Adding updated screenshots from CI dtest results.

> CompactionAwareWriter_getWriteDirectory throws incompatible exceptions
> --
>
> Key: CASSANDRA-13692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13692
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Hao Zhong
>Assignee: Dimitar Dimitrov
>  Labels: lhf
> Attachments: c13692-2.2-dtest-results.PNG, 
> c13692-2.2-testall-results.PNG, c13692-3.0-dtest-results.PNG, 
> c13692-3.0-dtest-results-updated.PNG, c13692-3.0-testall-results.PNG, 
> c13692-3.11-dtest-results.PNG, c13692-3.11-dtest-results-updated.PNG, 
> c13692-3.11-testall-results.PNG, c13692-dtest-results.PNG, 
> c13692-testall-results.PNG
>
>
> The CompactionAwareWriter_getWriteDirectory throws RuntimeException:
> {code}
> public Directories.DataDirectory getWriteDirectory(Iterable<SSTableReader> 
> sstables, long estimatedWriteSize)
> {
> File directory = null;
> for (SSTableReader sstable : sstables)
> {
> if (directory == null)
> directory = sstable.descriptor.directory;
> if (!directory.equals(sstable.descriptor.directory))
> {
> logger.trace("All sstables not from the same disk - putting 
> results in {}", directory);
> break;
> }
> }
> Directories.DataDirectory d = 
> getDirectories().getDataDirectoryForFile(directory);
> if (d != null)
> {
> long availableSpace = d.getAvailableSpace();
> if (availableSpace < estimatedWriteSize)
> throw new RuntimeException(String.format("Not enough space to 
> write %s to %s (%s available)",
>  
> FBUtilities.prettyPrintMemory(estimatedWriteSize),
>  d.location,
>  
> FBUtilities.prettyPrintMemory(availableSpace)));
> logger.trace("putting compaction results in {}", directory);
> return d;
> }
> d = getDirectories().getWriteableLocation(estimatedWriteSize);
> if (d == null)
> throw new RuntimeException(String.format("Not enough disk space 
> to store %s",
>  
> FBUtilities.prettyPrintMemory(estimatedWriteSize)));
> return d;
> }
> {code}
> However, the thrown exception does not trigger the failure policy. 
> CASSANDRA-11448 fixed a similar problem. The buggy code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
> Directories.DataDirectory directory = 
> getDirectories().getWriteableLocation(writeSize);
> if (directory == null)
> throw new RuntimeException("Insufficient disk space to write " + 
> writeSize + " bytes");
> return directory;
> }
> {code}
> The fixed code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
> Directories.DataDirectory directory = 
> getDirectories().getWriteableLocation(writeSize);
> if (directory == null)
> throw new FSWriteError(new IOException("Insufficient disk space 
> to write " + writeSize + " bytes"), "");
> return directory;
> }
> {code}
> The fixed code throws FSWE and triggers the failure policy.
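
Applying the same pattern to the two throw sites in the quoted {{getWriteDirectory(Iterable<SSTableReader>, long)}} would look roughly like the fragment below. This is a sketch under the assumption that the CASSANDRA-11448 fix is the desired model, not the reviewed patch; it reuses the {{FSWriteError(Throwable, String)}} constructor and {{FBUtilities.prettyPrintMemory}} shown in the quoted code:

{code}
// Sketch only: both RuntimeException sites rewritten to FSWriteError so the
// configured disk failure policy is triggered, mirroring CASSANDRA-11448.
if (availableSpace < estimatedWriteSize)
    throw new FSWriteError(new IOException(String.format("Not enough space to write %s to %s (%s available)",
                                                         FBUtilities.prettyPrintMemory(estimatedWriteSize),
                                                         d.location,
                                                         FBUtilities.prettyPrintMemory(availableSpace))),
                           String.valueOf(d.location));

// ...

if (d == null)
    throw new FSWriteError(new IOException(String.format("Not enough disk space to store %s",
                                                         FBUtilities.prettyPrintMemory(estimatedWriteSize))),
                           "");
{code}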



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13692) CompactionAwareWriter_getWriteDirectory throws incompatible exceptions

2017-09-06 Thread Dimitar Dimitrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155061#comment-16155061
 ] 

Dimitar Dimitrov commented on CASSANDRA-13692:
--

Here's the table with the updated test results (in bold, trunk dtest stability 
issues notwithstanding):

| 
[2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...dimitarndimitrov:c13692-2.2]
 | [testall|^c13692-2.2-testall-results.PNG] | 
[dtest|^c13692-2.2-dtest-results.PNG] 
([dtest-baseline|https://cassci.datastax.com/job/cassandra-2.2_dtest/lastCompletedBuild/testReport/])
 |
| 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...dimitarndimitrov:c13692-3.0]
 | [testall|^c13692-3.0-testall-results.PNG] | 
[*dtest*|^c13692-3.0-dtest-results-updated.PNG] 
([*dtest-baseline*|https://cassci.datastax.com/job/cassandra-3.0_dtest/989/testReport/])
 |
| 
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...dimitarndimitrov:c13692-3.11]
 | [testall|^c13692-3.11-testall-results.PNG] 
([testall-baseline|https://cassci.datastax.com/job/cassandra-3.11_testall/lastCompletedBuild/testReport/])
 | [*dtest*|^c13692-3.11-dtest-results-updated.PNG] 
([*dtest-baseline*|https://cassci.datastax.com/job/cassandra-3.11_dtest/165/testReport/])
 |
| 
[trunk|https://github.com/apache/cassandra/compare/trunk...dimitarndimitrov:c13692]
 | [testall|^c13692-testall-results.PNG] | [dtest|^c13692-dtest-results.PNG] 
([dtest-baseline|https://cassci.datastax.com/job/trunk_dtest/lastCompletedBuild/testReport/])
 |

> CompactionAwareWriter_getWriteDirectory throws incompatible exceptions
> --
>
> Key: CASSANDRA-13692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13692
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Hao Zhong
>Assignee: Dimitar Dimitrov
>  Labels: lhf
> Attachments: c13692-2.2-dtest-results.PNG, 
> c13692-2.2-testall-results.PNG, c13692-3.0-dtest-results.PNG, 
> c13692-3.0-testall-results.PNG, c13692-3.11-dtest-results.PNG, 
> c13692-3.11-testall-results.PNG, c13692-dtest-results.PNG, 
> c13692-testall-results.PNG
>
>
> The CompactionAwareWriter_getWriteDirectory throws RuntimeException:
> {code}
> public Directories.DataDirectory getWriteDirectory(Iterable<SSTableReader> 
> sstables, long estimatedWriteSize)
> {
> File directory = null;
> for (SSTableReader sstable : sstables)
> {
> if (directory == null)
> directory = sstable.descriptor.directory;
> if (!directory.equals(sstable.descriptor.directory))
> {
> logger.trace("All sstables not from the same disk - putting 
> results in {}", directory);
> break;
> }
> }
> Directories.DataDirectory d = 
> getDirectories().getDataDirectoryForFile(directory);
> if (d != null)
> {
> long availableSpace = d.getAvailableSpace();
> if (availableSpace < estimatedWriteSize)
> throw new RuntimeException(String.format("Not enough space to 
> write %s to %s (%s available)",
>  
> FBUtilities.prettyPrintMemory(estimatedWriteSize),
>  d.location,
>  
> FBUtilities.prettyPrintMemory(availableSpace)));
> logger.trace("putting compaction results in {}", directory);
> return d;
> }
> d = getDirectories().getWriteableLocation(estimatedWriteSize);
> if (d == null)
> throw new RuntimeException(String.format("Not enough disk space 
> to store %s",
>  
> FBUtilities.prettyPrintMemory(estimatedWriteSize)));
> return d;
> }
> {code}
> However, the thrown exception does not trigger the failure policy. 
> CASSANDRA-11448 fixed a similar problem. The buggy code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
> Directories.DataDirectory directory = 
> getDirectories().getWriteableLocation(writeSize);
> if (directory == null)
> throw new RuntimeException("Insufficient disk space to write " + 
> writeSize + " bytes");
> return directory;
> }
> {code}
> The fixed code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
> Directories.DataDirectory directory = 
> getDirectories().getWriteableLocation(writeSize);
> if (directory == null)
> throw new FSWriteError(new IOException("Insufficient disk space 
> to write " + writeSize + " bytes"), "");
> return directory;
> }
> {code}
> The fixed code throws FSWE and triggers the failure policy.

[jira] [Commented] (CASSANDRA-12743) Assertion error while running compaction

2017-09-06 Thread Jacques-Henri Berthemet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155060#comment-16155060
 ] 

Jacques-Henri Berthemet commented on CASSANDRA-12743:
-

[~krummas] I'm on the same team as [~jbleduigou]; we just had a 60h perf 
testing run without hitting this issue, so it looks like it's not happening anymore.

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.<init>(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.<init>(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12743) Assertion error while running compaction

2017-09-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155043#comment-16155043
 ] 

Marcus Eriksson commented on CASSANDRA-12743:
-

[~jbleduigou] Is this still happening for you?

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>
> While running compaction I sometimes run into an error:
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.<init>(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.<init>(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13818) Add support for --hosts, --force, and subrange repair to incremental repair

2017-09-06 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13818:

Status: Ready to Commit  (was: Patch Available)

> Add support for --hosts, --force, and subrange repair to incremental repair
> ---
>
> Key: CASSANDRA-13818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13818
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> It should be possible to run incremental repair with nodes down; we just 
> shouldn't promote the data to repaired afterwards.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13818) Add support for --hosts, --force, and subrange repair to incremental repair

2017-09-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155024#comment-16155024
 ] 

Marcus Eriksson commented on CASSANDRA-13818:
-

ok, I guess we can change that in the future if it turns out to be a problem

+1

> Add support for --hosts, --force, and subrange repair to incremental repair
> ---
>
> Key: CASSANDRA-13818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13818
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> It should be possible to run incremental repair with nodes down; we just 
> shouldn't promote the data to repaired afterwards.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13843) Debian init shadows CASSANDRA_HEAPDUMP_DIR

2017-09-06 Thread Simon Fontana Oscarsson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154982#comment-16154982
 ] 

Simon Fontana Oscarsson commented on CASSANDRA-13843:
-

I would rather solve this issue with CASSANDRA-13006: instead of letting 
Cassandra create its own heap dumps, leave that job to the JVM. 

> Debian init shadows CASSANDRA_HEAPDUMP_DIR
> --
>
> Key: CASSANDRA-13843
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13843
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Andrew Jorgensen
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 3.11.x, 4.x
>
> Attachments: 0001-Remove-debian-init-setting-heap-dump-file.patch
>
>
> The debian init script sets the heap dump file directly, using the cassandra 
> user's home directory and the -H flag to the cassandra 
> process[1|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/debian/init#L76].
>  The cassandra heap dump location can also be set in the cassandra-env.sh 
> file using CASSANDRA_HEAPDUMP_DIR. Unfortunately the debian init heap dump 
> location is based on the home directory of the cassandra user and cannot 
> easily be changed. Also, if you do `ps aux | grep cassandra` you can clearly 
> see that the -H flag takes precedence over the value found in 
> cassandra-env.sh. This makes it difficult to change the heap dump location 
> for cassandra, and it is non-intuitive, when the value is set in 
> cassandra-env.sh, that the heap dump does not actually end up in the correct 
> place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org