[jira] [Assigned] (CASSANDRA-14496) TWCS erroneously disabling tombstone compactions when unchecked_tombstone_compaction=true

2018-06-13 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves reassigned CASSANDRA-14496:


Assignee: Alexander Ivakov

> TWCS erroneously disabling tombstone compactions when 
> unchecked_tombstone_compaction=true
> -
>
> Key: CASSANDRA-14496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14496
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Robert Tarrall
>Assignee: Alexander Ivakov
>Priority: Minor
>
> This code:
> {code:java}
> this.options = new TimeWindowCompactionStrategyOptions(options);
> if (!options.containsKey(AbstractCompactionStrategy.TOMBSTONE_COMPACTION_INTERVAL_OPTION)
>     && !options.containsKey(AbstractCompactionStrategy.TOMBSTONE_THRESHOLD_OPTION))
> {
>     disableTombstoneCompactions = true;
>     logger.debug("Disabling tombstone compactions for TWCS");
> }
> else
>     logger.debug("Enabling tombstone compactions for TWCS");
> {code}
> ... in TimeWindowCompactionStrategy.java disables tombstone compactions in 
> TWCS if you have not *explicitly* set either tombstone_compaction_interval or 
> tombstone_threshold.  Adding 'tombstone_compaction_interval': '86400' to the 
> compaction stanza in a table definition has the (to me unexpected) side 
> effect of enabling tombstone compactions. 
> This is surprising and does not appear to be mentioned in the docs.
> I would suggest that tombstone compactions should be run unless these options
> are both set to 0.
> If the concern is that (as with DTCS in CASSANDRA-9234) we don't want to
> waste time on tombstone compactions when we expect the tables to eventually
> be expired away, perhaps we should also check unchecked_tombstone_compaction
> and still enable tombstone compactions if that's set to true.
> Might it also make sense to set the defaults for interval & threshold to 0
> and disable tombstone compactions only when they're zero, so that setting
> non-default values, rather than setting ANY value, is what determines whether
> tombstone compactions are enabled?
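
A minimal sketch of the suggested check, in plain Java rather than the actual
TWCS constructor (the option names match the compaction documentation; the
class and method names here are hypothetical):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Sketch only: disable tombstone compactions when both knobs are explicitly
// zeroed, and keep them enabled whenever unchecked_tombstone_compaction=true,
// instead of disabling them merely because neither option was set.
public class TombstoneOptionCheckSketch
{
    static boolean disableTombstoneCompactions(Map<String, String> options)
    {
        if (Boolean.parseBoolean(options.getOrDefault("unchecked_tombstone_compaction", "false")))
            return false; // explicit opt-in always keeps tombstone compactions on

        return "0".equals(options.get("tombstone_compaction_interval"))
            && "0".equals(options.get("tombstone_threshold"));
    }

    public static void main(String[] args)
    {
        Map<String, String> opts = new HashMap<>();
        opts.put("unchecked_tombstone_compaction", "true");
        // Prints false: tombstone compactions stay enabled even though neither
        // interval nor threshold has been set.
        System.out.println(disableTombstoneCompactions(opts));
    }
}
{code}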






[jira] [Commented] (CASSANDRA-14423) SSTables stop being compacted

2018-06-13 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511934#comment-16511934
 ] 

Kurt Greaves commented on CASSANDRA-14423:
--

Looks like we only _stopped_ anti-compacting repaired SSTables in 
CASSANDRA-13153, so this bug only occurs from 2.2.10, 3.0.13, and 3.11.0 
onwards.

> SSTables stop being compacted
> -
>
> Key: CASSANDRA-14423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Major
> Fix For: 3.11.3
>
>
> So we're seeing a problem in 3.11.0 where SSTables are being lost from the
> view and not included in compactions or as candidates for compaction. It
> seems to get progressively worse until there are only 1-2 SSTables in the
> view, which happen to be the most recent SSTables, and thus compactions
> completely stop for that table.
> The SSTables seem to still be included in reads, just not compactions.
> The issue can be fixed by restarting C*, as it will reload all SSTables into
> the view, but this is only a temporary fix. User defined/major compactions
> still work - it's not clear whether they include the result back in the view,
> but either way it's not a good workaround.
> This also results in a discrepancy between SSTable count and SSTables in 
> levels for any table using LCS.
> {code:java}
> Keyspace : xxx
> Read Count: 57761088
> Read Latency: 0.10527088681224288 ms.
> Write Count: 2513164
> Write Latency: 0.018211106398149903 ms.
> Pending Flushes: 0
> Table: xxx
> SSTable count: 10
> SSTables in each level: [2, 0, 0, 0, 0, 0, 0, 0, 0]
> Space used (live): 894498746
> Space used (total): 894498746
> Space used by snapshots (total): 0
> Off heap memory used (total): 11576197
> SSTable Compression Ratio: 0.6956629530569777
> Number of keys (estimate): 3562207
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 87
> Local read count: 57761088
> Local read latency: 0.108 ms
> Local write count: 2513164
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 86.33
> Bloom filter false positives: 43
> Bloom filter false ratio: 0.0
> Bloom filter space used: 8046104
> Bloom filter off heap memory used: 8046024
> Index summary off heap memory used: 3449005
> Compression metadata off heap memory used: 81168
> Compacted partition minimum bytes: 104
> Compacted partition maximum bytes: 5722
> Compacted partition mean bytes: 175
> Average live cells per slice (last five minutes): 1.0
> Maximum live cells per slice (last five minutes): 1
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Dropped Mutations: 0
> {code}
> Also for STCS we've confirmed that the SSTable count will differ from the
> number of SSTables reported in the compaction buckets. In the below example
> there are only 3 SSTables in a single bucket - no more are listed for this
> table. Compaction thresholds haven't been modified for this table and it's a
> very basic KV schema.
> {code:java}
> Keyspace : yyy
> Read Count: 30485
> Read Latency: 0.06708991307200263 ms.
> Write Count: 57044
> Write Latency: 0.02204061776873992 ms.
> Pending Flushes: 0
> Table: yyy
> SSTable count: 19
> Space used (live): 18195482
> Space used (total): 18195482
> Space used by snapshots (total): 0
> Off heap memory used (total): 747376
> SSTable Compression Ratio: 0.7607394576769735
> Number of keys (estimate): 116074
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 39
> Local read count: 30485
> Local read latency: NaN ms
> Local write count: 57044
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 79.76
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 690912
> Bloom filter off heap memory used: 690760
> Index summary off heap memory used: 54736
> Compression metadata off heap memory used: 1880
> Compacted partition minimum bytes: 73
> Compacted partition maximum bytes: 124
> Compacted partition mean bytes: 96
> Average live cells per slice (last five minutes): NaN
> Maximum live cells per slice (last five minutes): 0
> Average tombstones per slice (last five minutes): NaN
> Maximum tombstones per slice (last five minutes): 0
> Dropped Mutations: 0 
> {code}
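
The compaction buckets mentioned above come from STCS grouping SSTables of
similar size. A simplified model of that grouping, assuming the default
bucket_low/bucket_high factors of 0.5/1.5 (illustrative Java, not the actual
SizeTieredCompactionStrategy code):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Simplified STCS bucketing: an sstable joins a bucket when its size falls
// within [avg * bucketLow, avg * bucketHigh] of that bucket's average size.
// An sstable missing from every bucket is invisible to compaction, which is
// the discrepancy described above.
public class BucketSketch
{
    static List<List<Long>> buckets(List<Long> sizes, double bucketLow, double bucketHigh)
    {
        List<List<Long>> buckets = new ArrayList<>();
        for (long size : sizes)
        {
            List<Long> target = null;
            for (List<Long> bucket : buckets)
            {
                double avg = bucket.stream().mapToLong(Long::longValue).average().orElse(0);
                if (size >= avg * bucketLow && size <= avg * bucketHigh)
                {
                    target = bucket;
                    break;
                }
            }
            if (target == null)
                buckets.add(target = new ArrayList<>());
            target.add(size);
        }
        return buckets;
    }

    public static void main(String[] args)
    {
        // Similar sizes share a bucket; the outliers land in buckets of their own.
        System.out.println(buckets(List.of(100L, 110L, 95L, 1000L, 5L), 0.5, 1.5));
    }
}
{code}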
> {code:java}
> Apr 27 03:10:39 cassandra[9263]: TRACE o.a.c.d.c.SizeTieredCompactionStrategy 
> 

[jira] [Updated] (CASSANDRA-14423) SSTables stop being compacted

2018-06-13 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-14423:
-
Fix Version/s: 3.11.3

> SSTables stop being compacted
> -
>
> Key: CASSANDRA-14423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Major
> Fix For: 3.11.3
>
>

[jira] [Commented] (CASSANDRA-14423) SSTables stop being compacted

2018-06-13 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511927#comment-16511927
 ] 

Kurt Greaves commented on CASSANDRA-14423:
--

Figured this out. tl;dr is that full repairs are completely broken. We add the 
repaired sstables to the transaction for anti-compaction but we never do 
anything with them or re-write them (because they are already repaired), and 
thus they get "removed" from the compaction strategy's SSTables along with the 
unrepaired SSTables that got anti-compacted.

This essentially means that full repairs have been terribly broken for a long 
time. I haven't checked how far back yet, but it seems reasonable to say 2.1. 
Going to mark this as a blocker for 3.11.3 only while doing more research.
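
A toy model of that failure mode (plain Java, not Cassandra code; the names
are illustrative): every sstable entering the anti-compaction transaction is
removed from the strategy's view, but only the unrepaired ones are rewritten
and re-added, so the already-repaired ones silently disappear from the set of
compaction candidates.

{code:java}
import java.util.HashSet;
import java.util.Set;

public class AntiCompactionSketch
{
    public static void main(String[] args)
    {
        Set<String> strategyView = new HashSet<>(Set.of("repaired-1", "unrepaired-2", "unrepaired-3"));

        // All candidate sstables are added to the anti-compaction transaction...
        Set<String> transaction = new HashSet<>(strategyView);
        strategyView.removeAll(transaction);

        // ...but only the unrepaired ones are rewritten and re-added.
        for (String sstable : transaction)
            if (sstable.startsWith("unrepaired"))
                strategyView.add(sstable + "-anticompacted");

        // Prints only the rewritten unrepaired sstables: repaired-1 is gone,
        // matching the "removed from the view" symptom in the description.
        System.out.println(strategyView);
    }
}
{code}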

> SSTables stop being compacted
> -
>
> Key: CASSANDRA-14423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Major
> Fix For: 3.11.3
>
>

[jira] [Resolved] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread mck (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck resolved CASSANDRA-10751.
-
Resolution: Won't Fix

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's
> internal code. It seems that connections are shut down, but we can't
> understand why ...
> Here is an extract of the errors. I've also attached a file with the complete
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
>   at 
> 

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511806#comment-16511806
 ] 

mck commented on CASSANDRA-10751:
-

Reverted.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>

[10/10] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-06-13 Thread mck
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3b56d4df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3b56d4df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3b56d4df

Branch: refs/heads/trunk
Commit: 3b56d4df40800f76dcf2c0019af43b7dbc244c57
Parents: bdb5280 f0403b4
Author: Mick Semb Wever 
Authored: Thu Jun 14 09:33:41 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 09:36:28 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3b56d4df/CHANGES.txt
--
diff --cc CHANGES.txt
index 49738cd,083f480..24aaabb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -293,7 -45,7 +293,6 @@@ Merged from 3.0
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
  Merged from 2.2:
-  * CqlRecordReader no longer quotes the keyspace when connecting, as the java 
driver will (CASSANDRA-10751)
 - * Incorrect counting of pending messages in OutboundTcpConnection 
(CASSANDRA-11551)
   * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)





[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-06-13 Thread mck
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f0403b4e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f0403b4e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f0403b4e

Branch: refs/heads/cassandra-3.11
Commit: f0403b4e9d0ebe0dab1a96c81f122e780c369e4b
Parents: ed5f834 897b55a
Author: Mick Semb Wever 
Authored: Thu Jun 14 09:32:37 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 09:33:11 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0403b4e/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0403b4e/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--





[03/10] cassandra git commit: Revert "CqlRecordReader unnecessarily quotes the keyspace when connecting, when the java driver will."

2018-06-13 Thread mck
Revert "CqlRecordReader unnecessarily quotes the keyspace when connecting, when 
the java driver will."

This reverts commit 1b0b113facb2d8ad125b9baa0127ffe5abe8a16e.

See 
https://issues.apache.org/jira/browse/CASSANDRA-10751?focusedCommentId=16511413&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16511413


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1143bc11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1143bc11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1143bc11

Branch: refs/heads/cassandra-3.11
Commit: 1143bc113dc456675a2f3a89c93fba8ce117ac3f
Parents: fc7a69b
Author: Mick Semb Wever 
Authored: Thu Jun 14 07:48:27 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 07:48:27 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1143bc11/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9faf499..7b1089e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,5 @@
 2.2.13
  * Incorrect counting of pending messages in OutboundTcpConnection 
(CASSANDRA-11551)
- * CqlRecordReader no longer quotes the keyspace when connecting, as the java 
driver will (CASSANDRA-10751)
  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1143bc11/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
index 18b2f50..b3e440d 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
@@ -133,7 +133,7 @@ public class CqlRecordReader extends RecordReader
 }
 
 if (cluster != null)
-session = cluster.connect(keyspace);
+session = cluster.connect(quote(keyspace));
 
 if (session == null)
   throw new RuntimeException("Can't create connection session");
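
For context, the restored quote(keyspace) call matters because the keyspace
name is otherwise treated as case-insensitive when passed to connect(). A
minimal sketch of what such a quoting helper does (illustrative only, not the
project's implementation):

{code:java}
// Wrap an identifier in double quotes and double any embedded quotes, so a
// case-sensitive keyspace such as MyKeyspace is not lower-cased on the server.
public class QuoteSketch
{
    static String quote(String identifier)
    {
        return '"' + identifier.replace("\"", "\"\"") + '"';
    }

    public static void main(String[] args)
    {
        System.out.println(quote("MyKeyspace")); // "MyKeyspace"
    }
}
{code}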





[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-06-13 Thread mck
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f0403b4e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f0403b4e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f0403b4e

Branch: refs/heads/trunk
Commit: f0403b4e9d0ebe0dab1a96c81f122e780c369e4b
Parents: ed5f834 897b55a
Author: Mick Semb Wever 
Authored: Thu Jun 14 09:32:37 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 09:33:11 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0403b4e/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0403b4e/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--





[04/10] cassandra git commit: Revert "CqlRecordReader unnecessarily quotes the keyspace when connecting, when the java driver will."

2018-06-13 Thread mck
Revert "CqlRecordReader unnecessarily quotes the keyspace when connecting, when 
the java driver will."

This reverts commit 1b0b113facb2d8ad125b9baa0127ffe5abe8a16e.

See 
https://issues.apache.org/jira/browse/CASSANDRA-10751?focusedCommentId=16511413&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16511413


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1143bc11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1143bc11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1143bc11

Branch: refs/heads/trunk
Commit: 1143bc113dc456675a2f3a89c93fba8ce117ac3f
Parents: fc7a69b
Author: Mick Semb Wever 
Authored: Thu Jun 14 07:48:27 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 07:48:27 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1143bc11/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9faf499..7b1089e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,5 @@
 2.2.13
  * Incorrect counting of pending messages in OutboundTcpConnection 
(CASSANDRA-11551)
- * CqlRecordReader no longer quotes the keyspace when connecting, as the java 
driver will (CASSANDRA-10751)
  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1143bc11/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
index 18b2f50..b3e440d 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
@@ -133,7 +133,7 @@ public class CqlRecordReader extends RecordReader
 }
 
 if (cluster != null)
-session = cluster.connect(keyspace);
+session = cluster.connect(quote(keyspace));
 
 if (session == null)
   throw new RuntimeException("Can't create connection session");





[02/10] cassandra git commit: Revert "CqlRecordReader unnecessarily quotes the keyspace when connecting, when the java driver will."

2018-06-13 Thread mck
Revert "CqlRecordReader unnecessarily quotes the keyspace when connecting, when 
the java driver will."

This reverts commit 1b0b113facb2d8ad125b9baa0127ffe5abe8a16e.

See 
https://issues.apache.org/jira/browse/CASSANDRA-10751?focusedCommentId=16511413&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16511413


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1143bc11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1143bc11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1143bc11

Branch: refs/heads/cassandra-3.0
Commit: 1143bc113dc456675a2f3a89c93fba8ce117ac3f
Parents: fc7a69b
Author: Mick Semb Wever 
Authored: Thu Jun 14 07:48:27 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 07:48:27 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1143bc11/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9faf499..7b1089e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,5 @@
 2.2.13
  * Incorrect counting of pending messages in OutboundTcpConnection 
(CASSANDRA-11551)
- * CqlRecordReader no longer quotes the keyspace when connecting, as the java 
driver will (CASSANDRA-10751)
  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1143bc11/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
index 18b2f50..b3e440d 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
@@ -133,7 +133,7 @@ public class CqlRecordReader extends RecordReader
 }
 
 if (cluster != null)
-session = cluster.connect(keyspace);
+session = cluster.connect(quote(keyspace));
 
 if (session == null)
   throw new RuntimeException("Can't create connection session");





[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-06-13 Thread mck
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/897b55a6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/897b55a6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/897b55a6

Branch: refs/heads/cassandra-3.0
Commit: 897b55a6b56130cb8b9c0af3907b788011623b37
Parents: cce9ab2 1143bc1
Author: Mick Semb Wever 
Authored: Thu Jun 14 09:30:28 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 09:31:43 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/897b55a6/CHANGES.txt
--
diff --cc CHANGES.txt
index dfdfbfd,7b1089e..94fbcd2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,34 -1,5 +1,33 @@@
 -2.2.13
 +3.0.17
 + * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
 + * Add Missing dependencies in pom-all (CASSANDRA-14422)
 + * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
 + * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)
 + * Cassandra not starting when using enhanced startup scripts in windows 
(CASSANDRA-14418)
 + * Fix progress stats and units in compactionstats (CASSANDRA-12244)
 + * Better handle missing partition columns in system_schema.columns 
(CASSANDRA-14379)
 + * Delay hints store excise by write timeout to avoid race with decommission 
(CASSANDRA-13740)
 + * Deprecate background repair and probablistic read_repair_chance table 
options
 +   (CASSANDRA-13910)
 + * Add missed CQL keywords to documentation (CASSANDRA-14359)
 + * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 
(CASSANDRA-14332)
 + * Avoid deadlock when running nodetool refresh before node is fully up 
(CASSANDRA-14310)
 + * Handle all exceptions when opening sstables (CASSANDRA-14202)
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
   * Incorrect counting of pending messages in OutboundTcpConnection 
(CASSANDRA-11551)
-  * CqlRecordReader no longer quotes the keyspace when connecting, as the java 
driver will (CASSANDRA-10751)
   * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/897b55a6/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--





[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-06-13 Thread mck
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/897b55a6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/897b55a6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/897b55a6

Branch: refs/heads/cassandra-3.11
Commit: 897b55a6b56130cb8b9c0af3907b788011623b37
Parents: cce9ab2 1143bc1
Author: Mick Semb Wever 
Authored: Thu Jun 14 09:30:28 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 09:31:43 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/897b55a6/CHANGES.txt
--
diff --cc CHANGES.txt
index dfdfbfd,7b1089e..94fbcd2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,34 -1,5 +1,33 @@@
 -2.2.13
 +3.0.17
 + * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
 + * Add Missing dependencies in pom-all (CASSANDRA-14422)
 + * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
 + * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)
 + * Cassandra not starting when using enhanced startup scripts in windows 
(CASSANDRA-14418)
 + * Fix progress stats and units in compactionstats (CASSANDRA-12244)
 + * Better handle missing partition columns in system_schema.columns 
(CASSANDRA-14379)
 + * Delay hints store excise by write timeout to avoid race with decommission 
(CASSANDRA-13740)
 + * Deprecate background repair and probablistic read_repair_chance table 
options
 +   (CASSANDRA-13910)
 + * Add missed CQL keywords to documentation (CASSANDRA-14359)
 + * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 
(CASSANDRA-14332)
 + * Avoid deadlock when running nodetool refresh before node is fully up 
(CASSANDRA-14310)
 + * Handle all exceptions when opening sstables (CASSANDRA-14202)
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
   * Incorrect counting of pending messages in OutboundTcpConnection 
(CASSANDRA-11551)
-  * CqlRecordReader no longer quotes the keyspace when connecting, as the java 
driver will (CASSANDRA-10751)
   * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/897b55a6/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--





[01/10] cassandra git commit: Revert "CqlRecordReader unnecessarily quotes the keyspace when connecting, when the java driver will."

2018-06-13 Thread mck
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 fc7a69b65 -> 1143bc113
  refs/heads/cassandra-3.0 cce9ab236 -> 897b55a6b
  refs/heads/cassandra-3.11 ed5f8347e -> f0403b4e9
  refs/heads/trunk bdb52801c -> 3b56d4df4


Revert "CqlRecordReader unnecessarily quotes the keyspace when connecting, when 
the java driver will."

This reverts commit 1b0b113facb2d8ad125b9baa0127ffe5abe8a16e.

See 
https://issues.apache.org/jira/browse/CASSANDRA-10751?focusedCommentId=16511413&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16511413


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1143bc11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1143bc11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1143bc11

Branch: refs/heads/cassandra-2.2
Commit: 1143bc113dc456675a2f3a89c93fba8ce117ac3f
Parents: fc7a69b
Author: Mick Semb Wever 
Authored: Thu Jun 14 07:48:27 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 07:48:27 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1143bc11/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9faf499..7b1089e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,5 @@
 2.2.13
  * Incorrect counting of pending messages in OutboundTcpConnection 
(CASSANDRA-11551)
- * CqlRecordReader no longer quotes the keyspace when connecting, as the java 
driver will (CASSANDRA-10751)
  * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1143bc11/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
index 18b2f50..b3e440d 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
@@ -133,7 +133,7 @@ public class CqlRecordReader extends RecordReader
 }
 
 if (cluster != null)
-session = cluster.connect(keyspace);
+session = cluster.connect(quote(keyspace));
 
 if (session == null)
   throw new RuntimeException("Can't create connection session");





[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-06-13 Thread mck
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/897b55a6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/897b55a6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/897b55a6

Branch: refs/heads/trunk
Commit: 897b55a6b56130cb8b9c0af3907b788011623b37
Parents: cce9ab2 1143bc1
Author: Mick Semb Wever 
Authored: Thu Jun 14 09:30:28 2018 +1000
Committer: Mick Semb Wever 
Committed: Thu Jun 14 09:31:43 2018 +1000

--
 CHANGES.txt| 1 -
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/897b55a6/CHANGES.txt
--
diff --cc CHANGES.txt
index dfdfbfd,7b1089e..94fbcd2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,34 -1,5 +1,33 @@@
 -2.2.13
 +3.0.17
 + * Fix regression of lagging commitlog flush log message (CASSANDRA-14451)
 + * Add Missing dependencies in pom-all (CASSANDRA-14422)
 + * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
 + * Fix deprecated repair error notifications from 3.x clusters to legacy JMX 
clients (CASSANDRA-13121)
 + * Cassandra not starting when using enhanced startup scripts in windows 
(CASSANDRA-14418)
 + * Fix progress stats and units in compactionstats (CASSANDRA-12244)
 + * Better handle missing partition columns in system_schema.columns 
(CASSANDRA-14379)
 + * Delay hints store excise by write timeout to avoid race with decommission 
(CASSANDRA-13740)
 + * Deprecate background repair and probablistic read_repair_chance table 
options
 +   (CASSANDRA-13910)
 + * Add missed CQL keywords to documentation (CASSANDRA-14359)
 + * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 
(CASSANDRA-14332)
 + * Avoid deadlock when running nodetool refresh before node is fully up 
(CASSANDRA-14310)
 + * Handle all exceptions when opening sstables (CASSANDRA-14202)
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
   * Incorrect counting of pending messages in OutboundTcpConnection 
(CASSANDRA-11551)
-  * CqlRecordReader no longer quotes the keyspace when connecting, as the java 
driver will (CASSANDRA-10751)
   * Fix compaction failure caused by reading un-flushed data (CASSANDRA-12743)
   * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/897b55a6/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java
--





[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread mck (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511698#comment-16511698
 ] 

mck commented on CASSANDRA-10751:
-

Right, even with the 2.2 java driver (that comes with C* 2.2) this does not 
apply.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>

[jira] [Commented] (CASSANDRA-14520) ClosedChannelException handled as FSError

2018-06-13 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511666#comment-16511666
 ] 

Jason Brown commented on CASSANDRA-14520:
-

Here's a stack trace [~bdeggleston] sent me earlier for this issue, unedited from 
the original:

{noformat}
WARN  [Stream-Deserializer-/17.177.98.2:47610-9785e92c] 2018-02-14 16:46:26,470 
CompressedStreamReader.java:111 - [Stream 757ae2c0-11e9-11e8-8fc8-f7aed080d7f2] 
Error while reading partition DecoratedKey(-752965546089109, 
2f645c5d1654236063405e7f6c4f273637231a5f037e696a6f3b4e411a18462b0942754651100f381d24757311071a761f46170a46524d62083458313e0442245f6b4046401f46712b1e310456326c245e080d02294d216266252d1635662e093a7f2b027d00451f2c350c1a5e177e574a785d1b702b1b400105281e4c3f6359704875124165705e2c403d0346725454531f076f3d577203233a73213027113520172a7e01467a5c776d4d546d5f57343f64215f4c5b05502922694a5d38621440362d0960005b7d1663565c79501e5c3c7a5032603e6f5e7b19645a2c017b391547212e6a7b516270317a42)
 from stream on ks='fullks' and table='kvtest'.
ERROR [Stream-Deserializer-/17.177.98.2:47610-9785e92c] 2018-02-14 16:46:26,476 
StreamSession.java:617 - [Stream #757ae2c0-11e9-11e8-8fc8-f7aed080d7f2] 
Streaming error occurred on session with peer 17.177.98.2
org.apache.cassandra.streaming.StreamReceiveException: 
java.lang.AssertionError: stream can only read forward.
at 
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:63)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:41)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:55)
 ~[main/:na]
at 
org.apache.cassandra.streaming.async.StreamingInboundHandler$StreamDeserializingTask.run(StreamingInboundHandler.java:178)
 ~[main/:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: java.lang.AssertionError: stream can only read forward.
at 
org.apache.cassandra.streaming.compress.CompressedInputStream.position(CompressedInputStream.java:108)
 ~[main/:na]
at 
org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:94)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:58)
 ~[main/:na]
... 4 common frames omitted
INFO  [Stream-Deserializer-/17.177.98.2:47610-9785e92c] 2018-02-14 16:46:26,478 
StreamResultFuture.java:193 - [Stream #757ae2c0-11e9-11e8-8fc8-f7aed080d7f2] 
Session with /17.177.98.2 is complete
WARN  [Stream-Deserializer-/17.177.98.2:47610-9785e92c] 2018-02-14 16:46:26,479 
StreamResultFuture.java:220 - [Stream #757ae2c0-11e9-11e8-8fc8-f7aed080d7f2] 
Stream failed
ERROR [NettyStreaming-Outbound-/17.177.98.2:1] 2018-02-14 16:46:26,486 
CassandraDaemon.java:211 - Exception in thread 
Thread[NettyStreaming-Outbound-/17.177.98.2:1,5,main]
org.apache.cassandra.io.FSReadError: 
java.nio.channels.ClosedByInterruptException
at 
org.apache.cassandra.io.util.ChannelProxy.read(ChannelProxy.java:133) 
~[main/:na]
at 
org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:94)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.OutgoingFileMessage.serialize(OutgoingFileMessage.java:111)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:53)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
 ~[main/:na]
at 
org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:41)
 ~[main/:na]
at 
org.apache.cassandra.streaming.async.NettyStreamingMessageSender$FileStreamTask.run(NettyStreamingMessageSender.java:324)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_131]
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
 [main/:na]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
Caused by: java.nio.channels.ClosedByInterruptException: null
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
 ~[na:1.8.0_131]
at sun.nio.ch.FileChannelImpl.readInternal(FileChannelImpl.java:746) 
~[na:1.8.0_131]
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:727) 
~[na:1.8.0_131]
at 

[jira] [Created] (CASSANDRA-14520) ClosedChannelException handled as FSError

2018-06-13 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-14520:
---

 Summary: ClosedChannelException handled as FSError
 Key: CASSANDRA-14520
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14520
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
Assignee: Jason Brown
 Fix For: 4.0


After the messaging service netty refactor, I’ve seen a few instances where a 
closed socket causes a ClosedChannelException (an IOException subclass) to be 
thrown. The exception is caught by ChannelProxy, interpreted as a disk error, 
and is then re-thrown as an FSError, causing the node to be shut down.
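
A minimal sketch of the distinction (illustrative only, not a committed fix; it 
assumes Cassandra's {{FSReadError}} is on the classpath, and the method itself is 
invented for this example):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;

import org.apache.cassandra.io.FSReadError;

public final class ChannelReadSketch
{
    public static int read(FileChannel channel, ByteBuffer buffer, long position, String filePath)
    {
        try
        {
            return channel.read(buffer, position);
        }
        catch (ClosedChannelException e)
        {
            // A closed channel is a connection-level event, not a disk
            // failure: propagate it unchanged instead of wrapping it in
            // FSReadError, which would shut the node down.
            throw new RuntimeException(e);
        }
        catch (IOException e)
        {
            // Genuine I/O failures on the file are still disk errors.
            throw new FSReadError(e, filePath);
        }
    }
}
{code}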



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-14518) Want to install cassandra on the single server with multiple nodes on centos Single server. I will attach the specs of the server

2018-06-13 Thread Joshua McKenzie (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-14518.
-
Resolution: Invalid

This Jira is for tracking development on the project. Please reference the 
community site for FAQs and the [user mailing 
list|https://cassandra.apache.org/community/] for help setting up a cluster.

> Want to install cassandra on the single server with multiple nodes on centos 
> Single server. I will attach the specs of the server
> -
>
> Key: CASSANDRA-14518
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14518
> Project: Cassandra
>  Issue Type: Task
>  Components: Hints
> Environment: |Server Specifications|1|
> | | |
> |Operating System|centos|
> |Control Panel|None|
> |Additional RAM|64|
> |1st and 2nd Hard Disk|2 x 250 GB SSD - RAID 1 - FREE|
> |3rd Hard Disk|500 GB SSD|
> |4th Hard Disk|500 GB HDD|
> |5th Hard Disk|None|
> |6th Hard Disk|None|
> |Hardware RAID|No|
> |Port Speed|100 Mbps - FREE|
> |Tier 1 Bandwidth|15 TB - FREE|
> |Remote Backup Storage|None|
> |IPMI KVM|None|
> |Model|79|
> |Model_Name|Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz|
> |Number of cores|32|
>Reporter: girish
>Priority: Major
>
> Datacenter: datacenter1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> -- Address Load Tokens Owns Host ID Rack
> UN 127.0.0.1 320.67 KiB 256 ? efb256fb-a04f-4c4f-948b-b105a7f7a658 rack1
> Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-14519) have issues configuring the Cassandra on the

2018-06-13 Thread Joshua McKenzie (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-14519.
-
Resolution: Invalid

As on the previous ticket, please take usage questions to the mailing list.

> have issues configuring the Cassandra on the 
> -
>
> Key: CASSANDRA-14519
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14519
> Project: Cassandra
>  Issue Type: Test
>  Components: Core
>Reporter: girish
>Priority: Major
>
> I have installed Cassandra with multiple nodes but I was not getting the expected 
> nodetool status output. 
> Where should I get this done?
>  
> Take a look at this issue:
> [root@server2 opt]# nodetool status
> Datacenter: datacenter1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> -- Address Load Tokens Owns Host ID Rack
> UN 127.0.0.1 320.67 KiB 256 ? efb256fb-a04f-4c4f-948b-b105a7f7a658 rack1
> Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Abort compactions quicker

2018-06-13 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2aeed037e -> bdb52801c


Abort compactions quicker

Patch by marcuse; reviewed by Alex Petrov for CASSANDRA-14397


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bdb52801
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bdb52801
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bdb52801

Branch: refs/heads/trunk
Commit: bdb52801c7384ef07f7fc0b4f3b965bdf35d821d
Parents: 2aeed03
Author: Marcus Eriksson 
Authored: Fri Apr 13 15:15:03 2018 +0200
Committer: Marcus Eriksson 
Committed: Wed Jun 13 13:06:59 2018 -0700

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionIterator.java   | 38 +++-
 .../db/compaction/CompactionManager.java|  3 -
 .../cassandra/db/compaction/CompactionTask.java |  3 -
 .../db/repair/CassandraValidationIterator.java  |  8 ---
 .../db/compaction/CompactionIteratorTest.java   | 61 
 6 files changed, 99 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdb52801/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 629df0c..49738cd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Abort compactions quicker (CASSANDRA-14397)
  * Support light-weight transactions in cassandra-stress (CASSANDRA-13529)
  * Make AsyncOneResponse use the correct timeout (CASSANDRA-14509)
  * Add option to sanity check tombstones on reads/compactions (CASSANDRA-14467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdb52801/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
index dfbb6cc..c9d7e52 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
@@ -104,7 +104,8 @@ public class CompactionIterator extends 
CompactionInfo.Holder implements Unfilte
? 
EmptyIterators.unfilteredPartition(controller.cfs.metadata())
: 
UnfilteredPartitionIterators.merge(scanners, nowInSec, listener());
 merged = Transformation.apply(merged, new GarbageSkipper(controller, 
nowInSec));
-this.compacted = Transformation.apply(merged, new Purger(controller, 
nowInSec));
+merged = Transformation.apply(merged, new Purger(controller, 
nowInSec));
+compacted = Transformation.apply(merged, new 
AbortableUnfilteredPartitionTransformation(this));
 }
 
 public TableMetadata metadata()
@@ -542,4 +543,39 @@ public class CompactionIterator extends 
CompactionInfo.Holder implements Unfilte
 return new GarbageSkippingUnfilteredRowIterator(partition, 
UnfilteredRowIterators.merge(iters, nowInSec), nowInSec, cellLevelGC);
 }
 }
+
+private static class AbortableUnfilteredPartitionTransformation extends 
Transformation
+{
+private final AbortableUnfilteredRowTransformation abortableIter;
+
+private AbortableUnfilteredPartitionTransformation(CompactionIterator 
iter)
+{
+this.abortableIter = new 
AbortableUnfilteredRowTransformation(iter);
+}
+
+@Override
+protected UnfilteredRowIterator applyToPartition(UnfilteredRowIterator 
partition)
+{
+if (abortableIter.iter.isStopRequested())
+throw new 
CompactionInterruptedException(abortableIter.iter.getCompactionInfo());
+return Transformation.apply(partition, abortableIter);
+}
+}
+
+private static class AbortableUnfilteredRowTransformation extends 
Transformation
+{
+private final CompactionIterator iter;
+
+private AbortableUnfilteredRowTransformation(CompactionIterator iter)
+{
+this.iter = iter;
+}
+
+public Row applyToRow(Row row)
+{
+if (iter.isStopRequested())
+throw new 
CompactionInterruptedException(iter.getCompactionInfo());
+return row;
+}
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdb52801/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 5c61982..a872fea 100644
--- 

[jira] [Updated] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions

2018-06-13 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14397:

   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Patch Available)

and committed as {{bdb52801c7384ef07f7fc0b4f3b965bdf35d821d}}, thanks!
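
For context (not part of the patch itself): operators typically interrupt running 
compactions with nodetool, e.g.:

{noformat}
nodetool stop COMPACTION
{noformat}

With the per-row abort check added in this commit, such a stop takes effect between 
rows of a wide partition rather than only at partition boundaries.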

> Stop compactions quicker when compacting wide partitions
> 
>
> Key: CASSANDRA-14397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.0
>
>
> We should allow compactions to be stopped when compacting wide partitions, 
> this will help when a user wants to run upgradesstables for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12526) For LCS, single SSTable up-level is handled inefficiently

2018-06-13 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511600#comment-16511600
 ] 

Marcus Eriksson commented on CASSANDRA-12526:
-

I'll hold off committing this until CASSANDRA-14388 has been committed

> For LCS, single SSTable up-level is handled inefficiently
> -
>
> Key: CASSANDRA-12526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12526
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: compaction, lcs, performance
> Fix For: 4.x
>
>
> I'm using the latest trunk (as of August 2016, which probably is going to be 
> 3.10) to run some experiments on LeveledCompactionStrategy and noticed this 
> inefficiency.
> The test data is generated using cassandra-stress default parameters 
> (keyspace1.standard1), so as you can imagine, it consists of a ton of newly 
> inserted partitions that will never merge in compactions, which is probably 
> the worst kind of workload for LCS (however, I'll detail later why this 
> scenario should not be ignored as a corner case; for now, let's just assume 
> we still want to handle this scenario efficiently).
> After the compaction test is done, I scrubbed debug.log for patterns that 
> match the "Compacted" summary so that I can see how long each individual 
> compaction took and how many bytes they processed. The search pattern is like 
> the following:
> {noformat}
> grep 'Compacted.*standard1' debug.log
> {noformat}
> Interestingly, I noticed a lot of the finished compactions are marked as 
> having *only one* SSTable involved. With the workload mentioned above, the 
> "single SSTable" compactions actually consist of the majority of all 
> compactions (as shown below), so its efficiency can affect the overall 
> compaction throughput quite a bit.
> {noformat}
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | wc -l
> 243
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | grep ") 1 sstable" | wc -l
> 218
> {noformat}
> By looking at the code, it appears that there's a way to directly edit the 
> level of a particular SSTable like the following:
> {code}
> sstable.descriptor.getMetadataSerializer().mutateLevel(sstable.descriptor, 
> targetLevel);
> sstable.reloadSSTableMetadata();
> {code}
> To be exact, I summed up the time spent for these single-SSTable compactions 
> (the total data size is 60GB) and found that if each compaction only needs to 
> spend 100ms for only the metadata change (instead of the 10+ seconds they're 
> doing now), it can already achieve a 22.75% saving on total compaction time.
> Compared to what we have now (reading the whole single-SSTable from old level 
> and writing out the same single-SSTable at the new level), the only 
> difference I could think of by using this approach is that the new SSTable 
> will have the same file name (sequence number) as the old one's, which could 
> break some assumptions on some other part of the code. However, not having to 
> go through the full read/write IO, and not having to bear the overhead of 
> cleaning up the old file, creating the new file, and creating more churn in the 
> heap and file buffers, it seems the benefits outweigh the inconvenience. So I'd 
> argue this JIRA belongs to LHF and should be made available in 3.0.x as well.
> As mentioned in the 2nd paragraph, I'm also going to address why this kind of 
> all-new-partition workload should not be ignored as a corner case. Basically, 
> for the main use case of LCS where you need to frequently merge partitions to 
> optimize reads and eliminate tombstones and expired data sooner, LCS can be 
> perfectly happy and efficiently perform the partition merge and tombstone 
> elimination for a long time. However, as soon as the node becomes a bit 
> unhealthy for various reasons (could be a bad disk so it's missing a whole 
> bunch of mutations and needs repair, could be the user chooses to ingest way 
> more data than it usually takes and exceeds its capability, or, God forbid, 
> some DBA chooses to run offline sstablelevelreset), you will have to handle 
> this kind of "all-new-partition with a lot of SSTables in L0" scenario, and 
> once all L0 SSTables finally get up-leveled to L1, you will likely see a lot 
> of such single-SSTable compactions, which is the situation this JIRA is 
> intended to address.
> Actually, when I think more about this, making this kind of single-SSTable 
> up-level more efficient will not only help the all-new-partition scenario, 
> but also help in general any time when there is a big backlog of L0 SSTables 
> due to too many flushes or 
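
To make the snippet quoted above self-contained, here is a minimal sketch (an 
illustration only, assuming Cassandra's internal {{SSTableReader}} API is on the 
classpath; this is not a committed patch):

{code:java}
import java.io.IOException;

import org.apache.cassandra.io.sstable.format.SSTableReader;

public final class UpLevelSketch
{
    // Rewrite only the level stored in the sstable's metadata component and
    // reload it, instead of re-reading and re-writing the whole data file.
    public static void upLevel(SSTableReader sstable, int targetLevel) throws IOException
    {
        sstable.descriptor.getMetadataSerializer().mutateLevel(sstable.descriptor, targetLevel);
        sstable.reloadSSTableMetadata();
    }
}
{code}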

[jira] [Updated] (CASSANDRA-12526) For LCS, single SSTable up-level is handled inefficiently

2018-06-13 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12526:

Status: Ready to Commit  (was: Patch Available)

> For LCS, single SSTable up-level is handled inefficiently
> -
>
> Key: CASSANDRA-12526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12526
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: compaction, lcs, performance
> Fix For: 4.x
>
>
> I'm using the latest trunk (as of August 2016, which probably is going to be 
> 3.10) to run some experiments on LeveledCompactionStrategy and noticed this 
> inefficiency.
> The test data is generated using cassandra-stress default parameters 
> (keyspace1.standard1), so as you can imagine, it consists of a ton of newly 
> inserted partitions that will never merge in compactions, which is probably 
> the worst kind of workload for LCS (however, I'll detail later why this 
> scenario should not be ignored as a corner case; for now, let's just assume 
> we still want to handle this scenario efficiently).
> After the compaction test is done, I scrubbed debug.log for patterns that 
> match the "Compacted" summary so that I can see how long each individual 
> compaction took and how many bytes they processed. The search pattern is like 
> the following:
> {noformat}
> grep 'Compacted.*standard1' debug.log
> {noformat}
> Interestingly, I noticed a lot of the finished compactions are marked as 
> having *only one* SSTable involved. With the workload mentioned above, the 
> "single SSTable" compactions actually consist of the majority of all 
> compactions (as shown below), so its efficiency can affect the overall 
> compaction throughput quite a bit.
> {noformat}
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | wc -l
> 243
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | grep ") 1 sstable" | wc -l
> 218
> {noformat}
> By looking at the code, it appears that there's a way to directly edit the 
> level of a particular SSTable like the following:
> {code}
> sstable.descriptor.getMetadataSerializer().mutateLevel(sstable.descriptor, 
> targetLevel);
> sstable.reloadSSTableMetadata();
> {code}
> To be exact, I summed up the time spent for these single-SSTable compactions 
> (the total data size is 60GB) and found that if each compaction only needs to 
> spend 100ms for only the metadata change (instead of the 10+ seconds they're 
> doing now), it can already achieve a 22.75% saving on total compaction time.
> Compared to what we have now (reading the whole single-SSTable from old level 
> and writing out the same single-SSTable at the new level), the only 
> difference I could think of by using this approach is that the new SSTable 
> will have the same file name (sequence number) as the old one's, which could 
> break some assumptions on some other part of the code. However, not having to 
> go through the full read/write IO, and not having to bear the overhead of 
> cleaning up the old file, creating the new file, and creating more churn in the 
> heap and file buffers, it seems the benefits outweigh the inconvenience. So I'd 
> argue this JIRA belongs to LHF and should be made available in 3.0.x as well.
> As mentioned in the 2nd paragraph, I'm also going to address why this kind of 
> all-new-partition workload should not be ignored as a corner case. Basically, 
> for the main use case of LCS where you need to frequently merge partitions to 
> optimize reads and eliminate tombstones and expired data sooner, LCS can be 
> perfectly happy and efficiently perform the partition merge and tombstone 
> elimination for a long time. However, as soon as the node becomes a bit 
> unhealthy for various reasons (could be a bad disk so it's missing a whole 
> bunch of mutations and needs repair, could be the user chooses to ingest way 
> more data than it usually takes and exceeds its capability, or, God forbid, 
> some DBA chooses to run offline sstablelevelreset), you will have to handle 
> this kind of "all-new-partition with a lot of SSTables in L0" scenario, and 
> once all L0 SSTables finally get up-leveled to L1, you will likely see a lot 
> of such single-SSTable compactions, which is the situation this JIRA is 
> intended to address.
> Actually, when I think more about this, making this kind of single-SSTable 
> up-level more efficient will not only help the all-new-partition scenario, 
> but also help in general any time when there is a big backlog of L0 SSTables 
> due to too many flushes or excessive repair streaming with vnode. In those 
> 

svn commit: r1833476 - in /cassandra/site: publish/index.html src/index.html

2018-06-13 Thread mshuler
Author: mshuler
Date: Wed Jun 13 19:55:42 2018
New Revision: 1833476

URL: http://svn.apache.org/viewvc?rev=1833476=rev
Log:
Fix text spacing in index page

Modified:
cassandra/site/publish/index.html
cassandra/site/src/index.html

Modified: cassandra/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1833476=1833475=1833476=diff
==
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Wed Jun 13 19:55:42 2018
@@ -112,7 +112,7 @@
   The Apache Cassandra database is the right choice when you need 
scalability and high availability without
   compromising performance. http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html;>Linear
 scalability
   and proven fault-tolerance on commodity hardware or cloud infrastructure 
make it the perfect platform for
-  mission-critical data.Cassandra's support for replicating across 
multiple datacenters is best-in-class, providing
+  mission-critical data. Cassandra's support for replicating across 
multiple datacenters is best-in-class, providing
   lower latency for your users and the peace of mind of knowing that you 
can survive regional outages.
   
 

Modified: cassandra/site/src/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/index.html?rev=1833476=1833475=1833476=diff
==
--- cassandra/site/src/index.html (original)
+++ cassandra/site/src/index.html Wed Jun 13 19:55:42 2018
@@ -12,7 +12,7 @@ is_homepage: true
   The Apache Cassandra database is the right choice when you need 
scalability and high availability without
   compromising performance. http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html;>Linear
 scalability
   and proven fault-tolerance on commodity hardware or cloud infrastructure 
make it the perfect platform for
-  mission-critical data.Cassandra's support for replicating across 
multiple datacenters is best-in-class, providing
+  mission-critical data. Cassandra's support for replicating across 
multiple datacenters is best-in-class, providing
   lower latency for your users and the peace of mind of knowing that you 
can survive regional outages.
   
 



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14519) have issues configuring the Cassandra on the

2018-06-13 Thread girish (JIRA)
girish created CASSANDRA-14519:
--

 Summary: have issues configuring the Cassandra on the 
 Key: CASSANDRA-14519
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14519
 Project: Cassandra
  Issue Type: Test
  Components: Core
Reporter: girish


I have installed Cassandra with multiple nodes but I was not getting the expected 
nodetool status output. 

Where should I get this done?

 

Take a look at this issue:

[root@server2 opt]# nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 320.67 KiB 256 ? efb256fb-a04f-4c4f-948b-b105a7f7a658 rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions

2018-06-13 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510859#comment-16510859
 ] 

Alex Petrov edited comment on CASSANDRA-14397 at 6/13/18 6:52 PM:
--

[~krummas] thank you for the patch! 

+1 LGTM, I just have 2 minor comments (please feel free to address on commit if 
applicable):

  * We might benefit from also adding the call to {{isStopRequested}} to 
{{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way 
[CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159]
 and 
[CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201]
 won't need to have this logic separately. Also, it will be self-contained in a 
single class, which might be a good thing. The downside of that is that 
upstream transformations' {{applyToPartition}} calls will still be made, but it 
might be minor enough considering potentially simpler code. What do you think?
  * I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" 
test (also, get rid of sleeps there), and so far 
[this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] was 
what I came up with. In short, relying on timing is a good thing for review and 
it was nice to be able to use it with this test while reviewing, but testing 
the compaction iterator directly might have the benefit of both knowing that 
{{CompactionInterruptedException}} is thrown at a precise moment in time, and 
might remove some flakiness (which, to be honest, I could not reproduce locally, 
but as your comment says, it begs to).


was (Author: ifesdjeen):
[~krummas] thank you for the patch! +1 LGTM, I just have 2 minor comments:

  * We might benefit from also adding the call to {{isStopRequested}} to 
{{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way 
[CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159]
 and 
[CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201]
 won't need to have this logic separately. Also, it will be self-contained in a 
single class, which might be a good thing. The downside of that is that 
upstream transformations' {{applyToPartition}} calls will still be made, but it 
might be minor enough considering potentially simpler code. What do you think?
  * I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" 
test (also, get rid of sleeps there), and so far 
[this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] was 
what I came up with. In short, relying on timing is a good thing for review and 
it was nice to be able to use it with this test while reviewing, but testing 
the compaction iterator directly might have the benefit of both knowing that 
{{CompactionInterruptedException}} is thrown at a precise moment in time, and 
might remove some flakiness (which, to be honest, I could not reproduce locally, 
but as your comment says, it begs to).

> Stop compactions quicker when compacting wide partitions
> 
>
> Key: CASSANDRA-14397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> We should allow compactions to be stopped when compacting wide partitions, 
> this will help when a user wants to run upgradesstables for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions

2018-06-13 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510859#comment-16510859
 ] 

Alex Petrov edited comment on CASSANDRA-14397 at 6/13/18 6:51 PM:
--

[~krummas] thank you for the patch! +1 LGTM, I just have 2 minor comments:

  * We might benefit from also adding the call to {{isStopRequested}} to 
{{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way 
[CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159]
 and 
[CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201]
 won't need to have this logic separately. Also, it will be self-contained in a 
single class, which might be a good thing. The downside of that is that 
upstream transformations' {{applyToPartition}} calls will still be made, but it 
might be minor enough considering potentially simpler code. What do you think?
  * I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" 
test (also, get rid of sleeps there), and so far 
[this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] was 
what I came up with. In short, relying on timing is a good thing for review and 
it was nice to be able to use it with this test while reviewing, but testing 
the compaction iterator directly might have the benefit of both knowing that 
{{CompactionInterruptedException}} is thrown at a precise moment in time, and 
might remove some flakiness (which, to be honest, I could not reproduce locally, 
but as your comment says, it begs to).


was (Author: ifesdjeen):
[~krummas] thank you for the patch! It looks good, I just have 2 minor comments:

  * We might benefit from also adding the call to {{isStopRequested}} to 
{{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way 
[CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159]
 and 
[CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201]
 won't need to have this logic separately. Also, it will be self-contained in a 
single class, which might be a good thing. The downside of that is that 
upstream transformations' {{applyToPartition}} calls will still be made, but it 
might be minor enough considering potentially simpler code. What do you think?
  * I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" 
test (also, get rid of sleeps there), and so far 
[this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] was 
what I came up with. In short, relying on timing is a good thing for review and 
it was nice to be able to use it with this test while reviewing, but testing 
the compaction iterator directly might have the benefit of both knowing that 
{{CompactionInterruptedException}} is thrown at a precise moment in time, and 
might remove some flakiness (which, to be honest, I could not reproduce locally, 
but as your comment says, it begs to).

> Stop compactions quicker when compacting wide partitions
> 
>
> Key: CASSANDRA-14397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> We should allow compactions to be stopped when compacting wide partitions, 
> this will help when a user wants to run upgradesstables for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread Andy Tolbert (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511528#comment-16511528
 ] 

Andy Tolbert edited comment on CASSANDRA-10751 at 6/13/18 6:51 PM:
---

(y) the driver does not quote the keyspace value given to {{Cluster.connect}} 
so this change will cause problems for mixed-case keyspaces.  To further 
expound on [~jjordan]'s comment, it's misleading but the [original 
code|https://github.com/datastax/java-driver/blob/2.1.8/driver-core/src/main/java/com/datastax/driver/core/Connection.java#L477]
 referenced is quoting the value returned from a {{set_keyspace}} response sent 
by C*.  In that case, C* sends the internal form of the keyspace, which is 
never quoted, so we have to quote it there.
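
As a concrete illustration (keyspace name and contact point invented for this 
example), a mixed-case keyspace therefore has to be quoted by the caller:

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ConnectExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        // The driver passes this string through unquoted, so a keyspace
        // created as "MyKeyspace" needs the quotes supplied by the caller:
        Session session = cluster.connect("\"MyKeyspace\"");
        session.close();
        cluster.close();
    }
}
{code}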


was (Author: andrew.tolbert):
(y) the driver does not quote the keyspace value given to {{Cluster.connect 
}}so this change will cause problems for mixed-case keyspaces.  To further 
expound on [~jjordan]'s comment, it's misleading but the [original 
code|https://github.com/datastax/java-driver/blob/2.1.8/driver-core/src/main/java/com/datastax/driver/core/Connection.java#L477]
 referenced is quoting the value returned from a {{set_keyspace}} response sent 
by C*.  In that case, C* sends the internal form of the keyspace, which is 
never quoted, so we have to quote it there.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: 

[jira] [Commented] (CASSANDRA-12526) For LCS, single SSTable up-level is handled inefficiently

2018-06-13 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511542#comment-16511542
 ] 

Alex Petrov commented on CASSANDRA-12526:
-

+1, LGTM.

Thank you for the patch!

> For LCS, single SSTable up-level is handled inefficiently
> -
>
> Key: CASSANDRA-12526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12526
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: compaction, lcs, performance
> Fix For: 4.x
>
>
> I'm using the latest trunk (as of August 2016, which probably is going to be 
> 3.10) to run some experiments on LeveledCompactionStrategy and noticed this 
> inefficiency.
> The test data is generated using cassandra-stress default parameters 
> (keyspace1.standard1), so as you can imagine, it consists of a ton of newly 
> inserted partitions that will never merge in compactions, which is probably 
> the worst kind of workload for LCS (however, I'll detail later why this 
> scenario should not be ignored as a corner case; for now, let's just assume 
> we still want to handle this scenario efficiently).
> After the compaction test is done, I scrubbed debug.log for patterns that 
> match the "Compacted" summary so that I can see how long each individual 
> compaction took and how many bytes they processed. The search pattern is like 
> the following:
> {noformat}
> grep 'Compacted.*standard1' debug.log
> {noformat}
> Interestingly, I noticed a lot of the finished compactions are marked as 
> having *only one* SSTable involved. With the workload mentioned above, the 
> "single SSTable" compactions actually consist of the majority of all 
> compactions (as shown below), so its efficiency can affect the overall 
> compaction throughput quite a bit.
> {noformat}
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | wc -l
> 243
> automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' 
> debug.log-test1 | grep ") 1 sstable" | wc -l
> 218
> {noformat}
> By looking at the code, it appears that there's a way to directly edit the 
> level of a particular SSTable like the following:
> {code}
> sstable.descriptor.getMetadataSerializer().mutateLevel(sstable.descriptor, 
> targetLevel);
> sstable.reloadSSTableMetadata();
> {code}
> To be exact, I summed up the time spent for these single-SSTable compactions 
> (the total data size is 60GB) and found that if each compaction only needs to 
> spend 100ms for only the metadata change (instead of the 10+ seconds they're 
> doing now), it can already achieve a 22.75% saving on total compaction time.
> Compared to what we have now (reading the whole single-SSTable from old level 
> and writing out the same single-SSTable at the new level), the only 
> difference I could think of by using this approach is that the new SSTable 
> will have the same file name (sequence number) as the old one's, which could 
> break some assumptions on some other part of the code. However, not having to 
> go through the full read/write IO, and not having to bear the overhead of 
> cleaning up the old file, creating the new file, and creating more churn in the 
> heap and file buffers, it seems the benefits outweigh the inconvenience. So I'd 
> argue this JIRA belongs to LHF and should be made available in 3.0.x as well.
> As mentioned in the 2nd paragraph, I'm also going to address why this kind of 
> all-new-partition workload should not be ignored as a corner case. Basically, 
> for the main use case of LCS where you need to frequently merge partitions to 
> optimize reads and eliminate tombstones and expired data sooner, LCS can be 
> perfectly happy and efficiently perform the partition merge and tombstone 
> elimination for a long time. However, as soon as the node becomes a bit 
> unhealthy for various reasons (could be a bad disk so it's missing a whole 
> bunch of mutations and needs repair, could be the user chooses to ingest way 
> more data than it usually takes and exceeds its capability, or, God forbid, 
> some DBA chooses to run offline sstablelevelreset), you will have to handle 
> this kind of "all-new-partition with a lot of SSTables in L0" scenario, and 
> once all L0 SSTables finally get up-leveled to L1, you will likely see a lot 
> of such single-SSTable compactions, which is the situation this JIRA is 
> intended to address.
> Actually, when I think more about this, making this kind of single-SSTable 
> up-level more efficient will not only help the all-new-partition scenario, 
> but also help in general any time when there is a big backlog of L0 SSTables 
> due to too many flushes or excessive repair streaming with vnode. In 

[jira] [Commented] (CASSANDRA-14499) node-level disk quota

2018-06-13 Thread Jeremy Hanna (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511532#comment-16511532
 ] 

Jeremy Hanna commented on CASSANDRA-14499:
--

I still have a concern about including something in the codebase that shuts 
down node operations automatically - even if it's opt-in.  Considering that 
under normal circumstances nodes will have around the same amount of data, 
this leads to some fairly predictable cascading failure scenarios when it is 
enabled.  That makes me wonder when this would be useful.

{quote}
One use case where we see this as valuable is QA/perf/test clusters that may 
not have the full monitoring setup but need to be protected from errant clients 
filling up disks to a point where worse things happen.
{quote}

So is it that there is not a lot of access to the machine, the VM, or the OS 
in those QA/perf/test clusters, but there *is* access to Cassandra, so we utilize 
that access to make sure an errant client doesn't do things that require 
getting access (or contacting the people with access) to the machine to 
rectify, like when the volume fills up?

Would the only circumstances where this is useful be in QA/perf/test clusters 
and therefore cascading failure of the cluster isn't the end of the world?

I'm just concerned that while a very mature user is going to use this 
appropriately, others out there will inadvertently misuse the feature.  If this 
is something that gets into the codebase, I would just want to make extra sure 
that people are aware of both the intended use cases/scenarios and especially 
the risks of cascading failure.  That said, introducing something that may 
trigger cascading failure *automatically* for the purpose of test 
environments seems unwise.

I'm happy to be wrong about the probability of cascading failure or the 
expected use cases, but please help me understand.
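
For reference, the proposal quoted below amounts to something like the following 
hypothetical cassandra.yaml sketch (the option names are invented here; there is 
no committed patch):

{noformat}
# Usable disk space as a percentage of the total...
disk_usage_quota_percentage: 80
# ...or as an absolute value; if both are set, the absolute value takes precedence.
disk_usage_quota_in_mb: 1048576
{noformat}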

> node-level disk quota
> -
>
> Key: CASSANDRA-14499
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14499
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Major
>
> Operators should be able to specify, via YAML, the amount of usable disk 
> space on a node as a percentage of the total available or as an absolute 
> value. If both are specified, the absolute value should take precedence. This 
> allows operators to reserve space available to the database for background 
> tasks -- primarily compaction. When a node reaches its quota, gossip should 
> be disabled to prevent it taking further writes (which would increase the 
> amount of data stored), being involved in reads (which are likely to be more 
> inconsistent over time), or participating in repair (which may increase the 
> amount of space used on the machine). The node re-enables gossip when the 
> amount of data it stores is below the quota.   
> The proposed option differs from {{min_free_space_per_drive_in_mb}}, which 
> reserves some amount of space on each drive that is not usable by the 
> database.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread Andy Tolbert (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511528#comment-16511528
 ] 

Andy Tolbert edited comment on CASSANDRA-10751 at 6/13/18 6:37 PM:
---

(y) the driver does not quote the keyspace value given to {{Cluster.connect 
}}so this change will cause problems for mixed-case keyspaces.  To further 
expound on [~jjordan]'s comment, it's misleading but the [original 
code|https://github.com/datastax/java-driver/blob/2.1.8/driver-core/src/main/java/com/datastax/driver/core/Connection.java#L477]
 referenced is quoting the value returned from a {{set_keyspace}} response sent 
by C*.  In that case, C* sends the internal form of the keyspace, which is 
never quoted, so we have to quote it there.


was (Author: andrew.tolbert):
(y), to further expound on [~jjordan]'s comment, it's misleading but the 
[original 
code|https://github.com/datastax/java-driver/blob/2.1.8/driver-core/src/main/java/com/datastax/driver/core/Connection.java#L477]
 referenced is quoting the value returned from a {{set_keyspace}} response sent 
by C*.  In that case, C* sends the internal form of the keyspace, which is 
never quoted, so we have to quote it there.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is 

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread Andy Tolbert (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511528#comment-16511528
 ] 

Andy Tolbert commented on CASSANDRA-10751:
--

(y), to further expound on [~jjordan]'s comment, it's misleading but the 
[original 
code|https://github.com/datastax/java-driver/blob/2.1.8/driver-core/src/main/java/com/datastax/driver/core/Connection.java#L477]
 referenced is quoting the value returned from a {{set_keyspace}} response sent 
by C*.  In that case, C* sends the internal form of the keyspace, which is 
never quoted, so we have to quote it there.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down, but we can't understand 
> why ...
> Here is an extract of the errors. I have also attached a file with the complete 
> debug logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> 

[jira] [Created] (CASSANDRA-14518) Want to install Cassandra on a single CentOS server with multiple nodes. I will attach the specs of the server

2018-06-13 Thread girish (JIRA)
girish created CASSANDRA-14518:
--

 Summary: Want to install Cassandra on a single CentOS server with 
multiple nodes. I will attach the specs of the server
 Key: CASSANDRA-14518
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14518
 Project: Cassandra
  Issue Type: Task
  Components: Hints
 Environment: |Server Specifications|1|
| | |
|Operating System|centos|
|Control Panel|None|
|Additional RAM|64|
|1st and 2nd Hard Disk|2 x 250 GB SSD - RAID 1 - FREE|
|3rd Hard Disk|500 GB SSD|
|4th Hard Disk|500 GB HDD|
|5th Hard Disk|None|
|6th Hard Disk|None|
|Hardware RAID|No|
|Port Speed|100 Mbps - FREE|
|Tier 1 Bandwidth|15 TB - FREE|
|Remote Backup Storage|None|
|IPMI KVM|None|
|Model|79|
|Model_Name|Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz|
|Number of cores|32|
Reporter: girish


Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load        Tokens  Owns  Host ID                               Rack
UN  127.0.0.1  320.67 KiB  256     ?     efb256fb-a04f-4c4f-948b-b105a7f7a658  rack1

Note: Non-system keyspaces don't have the same replication settings, effective 
ownership information is meaningless



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13529) cassandra-stress light-weight transaction support

2018-06-13 Thread Jaydeepkumar Chovatia (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511449#comment-16511449
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-13529:
---

Thanks!!! [~jasobrown] [~djoshi3]

> cassandra-stress light-weight transaction support
> -
>
> Key: CASSANDRA-13529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13529
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Stress
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
>  Labels: LWT
> Fix For: 4.0
>
> Attachments: 13529.txt, lwttest.yaml
>
>
> It would be nice to have light-weight transaction support in 
> cassandra-stress.
> Currently in cassandra-stress we can partially achieve light-weight 
> transactions by using static conditions like "IF col1 != null" or 
> "IF NOT EXISTS". 
> It would be ideal to have full-fledged light-weight transaction support like 
> "IF col1 = ? AND col2 = ?". One way to implement this is to read values from 
> Cassandra and use them in the condition so that all the Paxos phases execute 
> in Cassandra.
> Please find git link for the patch: 
> https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:13529-trunk?expand=1
> ||trunk|
> |[branch|https://github.com/jaydeepkumar1984/cassandra/tree/13529-trunk]|
> |[utests|https://circleci.com/gh/jaydeepkumar1984/cassandra/8]|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14465) Consider logging prepared statements bound values in Audit Log

2018-06-13 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511439#comment-16511439
 ] 

Jason Brown commented on CASSANDRA-14465:
-

I'm kind of in favor of [~eperott]'s option 3. Making it configurable 
(defaulting to off) offers the most flexibility with the least potential impact 
on performance.
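
As a sketch of what that could look like (the flag name and logger shape here 
are assumptions for illustration, not the actual patch):

{code:java}
// Sketch: gate bound-value logging behind a yaml-style flag that
// defaults to off, so the default path pays no formatting cost.
public class BoundValueAuditFormatter
{
    private final boolean logBoundValues; // hypothetical config, default false

    public BoundValueAuditFormatter(boolean logBoundValues)
    {
        this.logBoundValues = logBoundValues;
    }

    public String format(String query, java.util.List<Object> boundValues)
    {
        if (!logBoundValues || boundValues.isEmpty())
            return query;
        return query + " [bound values: " + boundValues + "]";
    }
}
{code}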

> Consider logging prepared statements bound values in Audit Log
> --
>
> Key: CASSANDRA-14465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14465
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vinay Chella
>Priority: Minor
>
> The goal of this ticket is to determine the best way to implement audit 
> logging of the actual bound values from prepared statement execution. The 
> current default implementation does not log bound values.
> Here are the options I see:
>  1. Log bound values of prepared statements 
>  2. Let a custom implementation of IAuditLogger decide what to do
> *Context*:
>  Option #1: Works for teams which expect bind values to be logged in the audit 
> log without any security or compliance concerns
>  Option #2: Allows teams to make the best choice for themselves
> Note that the effort of securing C* clusters with certs, authentication, and 
> audit logging can go to waste when the log rotation and log aggregation 
> systems are not equally secure, since logging bind values allows someone to 
> replay the database events and expose sensitive data.
> [~spo...@gmail.com] [~jasobrown]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Reopened] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread Jeremiah Jordan (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-10751:
-

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down but we can't understand 
> why ...
> Here is an extract of the errors. I also attach a file with the complete debug 
> logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
>   at 
> 

[jira] [Resolved] (CASSANDRA-14317) Auditing Plug-in for Cassandra

2018-06-13 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown resolved CASSANDRA-14317.
-
Resolution: Won't Fix

Implementing a plug-in model is far beyond the scope of achieving a stable 3.11 
at this time. Thus, I'm closing this as CASSANDRA-12151 is now committed. 

[~vinaykumarcse] has told me that backporting CASSANDRA-12151 is not an overly 
burdensome (yet not completely simple) task. We won't do that for 3.11, but you 
might want to try it if you need auditing in 3.11.

> Auditing Plug-in for Cassandra
> --
>
> Key: CASSANDRA-14317
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14317
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
> Environment: Cassandra 3.11.x
>Reporter: Anuj Wadehra
>Priority: Major
>  Labels: security
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> Cassandra lacks database auditing feature. Till the new feature is 
> implemented as part of CASSANDRA-12151, a database auditing plug-in can be 
> built. The plug-in can be implemented and plugged into Cassandra by 
> customizing components such as Query Handler , Authenticator and Role 
> Manager. The Auditing plug-in shall log all CQL queries and user logins. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread Jeremiah Jordan (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511413#comment-16511413
 ] 

Jeremiah Jordan edited comment on CASSANDRA-10751 at 6/13/18 4:47 PM:
--

[~michaelsembwever] the C* 3.0+ java drivers to not quote that.  This change is 
bad on versions 3.0+ and breaks stuff.

https://github.com/datastax/java-driver/blob/3.0.8/driver-core/src/main/java/com/datastax/driver/core/Cluster.java#L336


was (Author: jjordan):
[~michaelsembwever] the C* 3.0+ java drivers to not quote that.  This change is 
bad on versions 3.0+ and breaks stuff.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down but we can't understand 
> why ...
> Here is an extract of the errors. I also attach a file with the complete debug 
> logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> 

[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread Jeremiah Jordan (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511413#comment-16511413
 ] 

Jeremiah Jordan edited comment on CASSANDRA-10751 at 6/13/18 4:47 PM:
--

[~michaelsembwever] the C* 3.0+ java drivers do not quote that.  This change is 
bad on versions 3.0+ and breaks stuff.

https://github.com/datastax/java-driver/blob/3.0.8/driver-core/src/main/java/com/datastax/driver/core/Cluster.java#L336


was (Author: jjordan):
[~michaelsembwever] the C* 3.0+ java drivers to not quote that.  This change is 
bad on versions 3.0+ and breaks stuff.

https://github.com/datastax/java-driver/blob/3.0.8/driver-core/src/main/java/com/datastax/driver/core/Cluster.java#L336

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down but we can't understand 
> why ...
> Here is an extract of the errors. I also attach a file with the complete debug 
> logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: 

[jira] [Comment Edited] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread Jeremiah Jordan (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511413#comment-16511413
 ] 

Jeremiah Jordan edited comment on CASSANDRA-10751 at 6/13/18 4:44 PM:
--

[~michaelsembwever] the C* 3.0+ java drivers to not quote that.  This change is 
bad on versions 3.0+ and breaks stuff.


was (Author: jjordan):
[~michaelsembwever] the C* 3.0+ java drivers to not quote that.  This change is 
bad.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down but we can't understand 
> why ...
> Here is an extract of the errors. I also attach a file with the complete debug 
> logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> 

[jira] [Commented] (CASSANDRA-10751) "Pool is shutdown" error when running Hadoop jobs on Yarn

2018-06-13 Thread Jeremiah Jordan (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511413#comment-16511413
 ] 

Jeremiah Jordan commented on CASSANDRA-10751:
-

[~michaelsembwever] the C* 3.0+ java drivers to not quote that.  This change is 
bad.

> "Pool is shutdown" error when running Hadoop jobs on Yarn
> -
>
> Key: CASSANDRA-10751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10751
> Project: Cassandra
>  Issue Type: Bug
> Environment: Hadoop 2.7.1 (HDP 2.3.2)
> Cassandra 2.1.11
>Reporter: Cyril Scetbon
>Assignee: Cyril Scetbon
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
> Attachments: CASSANDRA-10751-2.2.patch, CASSANDRA-10751-3.0.patch, 
> output.log
>
>
> Trying to execute a Hadoop job on Yarn, I get errors from Cassandra's 
> internal code. It seems that connections are shut down but we can't understand 
> why ...
> Here is an extract of the errors. I also attach a file with the complete debug 
> logs.
> {code}
> 15/11/22 20:05:54 [main]: DEBUG core.RequestHandler: Error querying 
> node006.internal.net/192.168.12.22:9042, trying next host (error is: 
> com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown)
> Failed with exception java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> 15/11/22 20:05:54 [main]: ERROR CliDriver: Failed with exception 
> java.io.IOException:java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
> java.io.IOException: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:415)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1672)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.io.IOException: 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> org.apache.hadoop.hive.cassandra.input.cql.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:132)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
>   ... 15 more
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: 
> node006.internal.net/192.168.12.22:9042 
> (com.datastax.driver.core.ConnectionException: 
> [node006.internal.net/192.168.12.22:9042] Pool is shutdown))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> 

[jira] [Updated] (CASSANDRA-14470) Repair validation failed/unable to create merkle tree

2018-06-13 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14470:

Attachment: (was: Warning - iworkcassandramgr - Abnormal instance 
load.eml)

> Repair validation failed/unable to create merkle tree
> -
>
> Key: CASSANDRA-14470
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14470
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Harry Hough
>Priority: Major
>
> I had trouble repairing with a full repair across all nodes and keyspaces, so 
> I switched to going table by table. This table will not repair even after a 
> scrub/restart of all nodes. I am using the command:
> {code:java}
> nodetool repair -full -seq keyspace table
> {code}
> {code:java}
> [2018-05-25 19:26:36,525] Repair session 0198ee50-6050-11e8-a3b7-9d0793eab507 
> for range [(165598500763544933,166800441975877433], 
> (-5455068259072262254,-5445777107512274819], 
> (-4614366950466274594,-4609359222424798148], 
> (3417371506258365094,3421921915575816226], 
> (5221788898381458942,5222846663270250559], 
> (3421921915575816226,3429175540277204991], 
> (3276484330153091115,3282213186258578546], 
> (-3306169730424140596,-3303439264231406101], 
> (5228704360821395206,5242415853745535023], 
> (5808045095951939338,5808562658315740708], 
> (-3303439264231406101,-3302592736123212969]] finished (progress: 1%)
> [2018-05-25 19:27:23,848] Repair session 0180f980-6050-11e8-a3b7-9d0793eab507 
> for range [(-8495158945319933291,-8482949618583319581], 
> (1803296697741516342,1805330812863783941], 
> (8633191319643427141,8637771071728131257], 
> (2214097236323810344,2218253238829661319], 
> (8637771071728131257,8639627594735133685], 
> (2195525904029414718,2214097236323810344], 
> (-8500127431270773970,-8495158945319933291], 
> (7151693083782264341,7152162989417914407], 
> (-8482949618583319581,-8481973749935314249]] finished (progress: 1%)
> [2018-05-25 19:30:32,590] Repair session 01ac9d62-6050-11e8-a3b7-9d0793eab507 
> for range [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731], 
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]] failed with error [repair 
> #01ac9d62-6050-11e8-a3b7-9d0793eab507 on keyspace/table, 
> [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731],
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]]] Validation failed in 
> /192.168.8.64 (progress: 1%)
> [2018-05-25 19:30:38,744] Repair session 01ab16c1-6050-11e8-a3b7-9d0793eab507 
> for range [(4474598255414218354,4477186372547790770], 
> (-8368931070988054567,-8367389908801757978], 
> (4445104759712094068,4445123832517144036], 
> (6749641233379918040,6749879473217708908], 
> (717627050679001698,729408043324000761], 
> (8984622403893999385,8990662643404904110], 
> (4457612694557846994,4474598255414218354], 
> (5589049422573545528,5593079877787783784], 
> (3609693317839644945,3613727999875360405], 
> (8499016262183246473,8504603366117127178], 
> (-5421277973540712245,-5417725796037372830], 
> (5586405751301680690,5589049422573545528], 
> (-2611069890590917549,-2603911539353128123], 
> (2424772330724108233,2427564448454334730], 
> (3172651438220766183,3175226710613527829], 
> (4445123832517144036,4457612694557846994], 
> (-6827531712183440570,-6800863837312326365], 
> (5593079877787783784,5596020904874304252], 
> (716705770783505310,717627050679001698], 
> (115377252345874298,119626359210683992], 
> (239394377432130766,240250561347730054]] failed with error [repair 
> #01ab16c1-6050-11e8-a3b7-9d0793eab507 on keyspace/table, 
> [(4474598255414218354,4477186372547790770], 
> (-8368931070988054567,-8367389908801757978], 
> (4445104759712094068,4445123832517144036], 
> (6749641233379918040,6749879473217708908], 
> (717627050679001698,729408043324000761], 
> (8984622403893999385,8990662643404904110], 
> 

[jira] [Commented] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions

2018-06-13 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511340#comment-16511340
 ] 

Marcus Eriksson commented on CASSANDRA-14397:
-

[~ifesdjeen] thanks for the review, all good points. I have pushed a commit 
fixing them; tests should be running.

> Stop compactions quicker when compacting wide partitions
> 
>
> Key: CASSANDRA-14397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> We should allow compactions to be stopped when compacting wide partitions, 
> this will help when a user wants to run upgradesstables for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14470) Repair validation failed/unable to create merkle tree

2018-06-13 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14470:

Attachment: Warning - iworkcassandramgr - Abnormal instance load.eml

> Repair validation failed/unable to create merkle tree
> -
>
> Key: CASSANDRA-14470
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14470
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Harry Hough
>Priority: Major
> Attachments: Warning - iworkcassandramgr - Abnormal instance load.eml
>
>
> I had trouble repairing with a full repair across all nodes and keyspaces, so 
> I switched to going table by table. This table will not repair even after a 
> scrub/restart of all nodes. I am using the command:
> {code:java}
> nodetool repair -full -seq keyspace table
> {code}
> {code:java}
> [2018-05-25 19:26:36,525] Repair session 0198ee50-6050-11e8-a3b7-9d0793eab507 
> for range [(165598500763544933,166800441975877433], 
> (-5455068259072262254,-5445777107512274819], 
> (-4614366950466274594,-4609359222424798148], 
> (3417371506258365094,3421921915575816226], 
> (5221788898381458942,5222846663270250559], 
> (3421921915575816226,3429175540277204991], 
> (3276484330153091115,3282213186258578546], 
> (-3306169730424140596,-3303439264231406101], 
> (5228704360821395206,5242415853745535023], 
> (5808045095951939338,5808562658315740708], 
> (-3303439264231406101,-3302592736123212969]] finished (progress: 1%)
> [2018-05-25 19:27:23,848] Repair session 0180f980-6050-11e8-a3b7-9d0793eab507 
> for range [(-8495158945319933291,-8482949618583319581], 
> (1803296697741516342,1805330812863783941], 
> (8633191319643427141,8637771071728131257], 
> (2214097236323810344,2218253238829661319], 
> (8637771071728131257,8639627594735133685], 
> (2195525904029414718,2214097236323810344], 
> (-8500127431270773970,-8495158945319933291], 
> (7151693083782264341,7152162989417914407], 
> (-8482949618583319581,-8481973749935314249]] finished (progress: 1%)
> [2018-05-25 19:30:32,590] Repair session 01ac9d62-6050-11e8-a3b7-9d0793eab507 
> for range [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731], 
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]] failed with error [repair 
> #01ac9d62-6050-11e8-a3b7-9d0793eab507 on keyspace/table, 
> [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731],
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]]] Validation failed in 
> /192.168.8.64 (progress: 1%)
> [2018-05-25 19:30:38,744] Repair session 01ab16c1-6050-11e8-a3b7-9d0793eab507 
> for range [(4474598255414218354,4477186372547790770], 
> (-8368931070988054567,-8367389908801757978], 
> (4445104759712094068,4445123832517144036], 
> (6749641233379918040,6749879473217708908], 
> (717627050679001698,729408043324000761], 
> (8984622403893999385,8990662643404904110], 
> (4457612694557846994,4474598255414218354], 
> (5589049422573545528,5593079877787783784], 
> (3609693317839644945,3613727999875360405], 
> (8499016262183246473,8504603366117127178], 
> (-5421277973540712245,-5417725796037372830], 
> (5586405751301680690,5589049422573545528], 
> (-2611069890590917549,-2603911539353128123], 
> (2424772330724108233,2427564448454334730], 
> (3172651438220766183,3175226710613527829], 
> (4445123832517144036,4457612694557846994], 
> (-6827531712183440570,-6800863837312326365], 
> (5593079877787783784,5596020904874304252], 
> (716705770783505310,717627050679001698], 
> (115377252345874298,119626359210683992], 
> (239394377432130766,240250561347730054]] failed with error [repair 
> #01ab16c1-6050-11e8-a3b7-9d0793eab507 on keyspace/table, 
> [(4474598255414218354,4477186372547790770], 
> (-8368931070988054567,-8367389908801757978], 
> (4445104759712094068,4445123832517144036], 
> (6749641233379918040,6749879473217708908], 
> (717627050679001698,729408043324000761], 

[jira] [Commented] (CASSANDRA-14517) Short read protection can cause partial updates to be read

2018-06-13 Thread Blake Eggleston (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16511271#comment-16511271
 ] 

Blake Eggleston commented on CASSANDRA-14517:
-

It doesn't break the guarantee on the write path; partitions are still updated 
atomically internally. It breaks on the read path, though, since an SRP 
follow-up read can create a read response that contains a torn write.
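
To make the failure mode concrete, a self-contained sketch (plain Java, not 
Cassandra code) of how merging two fetches around an SRP break can surface a 
torn write:

{code:java}
import java.util.NavigableMap;
import java.util.SortedMap;
import java.util.TreeMap;

// A "partition" with clustering keys 1..4. The first fetch covers keys
// below the SRP break; a single-partition batch (atomic on the write
// path) lands before the SRP follow-up fetch, so the merged response
// mixes old and new values: a torn write at the coordinator.
public class SrpTornReadSketch
{
    public static void main(String[] args)
    {
        NavigableMap<Integer, String> partition = new TreeMap<>();
        for (int ck = 1; ck <= 4; ck++)
            partition.put(ck, "v1");

        // First fetch: rows before the break (clustering < 3).
        SortedMap<Integer, String> page1 = new TreeMap<>(partition.headMap(3));

        // Concurrent batch rewrites the whole partition atomically.
        for (int ck = 1; ck <= 4; ck++)
            partition.put(ck, "v2");

        // SRP follow-up fetch: rows from the break onwards, new state.
        SortedMap<Integer, String> page2 = new TreeMap<>(partition.tailMap(3));

        SortedMap<Integer, String> merged = new TreeMap<>(page1);
        merged.putAll(page2);
        System.out.println(merged); // {1=v1, 2=v1, 3=v2, 4=v2} -- torn
    }
}
{code}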

> Short read protection can cause partial updates to be read
> --
>
> Key: CASSANDRA-14517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14517
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> If a read is performed in two parts due to short read protection, and the 
> data being read is written to between reads, the coordinator will return a 
> partial update. Specifically, this will occur if a single partition batch 
> updates clustering values on both sides of the SRP break, or if a range 
> tombstone is written that deletes data on both sides of the break. At the 
> coordinator level, this breaks the expectation that updates to a partition 
> are atomic, and that you can’t see partial updates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13529) cassandra-stress light-weight transaction support

2018-06-13 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-13529:

   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Patch Available)

OK, finally got around to finishing up the review here.

Everything was largely fine, but I've made some editorial choices:

- moved {{ArgSelect}} to {{SchemaStatement}}; it seemed out of place in a 
'utils' class.
- simplified the implementation of {{dynamicConditionExists}}, moved it into 
{{StressProfile}}, and thus eliminated {{QueryUtils}}.
- eliminated the double-checked locking in {{CASQuery.JavaDriverRun#bind()}} by 
moving the {{client#bind()}} call into the constructor of the {{JavaDriverRun}}.
- made some fields final and collections immutable in {{CASQuery}}
- made a few other minor cleanups (nothing significant)

Committed as sha {{2aeed037e0f105e72366e15afa012257e910a25d}}. Thanks, 
[~chovatia.jayd...@gmail.com] and [~djoshi3]!
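
For context, the double-checked-locking removal mentioned above, sketched with 
hypothetical names (this is not the committed code):

{code:java}
// Before (sketch): lazily bind with double-checked locking.
// After (sketch): bind eagerly in the constructor so the field is
// final and safely published without any locking at all.
class JavaDriverRunSketch
{
    interface Client { Object bind(); }

    private final Object boundStatement; // stand-in for a BoundStatement

    JavaDriverRunSketch(Client client)
    {
        this.boundStatement = client.bind(); // thread-safe by construction
    }
}
{code}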

> cassandra-stress light-weight transaction support
> -
>
> Key: CASSANDRA-13529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13529
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Stress
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
>  Labels: LWT
> Fix For: 4.0
>
> Attachments: 13529.txt, lwttest.yaml
>
>
> It would be nice to have light-weight transaction support in 
> cassandra-stress.
> Currently in cassandra-stress we can partially achieve light-weight 
> transactions by using static conditions like "IF col1 != null" or 
> "IF NOT EXISTS". 
> It would be ideal to have full-fledged light-weight transaction support like 
> "IF col1 = ? AND col2 = ?". One way to implement this is to read values from 
> Cassandra and use them in the condition so that all the Paxos phases execute 
> in Cassandra.
> Please find git link for the patch: 
> https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:13529-trunk?expand=1
> ||trunk|
> |[branch|https://github.com/jaydeepkumar1984/cassandra/tree/13529-trunk]|
> |[utests|https://circleci.com/gh/jaydeepkumar1984/cassandra/8]|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Support light-weight transactions in cassandra-stress

2018-06-13 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6da5fb56c -> 2aeed037e


Support light-weight transactions in cassandra-stress

patch by Jaydeepkumar Chovatia; reviewed by Dinesh Joshi and jasobrown for 
CASSANDRA-13529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2aeed037
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2aeed037
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2aeed037

Branch: refs/heads/trunk
Commit: 2aeed037e0f105e72366e15afa012257e910a25d
Parents: 6da5fb5
Author: Jaydeepkumar Chovatia 
Authored: Fri May 12 17:23:44 2017 -0700
Committer: Jason Brown 
Committed: Wed Jun 13 05:46:20 2018 -0700

--
 CHANGES.txt |   1 +
 doc/source/tools/cassandra_stress.rst   |  18 ++
 doc/source/tools/stress-lwt-example.yaml|  70 ++
 .../cql3/conditions/ColumnCondition.java|   7 +-
 .../cql3/statements/ModificationStatement.java  |  11 +
 .../apache/cassandra/stress/StressProfile.java  |  46 +++-
 .../stress/generate/PartitionGenerator.java |  10 +
 .../stress/operations/PartitionOperation.java   |  13 +-
 .../stress/operations/userdefined/CASQuery.java | 227 +++
 .../operations/userdefined/SchemaQuery.java |  11 +-
 .../operations/userdefined/SchemaStatement.java |   6 +
 11 files changed, 403 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2aeed037/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6819711..629df0c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Support light-weight transactions in cassandra-stress (CASSANDRA-13529)
  * Make AsyncOneResponse use the correct timeout (CASSANDRA-14509)
  * Add option to sanity check tombstones on reads/compactions (CASSANDRA-14467)
  * Add a virtual table to expose all running sstable tasks (CASSANDRA-14457)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2aeed037/doc/source/tools/cassandra_stress.rst
--
diff --git a/doc/source/tools/cassandra_stress.rst 
b/doc/source/tools/cassandra_stress.rst
index 322a981..bcac54e 100644
--- a/doc/source/tools/cassandra_stress.rst
+++ b/doc/source/tools/cassandra_stress.rst
@@ -220,6 +220,24 @@ Running a user mode test with multiple yaml files::
 This will run operations as specified in both the example.yaml and 
example2.yaml files. example.yaml and example2.yaml can reference the same table
  although care must be taken that the table definition is identical (data 
generation specs can be different).
 
+Lightweight transaction support
+-------------------------------
+
+cassandra-stress supports lightweight transactions. In this mode it will first 
read the current data from Cassandra and then use the read value(s)
+to fulfill the lightweight transaction condition(s).
+
+Lightweight transaction update query::
+
+queries:
+  regularupdate:
+  cql: update blogposts set author = ? where domain = ? and 
published_date = ?
+  fields: samerow
+  updatewithlwt:
+  cql: update blogposts set author = ? where domain = ? and 
published_date = ? IF body = ? AND url = ?
+  fields: samerow
+
+The full example can be found here :download:`yaml <./stress-lwt-example.yaml>`
+
 Graphing
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2aeed037/doc/source/tools/stress-lwt-example.yaml
--
diff --git a/doc/source/tools/stress-lwt-example.yaml 
b/doc/source/tools/stress-lwt-example.yaml
new file mode 100644
index 000..fc5db08
--- /dev/null
+++ b/doc/source/tools/stress-lwt-example.yaml
@@ -0,0 +1,70 @@
+# Keyspace Name
+keyspace: stresscql
+
+# The CQL for creating a keyspace (optional if it already exists)
+# Would almost always be network topology unless running something locally
+keyspace_definition: |
+  CREATE KEYSPACE stresscql WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
+
+# Table name
+table: blogposts
+
+# The CQL for creating a table you wish to stress (optional if it already 
exists)
+table_definition: |
+  CREATE TABLE blogposts (
+domain text,
+published_date timeuuid,
+url text,
+author text,
+title text,
+body text,
+PRIMARY KEY(domain, published_date)
+  ) WITH CLUSTERING ORDER BY (published_date DESC) 
+AND compaction = { 'class':'LeveledCompactionStrategy' } 
+AND comment='A table to hold blog posts'
+
+### Column Distribution Specifications ###
+ 
+columnspec:
+  - name: domain
+size: gaussian(5..100)   #domain names are relatively short
+population: 

[jira] [Comment Edited] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions

2018-06-13 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510859#comment-16510859
 ] 

Alex Petrov edited comment on CASSANDRA-14397 at 6/13/18 9:42 AM:
--

[~krummas] thank you for the patch! It looks good, I just have 2 minor comments:

  * We might benefit from also adding the call to {{isStopRequested}} to 
{{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way 
[CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159]
 and 
[CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201]
 won't need to have this logic separately. Also, it will be self-contained in a 
single class, which might be a good thing. The downside of that is that 
upstream transformations {{applyToPartition}} calls will be still made, but it 
might be minor enough considering potentially simpler code. What do you think?
  * I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" 
test (also, get rid of sleeps there), and so far 
[this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] was 
what I came up with. In short, relying on timing is a good thing for review and 
it was nice to be able to use with this test while reviewing, but testing 
compaction iterator directly might have a benefit of both knowing that 
{{CompactionInterruptedException}} is thrown in a precise moment in time, and 
might remove some flakiness (which to be honest I could not reproduce locally, 
but as your comment says it begs to).


was (Author: ifesdjeen):
[~krummas] thank you for the patch! It looks good, I just have 2 minor comments:

  * We might benefit from also adding the call to {{isStopRequested}} to 
{{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way 
[CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159]
 and 
[CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201]
 won't need to have this logic separately. Also, it will be self-contained in a 
single class, which might be a good thing. The downside of that is that 
upstream transformations {{applyToPartition}} calls will be still made, but it 
might be minor enough considering potentially simpler code. What do you think?
  * I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" 
test (also, get rid of sleeps there), and so far 
[this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] was 
what I came up with. In short, relying on timing is a good thing for review and 
it was nice to be able to use with this test while reviewing, but testing 
compaction iterator directly might have a benefit of both knowing that 
[CompactionInterruptedException] is thrown in a precise moment in time, and 
might remove some flakiness (which to be honest I could not reproduce locally, 
but as your comment says it begs to).
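
A minimal sketch of the centralised stop check being suggested above; the 
surrounding Transformation API is simplified and the names are illustrative:

{code:java}
// Sketch: check the stop flag once per partition inside the abortable
// transformation, so callers such as doCleanupOne and runMayThrow no
// longer need their own checks.
class CompactionInterrupted extends RuntimeException {}

abstract class AbortablePartitionTransformationSketch<T>
{
    abstract boolean isStopRequested();

    T applyToPartition(T partition)
    {
        if (isStopRequested())
            throw new CompactionInterrupted();
        return partition;
    }
}
{code}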

> Stop compactions quicker when compacting wide partitions
> 
>
> Key: CASSANDRA-14397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> We should allow compactions to be stopped when compacting wide partitions, 
> this will help when a user wants to run upgradesstables for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions

2018-06-13 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510859#comment-16510859
 ] 

Alex Petrov edited comment on CASSANDRA-14397 at 6/13/18 9:41 AM:
--

[~krummas] thank you for the patch! It looks good, I just have 2 minor comments:

  * We might benefit from also adding the call to {{isStopRequested}} to {{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way [CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159] and [CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201] won't need to have this logic separately, and it will be self-contained in a single class, which might be a good thing. The downside is that upstream transformations' {{applyToPartition}} calls will still be made, but that might be a minor cost considering the potentially simpler code. What do you think? (A toy sketch of this pattern follows after this list.)
  * I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" test (and to get rid of the sleeps there); so far [this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] is what I came up with. In short, the timing-based test was handy while reviewing, but driving the compaction iterator directly has the benefit of both verifying that {{CompactionInterruptedException}} is thrown at a precise point and removing some flakiness (which, to be honest, I could not reproduce locally, but which your comment suggests can happen).
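For illustration only, here is a minimal, self-contained sketch of the pattern discussed in the first bullet: a transformation over the partition stream that consults a shared stop flag once per partition and aborts by throwing, so no caller needs its own copy of the check. All names here ({{ToyStopException}}, {{AbortablePartitions}}) are made-up stand-ins, not the actual Cassandra classes:

{code:java}
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for CompactionInterruptedException.
class ToyStopException extends RuntimeException
{
    ToyStopException() { super("stop requested"); }
}

// Toy model of the transformation: wraps the stream of partitions and checks
// the stop flag once per partition, so cleanup/compaction callers no longer
// need to duplicate the check themselves.
class AbortablePartitions implements Iterator<List<String>>
{
    private final Iterator<List<String>> upstream;
    private final AtomicBoolean stopRequested;

    AbortablePartitions(Iterator<List<String>> upstream, AtomicBoolean stopRequested)
    {
        this.upstream = upstream;
        this.stopRequested = stopRequested;
    }

    public boolean hasNext()
    {
        return upstream.hasNext();
    }

    public List<String> next()
    {
        // The single, centralized stop check, fired at each partition boundary.
        if (stopRequested.get())
            throw new ToyStopException();
        return upstream.next();
    }
}
{code}

The toy only shows where a single shared check could live; the granularity of the check (per partition vs. inside a wide partition) is a separate question that the patch itself addresses.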


was (Author: ifesdjeen):
[~krummas] thank you for the patch! It looks good, I just have 2 minor comments:

  * It looks like we might benefit from also adding the call to {{isStopRequested}} to {{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way [CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159] and [CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201] won't need to have this logic separately, and it will be self-contained in a single class, which might be a good thing. The downside is that upstream transformations' {{applyToPartition}} calls will still be made, but that might be a minor cost considering the potentially simpler code. What do you think?
  * Another thing: I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" test (and to get rid of the sleeps there); so far [this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] is what I came up with. In short, the timing-based test was handy while reviewing, but driving the compaction iterator directly has the benefit of both verifying that {{CompactionInterruptedException}} is thrown at a precise point and removing some flakiness (which, to be honest, I could not reproduce locally, but which your comment suggests can happen).

> Stop compactions quicker when compacting wide partitions
> 
>
> Key: CASSANDRA-14397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> We should allow compactions to be stopped when compacting wide partitions;
> this will help when a user wants to run upgradesstables, for example.






[jira] [Commented] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions

2018-06-13 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510859#comment-16510859
 ] 

Alex Petrov commented on CASSANDRA-14397:
-

[~krummas] thank you for the patch! It looks good, I just have 2 minor comments:

  * It looks like we might benefit from also adding the call to {{isStopRequested}} to {{AbortableUnfilteredPartitionTransformation#applyToPartition}}. This way [CompactionManager#doCleanupOne|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1158-L1159] and [CompactionTask#runMayThrow|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L200-L201] won't need to have this logic separately, and it will be self-contained in a single class, which might be a good thing. The downside is that upstream transformations' {{applyToPartition}} calls will still be made, but that might be a minor cost considering the potentially simpler code. What do you think?
  * Another thing: I've tried to reuse {{CompactionIteratorTest}} to write a more "precise" test (and to get rid of the sleeps there); so far [this|https://gist.github.com/ifesdjeen/4caf84423fa321ceca79d9b8c041fe0c] is what I came up with. In short, the timing-based test was handy while reviewing, but driving the compaction iterator directly has the benefit of both verifying that {{CompactionInterruptedException}} is thrown at a precise point and removing some flakiness (which, to be honest, I could not reproduce locally, but which your comment suggests can happen). (A toy illustration of this idea follows below.)
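As a purely illustrative sketch of the deterministic-test idea (not the actual {{CompactionIteratorTest}}; all names below are made up), the stop flag can be flipped by the row source itself at a known position, so the test can assert exactly where the interruption surfaces instead of relying on sleeps:

{code:java}
import java.util.Iterator;
import java.util.concurrent.atomic.AtomicBoolean;

public class DeterministicStopSketch
{
    static class ToyStopException extends RuntimeException {}

    // Produces rows 0..count-1 and requests a stop while producing row `stopAt`,
    // making the interruption point exact rather than timing-dependent.
    static Iterator<Integer> rows(int count, int stopAt, AtomicBoolean stop)
    {
        return new Iterator<Integer>()
        {
            int next = 0;
            public boolean hasNext() { return next < count; }
            public Integer next()
            {
                if (next == stopAt)
                    stop.set(true);
                return next++;
            }
        };
    }

    public static void main(String[] args)
    {
        AtomicBoolean stop = new AtomicBoolean(false);
        Iterator<Integer> source = rows(100, 42, stop);
        int consumed = 0;
        try
        {
            while (source.hasNext())
            {
                if (stop.get())
                    throw new ToyStopException(); // stand-in for the isStopRequested() check
                source.next();
                consumed++;
            }
            throw new AssertionError("expected the loop to be stopped");
        }
        catch (ToyStopException expected)
        {
            // The flag was set while row 42 was produced, so exactly rows 0..42
            // (43 rows) were consumed before the next check fired.
            if (consumed != 43)
                throw new AssertionError("consumed " + consumed);
        }
        System.out.println("stopped deterministically after " + consumed + " rows");
    }
}
{code}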

> Stop compactions quicker when compacting wide partitions
> 
>
> Key: CASSANDRA-14397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> We should allow compactions to be stopped when compacting wide partitions;
> this will help when a user wants to run upgradesstables, for example.






[jira] [Commented] (CASSANDRA-14517) Short read protection can cause partial updates to be read

2018-06-13 Thread Stefan Podkowinski (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510786#comment-16510786
 ] 

Stefan Podkowinski commented on CASSANDRA-14517:


I'm a bit confused about which atomicity and partial-update guarantees this ticket refers to. How can the extra reads potentially triggered by SRP on the coordinator (which are just regular reads) break the guarantee that partition updates are atomic?

> Short read protection can cause partial updates to be read
> --
>
> Key: CASSANDRA-14517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14517
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> If a read is performed in two parts due to short read protection, and the 
> data being read is written to between reads, the coordinator will return a 
> partial update. Specifically, this will occur if a single partition batch 
> updates clustering values on both sides of the SRP break, or if a range 
> tombstone is written that deletes data on both sides of the break. At the 
> coordinator level, this breaks the expectation that updates to a partition 
> are atomic, and that you can’t see partial updates.
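To make the described interleaving concrete, here is a single-process toy model of the race (illustrative only, not Cassandra code; replicas, limits, and tombstones are all abstracted away): the coordinator reads a partition in two chunks, and a batch touching clustering keys on both sides of the SRP break lands between the chunks, so the merged result observes the batch half-applied.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class SrpRaceSketch
{
    public static void main(String[] args)
    {
        // The partition as stored on a replica, keyed by clustering value.
        TreeMap<Integer, String> replica = new TreeMap<>();
        for (int ck = 1; ck <= 6; ck++)
            replica.put(ck, "v0");

        // Read, part 1: the short read ends after ck=3 (the "SRP break").
        Map<Integer, String> merged = new LinkedHashMap<>(replica.headMap(4));

        // A single-partition batch lands between the two reads, updating rows
        // on BOTH sides of the break: ck=2 and ck=5.
        replica.put(2, "v1");
        replica.put(5, "v1");

        // Read, part 2: the SRP follow-up fetches the rows after the break.
        merged.putAll(replica.tailMap(4));

        // ck=2 still shows the pre-batch value while ck=5 shows the post-batch
        // value, i.e. the read observed the batch only partially applied.
        System.out.println(merged); // {1=v0, 2=v0, 3=v0, 4=v0, 5=v1, 6=v0}
    }
}
{code}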


