[jira] [Updated] (CASSANDRA-14598) [dtest] flakey test: test_decommissioned_node_cant_rejoin - topology_test.TestTopology

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14598:
-
Component/s: Testing

> [dtest] flakey test: test_decommissioned_node_cant_rejoin - 
> topology_test.TestTopology
> --
>
> Key: CASSANDRA-14598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14598
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jason Brown
>Priority: Minor
>  Labels: dtest
>
> Only saw this fail on 3.0, but it looks like a problem with the dtest itself 
> (under some failure scenario). Output from pytest error:
> {noformat}
> > assert re.search(rejoin_err,
> >  '\n'.join(['\n'.join(err_list) for err_list in 
> > node3.grep_log_for_errors()]), re.MULTILINE)
> E AssertionError: assert None
> E + where None = <function search>('This node was 
> decommissioned and will not rejoin the ring', '', <RegexFlag.MULTILINE>)
> E + where <function search> = re.search
> E + and '' = <built-in method join of str object>([])
> E + where <built-in method join of str object> = '\n'.join
> E + and <RegexFlag.MULTILINE> = re.MULTILINE
> {noformat}
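If the failure scenario is simply that the grep runs before the log line lands, the assertion can be made race-tolerant by polling. A sketch, not the dtest's actual fix: the `wait_for_log_error` helper, its timeout defaults, and the polling approach are assumptions, and `grep_log_for_errors()` is assumed to behave as ccmlib's does (returning a list of lists of lines).

```python
import re
import time

def wait_for_log_error(node, pattern, timeout=30.0, interval=0.5):
    """Poll a node's error log until `pattern` matches or `timeout` expires.

    Returns the re.Match on success, or None on timeout, so the caller can
    still write `assert wait_for_log_error(...)` as before.
    """
    deadline = time.monotonic() + timeout
    while True:
        # Same flattening the dtest does: one string of all error lines.
        errors = '\n'.join(
            '\n'.join(err_list) for err_list in node.grep_log_for_errors())
        match = re.search(pattern, errors, re.MULTILINE)
        if match is not None or time.monotonic() >= deadline:
            return match
        time.sleep(interval)
```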



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14708) protocol v5 duration wire format is overly complex and awkward to implement for clients

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14708:
-
Component/s: CQL

> protocol v5 duration wire format is overly complex and awkward to implement 
> for clients
> ---
>
> Key: CASSANDRA-14708
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14708
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Chris Bannister
>Priority: Major
>
> Protocol V5 defines the duration type to be on the wire as months, days and 
> nanoseconds. Days and months require a timezone to make sense of the duration, 
> and their length varies depending on the point in time from which they are applied.
>  
> Go defines a [duration|https://golang.org/pkg/time/#Duration] type as 
> nanoseconds in an int64, which can represent ~290 years. A Java 
> [duration|https://docs.oracle.com/javase/8/docs/api/java/time/Duration.html] 
> does not have a way to handle months.
>  
> I suggest that before 4.0 is released, the duration format be converted to 
> just a nanoseconds representation.
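For contrast, a sketch of the two shapes. The varint below is a simplified LEB128-style encoding for illustration only, not the exact vint layout from the v5 spec; `encode_duration_nanos` shows the single-field alternative proposed above.

```python
import struct

def zigzag(n: int) -> int:
    """Map a signed int64 to an unsigned one (0, -1, 1, -2 -> 0, 1, 2, 3)."""
    return ((n << 1) ^ (n >> 63)) & (2**64 - 1)

def encode_duration_v5(months: int, days: int, nanos: int) -> bytes:
    """v5-style triple: three zigzag varints, variable length per field.
    (Simplified LEB128 varint for illustration, not Cassandra's exact vint.)"""
    out = bytearray()
    for field in (months, days, nanos):
        v = zigzag(field)
        while True:
            b = v & 0x7F
            v >>= 7
            out.append(b | (0x80 if v else 0))
            if not v:
                break
    return bytes(out)

def encode_duration_nanos(nanos: int) -> bytes:
    """The proposed alternative: one signed big-endian 64-bit nanosecond count."""
    return struct.pack('>q', nanos)
```

A client implementing the alternative needs only a fixed-width read, with no per-field varint state machine.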






[jira] [Updated] (CASSANDRA-14706) Support "IF EXISTS/IF NOT EXISTS" for all clauses of "ALTER TABLE"

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14706:
-
Component/s: CQL

> Support "IF EXISTS/IF NOT EXISTS" for all clauses of "ALTER TABLE"
> --
>
> Key: CASSANDRA-14706
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14706
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Dmitry Lazurkin
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Like so:
> {noformat}
> ALTER TABLE <table> ALTER <column> TYPE <type>;
> ALTER TABLE [ IF EXISTS ] <table> ADD [ IF NOT EXISTS ] <column> <type>;
> ALTER TABLE [ IF EXISTS ] <table> ADD [ IF NOT EXISTS ] ( <column> <type>, <column> <type> ... );
> ALTER TABLE [ IF EXISTS ] <table> DROP [ IF EXISTS ] <column>;
> ALTER TABLE [ IF EXISTS ] <table> DROP [ IF EXISTS ] ( <column>, <column> ... );
> ALTER TABLE [ IF EXISTS ] <table> RENAME [ IF EXISTS ] <column> TO <column>;
> ALTER TABLE [ IF EXISTS ] <table> WITH <option> = <value>;
> {noformat}
> I think a common IF EXISTS/IF NOT EXISTS clause for ADD/DROP/RENAME is better 
> than a clause for each column.
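Until such a grammar lands, clients typically emulate the conditional by checking the schema first (e.g. against system_schema.columns) and only issuing the ADDs that are missing. A minimal sketch; the helper name and shape are illustrative, not part of the ticket or of Cassandra itself.

```python
def guarded_add_columns(table, wanted, existing):
    """Client-side emulation of ALTER TABLE ... ADD IF NOT EXISTS.

    `wanted` maps column name -> CQL type; `existing` is the set of column
    names already present on `table` (e.g. read from system_schema.columns).
    Returns only the ALTER statements that are actually needed.
    """
    stmts = []
    for name, cql_type in sorted(wanted.items()):
        if name not in existing:
            stmts.append(f"ALTER TABLE {table} ADD {name} {cql_type};")
    return stmts
```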






[jira] [Resolved] (CASSANDRA-14414) Errors in Supercolumn support in 2.0 upgrade

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-14414.
--
Resolution: Information Provided

> Errors in Supercolumn support in 2.0 upgrade
> 
>
> Key: CASSANDRA-14414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14414
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ken Hancock
>Priority: Major
>
> In upgrading from 1.2.18 to 2.0.17, the following exceptions started showing 
> in cassandra log files when the 2.0.17 node is chosen as the coordinator.  
> CL=ALL reads will fail as a result.
> The following ccm script will create a 3-node cassandra cluster and upgrade 
> the 3rd node to cassandra 2.0.17
> {code}
> ccm create -n3 -v1.2.17 test
> ccm start
> ccm node1 cli -v -x "create keyspace test with 
> placement_strategy='org.apache.cassandra.locator.SimpleStrategy' and 
> strategy_options={replication_factor:3}"
> ccm node1 cli -v -x "use test;
>   create column family super with column_type = 'Super' and 
> key_validation_class='IntegerType' and comparator = 'IntegerType' and 
> subcomparator = 'IntegerType' and default_validation_class = 'AsciiType'"
> ccm node1 cli -v -x "use test;
>   create column family shadow with column_type = 'Super' and 
> key_validation_class='IntegerType' and comparator = 'IntegerType' and 
> subcomparator = 'IntegerType' and default_validation_class = 'AsciiType'"
> ccm node1 cli -v -x "use test;
>   set super[1][1][1]='1-1-1';
>   set super[1][1][2]='1-1-2';
>   set super[1][2][1]='1-2-1';
>   set super[1][2][2]='1-2-2';
>   set super[2][1][1]='2-1-1';
>   set super[2][1][2]='2-1-2';
>   set super[2][2][1]='2-2-1';
>   set super[2][2][2]='2-2-2';
>   set super[3][1][1]='3-1-1';
>   set super[3][1][2]='3-1-2';
>   "
> ccm flush
> ccm node3 stop
> ccm node3 setdir -v2.0.17
> ccm node3 start
> ccm node3 nodetool upgradesstables
> {code}
> The following python uses pycassa to exercise the range_slice Thrift API:
> {code}
> import pycassa
> from pycassa.pool import ConnectionPool
> from pycassa.columnfamily import ColumnFamily
> from pycassa import ConsistencyLevel
> pool = ConnectionPool('test', server_list=['127.0.0.3:9160'], max_retries=0)
> super = ColumnFamily(pool, 'super')
> print "fails with ClassCastException"
> super.get(1, columns=[1,2], read_consistency_level=ConsistencyLevel.ALL)
> print "fails with RuntimeException: Cannot convert filter to old super column 
> format..."
> super.get(1, column_start=2, column_finish=3, 
> read_consistency_level=ConsistencyLevel.ALL)
> {code}






[jira] [Updated] (CASSANDRA-13898) Stack overflow error with UDF using IBM JVM

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13898:
-
Component/s: Core

> Stack overflow error with UDF using IBM JVM
> ---
>
> Key: CASSANDRA-13898
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13898
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 3.11 with IBM JVM 8.0.5
>Reporter: Sumant Padbidri
>Priority: Major
>
> I'm using Cassandra 3.11 right out of the box (i.e. all default parameters) 
> with the IBM JVM 8.0.5. Using any UDF results in a stack overflow error. They 
> work fine with the Oracle JVM. "Create function" works, but using the 
> function in a query results in the error.
> CREATE TABLE test (
> id int,
> val1 int,
> val2 int,
> PRIMARY KEY(id)
> );
> INSERT INTO test(id, val1, val2) VALUES(1, 100, 200);
> INSERT INTO test(id, val1, val2) VALUES(2, 100, 300);
> INSERT INTO test(id, val1, val2) VALUES(3, 200, 150);
> CREATE OR REPLACE FUNCTION maxOf(current int, testvalue int)
> CALLED ON NULL INPUT
> RETURNS int
> LANGUAGE java
> AS $$return Math.max(current,testvalue);$$;
> SELECT id, val1, val2, maxOf(val1,val2) FROM test WHERE id = 1;
> Here's the stack trace from debug.log:
> java.lang.RuntimeException: java.lang.StackOverflowError
> at 
> org.apache.cassandra.cql3.functions.UDFunction.async(UDFunction.java:453) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.functions.UDFunction.executeAsync(UDFunction.java:398)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.functions.UDFunction.execute(UDFunction.java:298) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.selection.ScalarFunctionSelector.getOutput(ScalarFunctionSelector.java:61)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.selection.Selection$SelectionWithProcessing$1.getOutputRow(Selection.java:592)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.getOutputRow(Selection.java:430)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.build(Selection.java:417)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:763)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:400)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:378)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:251)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:79)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:217)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:248) 
> [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:233) 
> [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:522) 
> [na:1.8.0]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> 

[jira] [Updated] (CASSANDRA-14732) Range queries do not always query local node

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14732:
-
Component/s: Coordination

> Range queries do not always query local node
> 
>
> Key: CASSANDRA-14732
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14732
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benedict
>Priority: Minor
> Fix For: 4.0.x
>
>
> The logic for choosing to submit a local request is very peculiar, only doing 
> so if it is the only replica to be queried.  Going through the NIC to query 
> oneself otherwise is fairly strange, and surely a bug.
> I wonder if we should detect and warn on messages sent from/to ourselves, as 
> this is surely always a bug.
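The behaviour being argued for can be sketched as a pure function; the names and shape here are illustrative, not Cassandra's actual replica-plan code.

```python
def order_replicas_for_query(replicas, local_address):
    """If the coordinator is itself a replica for the range, query it locally
    instead of over the NIC, regardless of how many other replicas are
    contacted (rather than only when it is the sole replica).

    Returns (use_local, remote_replicas).
    """
    use_local = local_address in replicas
    remote = [r for r in replicas if r != local_address]
    return use_local, remote
```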






[jira] [Updated] (CASSANDRA-14517) Short read protection can cause partial updates to be read

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14517:
-
Component/s: Coordination

> Short read protection can cause partial updates to be read
> --
>
> Key: CASSANDRA-14517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14517
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> If a read is performed in two parts due to short read protection, and the 
> data being read is written to between reads, the coordinator will return a 
> partial update. Specifically, this will occur if a single partition batch 
> updates clustering values on both sides of the SRP break, or if a range 
> tombstone is written that deletes data on both sides of the break. At the 
> coordinator level, this breaks the expectation that updates to a partition 
> are atomic, and that you can’t see partial updates.






[jira] [Updated] (CASSANDRA-13606) Improve handling of 2i initialization failures

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13606:
-
Component/s: Secondary Indexes

> Improve handling of 2i initialization failures
> --
>
> Key: CASSANDRA-13606
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13606
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Secondary Indexes
>Reporter: Sergio Bossa
>Assignee: Sergio Bossa
>Priority: Major
> Fix For: 4.0
>
>
> CASSANDRA-10130 fixes the 2i build management, but initialization failures 
> are still not properly handled, most notably because:
> * Initialization failures make the index non-queryable, but it can still be 
> written to.
> * Initialization failures can be recovered via full rebuilds.
> Both points above are probably suboptimal: the initialization logic could be 
> more complex than just an index build, hence it shouldn't be made recoverable 
> via a simple rebuild, and a failure could leave the index fully unavailable 
> not just for reads, but for writes as well.
> So, we should better handle initialization failures by:
> * Allowing the index implementation to specify if unavailable for reads, 
> writes, or both. 
> * Providing a proper method to recover, distinct from index rebuilds.
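The first proposal could look roughly like a per-index availability flag that the implementation declares and the failure handler honours. A sketch under those assumptions; this is not the SecondaryIndexManager API.

```python
from enum import Flag, auto

class IndexAvailability(Flag):
    """What an initialization failure takes offline, as declared by the
    index implementation itself (illustrative only)."""
    NONE = 0
    READS = auto()
    WRITES = auto()
    ALL = READS | WRITES

def on_init_failure(declared_unavailable: IndexAvailability) -> IndexAvailability:
    """Return which operations remain allowed after a failed initialization."""
    return IndexAvailability.ALL & ~declared_unavailable
```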






[jira] [Updated] (CASSANDRA-14455) Transient Replication: Support replication factor changes

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14455:
-
Component/s: Coordination

> Transient Replication: Support replication factor changes
> -
>
> Key: CASSANDRA-14455
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14455
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> The initial refactor didn't allow any replication factor changes to 
> transiently replicated keyspaces aside from increasing the number of 
> transient replicas. We should add support for increasing/decreasing full and 
> transient replicas and add the tests and needed tooling. This should come 
> after the read and write path work.






[jira] [Updated] (CASSANDRA-14520) ClosedChannelException handled as FSError

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14520:
-
Component/s: Streaming and Messaging

> ClosedChannelException handled as FSError
> -
>
> Key: CASSANDRA-14520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14520
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Blake Eggleston
>Assignee: Jason Brown
>Priority: Major
> Fix For: 4.0
>
>
> After the messaging service netty refactor, I’ve seen a few instances where a 
> closed socket causes a ClosedChannelException (an IOException subclass) to be 
> thrown. The exception is caught by ChannelProxy, interpreted as a disk error, 
> and is then re-thrown as an FSError, causing the node to be shut down.
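The distinction being asked for, sketched in Python: ClosedChannelError stands in for Java's ClosedChannelException, and the classification is illustrative, not ChannelProxy's actual logic.

```python
class ClosedChannelError(OSError):
    """Stand-in for java.nio.channels.ClosedChannelException."""

def classify_io_error(exc: OSError) -> str:
    """A closed socket is a transient network condition, not a disk fault,
    so it must not escalate into an FSError-style node shutdown."""
    if isinstance(exc, ClosedChannelError):
        return "network"   # log it and tear down the stream/connection only
    return "disk"          # candidate for disk_failure_policy handling
```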






[jira] [Updated] (CASSANDRA-14307) Refactor commitlog

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14307:
-
Component/s: Core

> Refactor commitlog
> --
>
> Key: CASSANDRA-14307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14307
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Dikang Gu
>Priority: Major
>







[jira] [Updated] (CASSANDRA-14336) sstableloader fails if sstables contains removed columns

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14336:
-
Component/s: Tools

> sstableloader fails if sstables contains removed columns
> 
>
> Key: CASSANDRA-14336
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14336
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Hannu Kröger
>Assignee: Jaydeepkumar Chovatia
>Priority: Major
>
> If I copy the schema and try to load in sstables with sstableloader, loading 
> sometimes fails with
> {code:java}
> Exception in thread "main" org.apache.cassandra.tools.BulkLoadException: 
> java.lang.RuntimeException: Failed to list files in /tmp/test/bug3_dest-acdc
>     at org.apache.cassandra.tools.BulkLoader.load(BulkLoader.java:93)
>     at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:48)
> Caused by: java.lang.RuntimeException: Failed to list files in 
> /tmp/test/bug3_dest-acdc
>     at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:77)
>     at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:561)
>     at 
> org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:76)
>     at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:165)
>     at org.apache.cassandra.tools.BulkLoader.load(BulkLoader.java:80)
>     ... 1 more
> Caused by: java.lang.RuntimeException: Unknown column d during deserialization
>     at 
> org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:321)
>     at 
> org.apache.cassandra.io.sstable.format.SSTableReader.openForBatch(SSTableReader.java:440)
>     at 
> org.apache.cassandra.io.sstable.SSTableLoader.lambda$openSSTables$0(SSTableLoader.java:121)
>     at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.lambda$innerList$2(LogAwareFileLister.java:99)
>     at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
>     at java.util.TreeMap$EntrySpliterator.forEachRemaining(TreeMap.java:2969)
>     at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>     at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>     at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>     at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>     at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.innerList(LogAwareFileLister.java:101)
>     at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:73)
>     ... 5 more{code}
> This requires that we have dropped columns in the source table and sstables 
> exist from the "old schema" time.
> This can be very easily reproduced. I used following script:
> {code:java}
> KS=test
> SRCTABLE=bug3_source
> DESTTABLE=bug3_dest
> DATADIR=/usr/local/var/lib/cassandra/data
> TMPDIR=/tmp
> cqlsh -e "CREATE TABLE $KS.$SRCTABLE(a int primary key, b int, c int, d int);"
> cqlsh -e "CREATE TABLE $KS.$DESTTABLE(a int primary key, b int, c int);"
> cqlsh -e "INSERT INTO $KS.$SRCTABLE(a,b,c,d) values(1,2,3,4);"
> nodetool flush $KS $SRCTABLE
> cqlsh -e "ALTER TABLE $KS.$SRCTABLE DROP d;"
> nodetool flush $KS $SRCTABLE
> mkdir -p $TMPDIR/$KS/$DESTTABLE-acdc
> cp $DATADIR/$KS/$SRCTABLE-*/* $TMPDIR/$KS/$DESTTABLE-acdc
> sstableloader -d 127.0.0.1 $TMPDIR/$KS/$DESTTABLE-acdc{code}






[jira] [Updated] (CASSANDRA-14380) Cassandra crashes after fsync exception

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14380:
-
Component/s: Core

> Cassandra crashes after fsync exception
> ---
>
> Key: CASSANDRA-14380
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14380
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Geiger
>Priority: Critical
> Attachments: debug.log, debug.log.1.zip, 
> logs-from-cassandra-in-r97bb66e967-apiconnect-cc-0.txt
>
>
> Running Cassandra with a Rook Ceph filesystem within Kubernetes.  During 
> startup, the following warnings pop up in the debug log, and Cassandra then 
> crashes shortly afterwards and restarts.  It looks like before hitting this 
> error, it is doing a lot of writing and flushing:
> WARN [MemtableFlushWriter:2] 2018-04-11 14:34:42,748 NativeLibrary.java:328 - 
> fsync(666) failed, errorno (22) {}
> com.sun.jna.LastErrorException: [22] Invalid argument
>  at org.apache.cassandra.utils.NativeLibraryLinux.fsync(Native Method) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.utils.NativeLibraryLinux.callFsync(NativeLibraryLinux.java:107)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
>  at org.apache.cassandra.utils.NativeLibrary.trySync(NativeLibrary.java:317) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
>  at org.apache.cassandra.utils.SyncUtil.trySync(SyncUtil.java:179) 
> [apache-cassandra-3.11.0.jar:3.11.0]
>  at org.apache.cassandra.utils.SyncUtil.trySyncDir(SyncUtil.java:190) 
> [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.io.util.SequentialWriter.openChannel(SequentialWriter.java:107)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.io.util.SequentialWriter.(SequentialWriter.java:141)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.writeMetadata(BigTableWriter.java:402)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.access$300(BigTableWriter.java:53)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:368)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.prepareToCommit(SSTableWriter.java:281)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.prepareToCommit(SimpleSSTableMultiWriter.java:101)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1153)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1086)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
>  [na:1.8.0]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [na:1.8.0]
>  at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.0.jar:3.11.0]
>  at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$12.BCF32600.run(Unknown
>  Source) ~[na:na]
>  at java.lang.Thread.run(Thread.java:811) ~[na:2.9 (12-15-2017)]
>  
> Syslog shows the following 
> (logs-from-cassandra-in-r97bb66e967-apiconnect-cc-0.txt):
> INFO  [main] 2018-04-11 14:49:01,848 ColumnFamilyStore.java:406 - 
> Initializing apim.ur_to_op_by_op
> INFO  [MemoryMXBean notification dispatcher] 2018-04-11 14:49:25,889 
> GCInspector.java:284 - global GC in 206ms.  class storage: 28700680 -> 
> 28692744; miscellaneous non-heap storage: 49871216 -> 53570176; 
> nursery-allocate: 1296878920 -> 149116672; tenured-SOA: 140321968 -> 139143760
> #0: /opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x302a94) 
> [0x7f17e4f10a94]
> #1: /opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x306b2d) 
> [0x7f17e4f14b2d]
> #2: /opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so(+0xc82da) 
> [0x7f17e4cd62da]
> #3: /opt/ibm/java/jre/lib/amd64/compressedrefs/libj9prt29.so(+0x22056) 
> [0x7f17e6531056]
> #4: /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390) [0x7f17ed0de390]
> #5: /opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x2c4e1f) 
> [0x7f17e4ed2e1f]
> #6: /opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x158c04) 
> [0x7f17e4d66c04]
> #7: /opt/ibm/java/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x542d24) 
> [0x7f17e5150d24]
> #8: 

[jira] [Updated] (CASSANDRA-14124) Abstract storage engine API from Keyspace/CFS

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14124:
-
Component/s: Core

> Abstract storage engine API from Keyspace/CFS
> -
>
> Key: CASSANDRA-14124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14124
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Dikang Gu
>Priority: Major
>
> At this point, we hope to have finished the refactor of most components in 
> Cassandra, so we should be able to abstract a storage engine API, which 
> defines a clear boundary for the Cassandra storage engine.
> For now, refer to 
> https://docs.google.com/document/d/1suZlvhzgB6NIyBNpM9nxoHxz_Ri7qAm-UEO8v8AIFsc
>  for high-level designs.






[jira] [Updated] (CASSANDRA-14184) Cleanup cannot run before a node has joined the ring

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14184:
-
Component/s: Lifecycle

> Cleanup cannot run before a node has joined the ring
> 
>
> Key: CASSANDRA-14184
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14184
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Lifecycle
>Reporter: Thanh
>Priority: Minor
>
> Server log shows messages like
> [RMI TCP Connection(18)-127.0.0.1] 2018-01-20 08:12:24,900 
> CompactionManager.java:415 - Cleanup cannot run before a node has joined the 
> ring 
> which is alarming because it seems to indicate a problem with a node joining 
> the ring.  In fact, the message "Cleanup cannot run before a node has joined 
> the ring" can be produced even when there is _not_ a node joining the ring 
> (it looks like it can be produced when there are no sstables for the 
> keyspace.table in question).
> Request:
> Change the wording of the message since the message indicates there's some 
> problem due to node not joining the ring, even though that might not be the 
> case.






[jira] [Updated] (CASSANDRA-13988) Add a timeout field to EXECUTE / QUERY / BATCH messages

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13988:
-
Component/s: CQL

> Add a timeout field to EXECUTE / QUERY / BATCH messages
> ---
>
> Key: CASSANDRA-13988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13988
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Michaël Figuière
>Priority: Minor
>
> The request timeout at the coordinator level is currently statically 
> configured through the {{request_timeout_in_ms}} and 
> {{xxx_request_timeout_in_ms}} parameters in cassandra.yaml. There would be 
> some benefits in making it possible for the client to dynamically define it 
> through the CQL Protocol:
> * In practice, there's often a misalignment between the timeout configured in 
> Cassandra and in the client, leading to a non-optimal query execution flow, 
> where the coordinator continues to work while the client is no longer waiting, 
> or where the client waits too long for a potential response. The 99th 
> percentile latency can be significantly impacted by such issues. 
> * While the read timeout is typically statically configured on the Drivers, 
> on the Java Driver 3.x the developer is free to set a custom timeout using 
> {{ResultSetFuture#get(long, TimeUnit)}} which can lead to an extra 
> misalignment of timeouts with the coordinator. The Java Driver 4.x will make 
> the timeout configurable per query through its new {{DriverConfigProfile}} 
> abstraction.
> * It makes it possible for applications to shift to a "remaining time budget" 
> approach rather than the often inappropriate static timeout one. Also, the 
> Java Driver 4.x plans to change its definition of {{readTimeout}} from a per 
> execution attempt time to an overall query execution time. So the Driver 
> itself would also be able to work on a "remaining time budget" for each of 
> its execution attempts.
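The "remaining time budget" idea above could be sketched roughly as follows; the class and method names here are illustrative assumptions, not part of any existing driver or server API:

```java
// Hypothetical sketch of a per-request "remaining time budget": the client
// tracks one overall deadline and derives the timeout to send with each
// EXECUTE / QUERY / BATCH attempt from the time that is left.
public class TimeBudget {
    private final long deadlineNanos;

    public TimeBudget(long totalMillis) {
        // Fix the overall deadline once, when the logical request starts.
        this.deadlineNanos = System.nanoTime() + totalMillis * 1_000_000L;
    }

    // Milliseconds left before the overall deadline; 0 once exhausted.
    public long remainingMillis() {
        return Math.max(0L, (deadlineNanos - System.nanoTime()) / 1_000_000L);
    }

    public boolean expired() {
        return remainingMillis() == 0L;
    }
}
```

Each execution attempt (including retries) would then carry `remainingMillis()` in the message rather than a static per-attempt timeout, keeping client and coordinator aligned.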






[jira] [Updated] (CASSANDRA-14119) uTest cql3.ViewTest timeout

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14119:
-
Component/s: Testing

> uTest cql3.ViewTest timeout
> ---
>
> Key: CASSANDRA-14119
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14119
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Priority: Major
>  Labels: Testing
>
> It's failing a lot in 
> [CircleCI|https://circleci.com/gh/cooldoger/cassandra/163#tests/containers/0],
>  also locally:
> {noformat}
> $ ant test -Dtest.name=ViewTest
> ...
>  [parallel] 2017-12-13 22:23:27
>  [parallel] Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.111-b14 
> mixed mode):
>  [parallel]
>  [parallel] "Attach Listener" #453 daemon prio=9 os_prio=0 
> tid=0x7fc6d0002000 nid=0xd16b waiting on condition [0x]
>  [parallel]java.lang.Thread.State: RUNNABLE
>  [parallel]
>  [parallel]Locked ownable synchronizers:
>  [parallel] - None
>  [parallel]
>  [parallel] "MutationStage-6" #404 daemon prio=1 os_prio=0 
> tid=0x7fc584003800 nid=0xca90 waiting on condition [0x7fc69c816000]
>  [parallel]java.lang.Thread.State: WAITING (parking)
>  [parallel] at sun.misc.Unsafe.park(Native Method)
>  [parallel] at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  [parallel] at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:87)
>  [parallel] at java.lang.Thread.run(Thread.java:745)
>  [parallel]
>  [parallel]Locked ownable synchronizers:
>  [parallel] - None
>  [parallel]
> ...
> [junit] Testsuite: org.apache.cassandra.cql3.ViewTest
> [junit] Testsuite: org.apache.cassandra.cql3.ViewTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit]
> [junit] Testcase: 
> org.apache.cassandra.cql3.ViewTest:testBuilderWidePartition:  Caused an 
> ERROR
> [junit] Timeout occurred. Please note the time in the report does not 
> reflect the time until the timeout.
> [junit] junit.framework.AssertionFailedError: Timeout occurred. Please 
> note the time in the report does not reflect the time until the timeout.
> [junit] at java.lang.Thread.run(Thread.java:745)
> [junit]
> [junit]
> [junit] Test org.apache.cassandra.cql3.ViewTest FAILED (timeout)
> {noformat}






[jira] [Updated] (CASSANDRA-14213) increase speed of search in lost data

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14213:
-
Component/s: Local Write-Read Paths

> increase speed of search in lost data
> -
>
> Key: CASSANDRA-14213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14213
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: mahdi manavi
>Priority: Major
>
> As described in the "search in lost data" section, we currently have to get a 
> tombstone from at least one node. We could instead use a database strategy 
> that keeps deleted data in a dedicated table; during the filtering process, a 
> bloom filter would be consulted and the table searched, and if the search has 
> a result it would be reported to the user. This approach would increase 
> search speed and decrease search costs.
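The bloom-filter-over-deleted-keys idea could be sketched like this; it is a minimal, self-contained illustration (all names are hypothetical), not Cassandra's tombstone handling:

```java
import java.util.BitSet;

// Minimal Bloom filter over deleted keys: a hit means "possibly deleted"
// (go search the deleted-data table), a miss means "definitely not deleted"
// (skip the table entirely), which is where the speedup would come from.
public class DeletedKeyFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public DeletedKeyFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive k bit positions from two base hashes (Kirsch-Mitzenmacher style).
    private int slot(String key, int i) {
        int h1 = key.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9e3779b9;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void markDeleted(String key) {
        for (int i = 0; i < hashes; i++) bits.set(slot(key, i));
    }

    public boolean possiblyDeleted(String key) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(slot(key, i))) return false;
        return true;
    }
}
```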






[jira] [Updated] (CASSANDRA-14278) Testing

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14278:
-
Component/s: Observability

> Testing
> ---
>
> Key: CASSANDRA-14278
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14278
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Observability, Testing
>Reporter: Sumant Sahney
>Priority: Major
>
> Test to see if all the logs are written correctly.






[jira] [Updated] (CASSANDRA-14277) Update Log Files in Patches / Modularly

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14277:
-
Component/s: Observability

> Update Log Files in Patches / Modularly 
> 
>
> Key: CASSANDRA-14277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14277
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Observability
>Reporter: Sumant Sahney
>Priority: Major
>
> Make changes in the Logs.






[jira] [Updated] (CASSANDRA-14278) Testing

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14278:
-
Component/s: Testing

> Testing
> ---
>
> Key: CASSANDRA-14278
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14278
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Observability, Testing
>Reporter: Sumant Sahney
>Priority: Major
>
> Test to see if all the logs are written correctly.






[jira] [Updated] (CASSANDRA-14276) Walk through the code

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14276:
-
Component/s: Observability

>  Walk through the code
> -
>
> Key: CASSANDRA-14276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14276
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Observability
>Reporter: Sumant Sahney
>Priority: Major
>
> 1. Walk through the code and understand each module's logging size.






[jira] [Updated] (CASSANDRA-14236) C* should gradually recycle open connections after hot reloading SSL Certificates

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14236:
-
Component/s: Streaming and Messaging

> C* should gradually recycle open connections after hot reloading SSL 
> Certificates
> -
>
> Key: CASSANDRA-14236
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14236
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Minor
>  Labels: security
>
> Currently the way SSL certificate hot reloading is implemented, it only 
> applies the new certificates to new connections. Open connections are not 
> terminated. Immediate termination of these connections is undesirable as it 
> will cause a thundering herd problem. We need a way to gradually drain 
> existing connections so that the new SSL certificates are used by all 
> connections.
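One way to sketch the gradual draining described above is to spread each connection's close time uniformly over a drain window; `closeDelaysMillis` is a hypothetical helper for illustration, not Cassandra's actual code:

```java
import java.util.Random;

// Spread connection-close times over a drain window so that, after an SSL
// certificate reload, reconnects do not all hit the cluster simultaneously
// (avoiding the thundering-herd problem noted in the description).
public class DrainScheduler {
    // Returns one close delay per open connection, uniform in [0, windowMillis).
    public static long[] closeDelaysMillis(int connectionCount, long windowMillis,
                                           Random rng) {
        long[] delays = new long[connectionCount];
        for (int i = 0; i < connectionCount; i++) {
            delays[i] = (long) (rng.nextDouble() * windowMillis);
        }
        return delays;
    }
}
```

A caller would then schedule each connection's close (and reconnect with the new certificates) at its assigned delay.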






[jira] [Updated] (CASSANDRA-14343) CF -> Tablename Comment cleanup

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14343:
-
Component/s: Core

> CF -> Tablename Comment cleanup
> ---
>
> Key: CASSANDRA-14343
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14343
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Major
>
> There are sporadic comments in the codebase referring to column families; 
> these should be updated to refer to tables.






[jira] [Updated] (CASSANDRA-14316) Read repair mutations should be sent to pending nodes

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14316:
-
Component/s: Coordination

> Read repair mutations should be sent to pending nodes
> -
>
> Key: CASSANDRA-14316
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14316
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Blake Eggleston
>Priority: Major
>
> Since read repair doesn't mirror mutations to pending endpoints, it seems 
> likely that there's an edge case that can break the monotonic quorum read 
> guarantee blocking read repair is supposed to provide.
> Assume there are 3 nodes (A, B, & C) which replicate a token range. A new 
> node D is added, which will take over some of A's token range. During the 
> bootstrap of D, if there's a failed write that only makes it to a single node 
> (A) after bootstrap has started, then there's a quorum read including A & B, 
> which replicates that value to B. If A is removed when D finishes 
> bootstrapping, a quorum read including node C & D will not see the value 
> returned in the last quorum read which queried A & B. 
> Table to illustrate:
> |state    | A   | B | C | D       |
> |1 begin  |     |   |   | pending |
> |2 write  | 1   |   |   | pending |
> |3 repair | 1   | 1 |   | pending |
> |4 joined | n/a | 1 |   |         |






[jira] [Updated] (CASSANDRA-14342) Refactor getColumnFamilyStore() to getTable()

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14342:
-
Component/s: Local Write-Read Paths

> Refactor getColumnFamilyStore() to getTable()
> -
>
> Key: CASSANDRA-14342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14342
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Major
>







[jira] [Updated] (CASSANDRA-14114) uTest failed: NettyFactoryTest.createServerChannel_UnbindableAddress()

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14114:
-
Component/s: Testing

> uTest failed: NettyFactoryTest.createServerChannel_UnbindableAddress()
> --
>
> Key: CASSANDRA-14114
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14114
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Dinesh Joshi
>Priority: Minor
>  Labels: Testing
>
> {noformat}
> [junit] Testcase: 
> createServerChannel_UnbindableAddress(org.apache.cassandra.net.async.NettyFactoryTest):
>FAILED
> [junit] Expected exception: 
> org.apache.cassandra.exceptions.ConfigurationException
> [junit] junit.framework.AssertionFailedError: Expected exception: 
> org.apache.cassandra.exceptions.ConfigurationException
> [junit]
> [junit]
> [junit] Test org.apache.cassandra.net.async.NettyFactoryTest FAILED
> {noformat}
> I'm unable to reproduce the problem on a Mac or CircleCI, but on some hosts 
> (Linux 4.4.38), it's able to bind IP {{1.1.1.1}}, or any other valid IP 
> (which breaks the testcase):
> {noformat}
> ...
> [junit] INFO  [main] 2017-12-13 21:20:48,470 NettyFactory.java:190 - 
> Starting Messaging Service on /1.1.1.1:9876 , encryption: disabled
> ...
> {noformat}
> Is it due to a network/kernel configuration?
> +[~jasobrown]






[jira] [Updated] (CASSANDRA-14095) New configuration settings using duration types

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14095:
-
Component/s: Configuration

> New configuration settings using duration types
> ---
>
> Key: CASSANDRA-14095
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14095
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Major
>
> All the _ms configuration params are difficult to work with.  We can allow 
> users to provide a duration type, like 3h, for settings instead of requiring 
> _ms based settings.
> This first jira is to allow for cassandra.yaml to take duration types instead 
> of ms based settings.  
> Given a setting, blah_ms, the new setting should be {{blah}} and require a 
> duration like {{3h}}.  The old _ms settings should continue to work and error 
> if both are used.
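A minimal sketch of the proposed duration parsing (the unit table and class name are assumptions for illustration, not the final cassandra.yaml implementation):

```java
import java.util.Map;

// Parse duration strings like "3h" or "250ms" into milliseconds, so a
// setting {{blah}} can accept "3h" where {{blah_ms}} required 10800000.
public class DurationParser {
    private static final Map<String, Long> UNITS = Map.of(
        "ms", 1L,
        "s", 1_000L,
        "m", 60_000L,
        "h", 3_600_000L,
        "d", 86_400_000L);

    public static long toMillis(String value) {
        int i = 0;
        while (i < value.length() && Character.isDigit(value.charAt(i))) i++;
        long amount = Long.parseLong(value.substring(0, i));
        Long factor = UNITS.get(value.substring(i));
        if (factor == null)
            throw new IllegalArgumentException("Unknown duration unit in: " + value);
        return amount * factor;
    }
}
```

For example, `toMillis("3h")` yields the same value a user would previously have written as `10800000` in a `_ms` setting.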






[jira] [Updated] (CASSANDRA-14282) Add PID file directive in /etc/init.d/cassandra for debian

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14282:
-
Component/s: Packaging

> Add PID file directive in /etc/init.d/cassandra for debian
> --
>
> Key: CASSANDRA-14282
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14282
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Christian Becker
>Priority: Major
>
> The fix for CASSANDRA-13434 is also required on Debian. Debian doesn't care 
> about chkconfig headers, but it also relies on {{systemd-sysv-generator}}, 
> which uses these headers.
>  
> So the pidfile comment needs to be added on Debian too.






[jira] [Commented] (CASSANDRA-13623) Official cassandra docker image: Connections closing and timing out inserting/deleting data

2018-11-18 Thread C. Scott Andreas (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691158#comment-16691158
 ] 

C. Scott Andreas commented on CASSANDRA-13623:
--

Hi [~a8775], sorry for the delay in reply on this issue. This bug tracker is 
primarily used by contributors of the Apache Cassandra project toward 
development of the database itself. If this is still an issue, can you reach 
out to the user's list or public IRC channel? A member of the community may be 
able to help.

Here's a page with information on the best channels for support: 
http://cassandra.apache.org/community/

> Official cassandra docker image: Connections closing and timing out 
> inserting/deleting data 
> 
>
> Key: CASSANDRA-13623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13623
> Project: Cassandra
>  Issue Type: Bug
> Environment: Official cassandra docker image 3.10 running under 
> Windows10, no extra configuration
>Reporter: a8775
>Priority: Major
>
> The problem looks like https://github.com/docker-library/cassandra/issues/101
> After some time the updates start timing out, e.g. when updating one row in a 
> loop every 500ms, the problem appears after about 12h. This has never 
> happened when working with a native installation of Cassandra.
> The original issue is about "Docker for Mac", but this time it is on 
> Windows10. I couldn't find a reference to a similar problem under Windows.






[jira] [Resolved] (CASSANDRA-13623) Official cassandra docker image: Connections closing and timing out inserting/deleting data

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-13623.
--
Resolution: Information Provided

> Official cassandra docker image: Connections closing and timing out 
> inserting/deleting data 
> 
>
> Key: CASSANDRA-13623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13623
> Project: Cassandra
>  Issue Type: Bug
> Environment: Official cassandra docker image 3.10 running under 
> Windows10, no extra configuration
>Reporter: a8775
>Priority: Major
>
> The problem looks like https://github.com/docker-library/cassandra/issues/101
> After some time the updates start timing out, e.g. when updating one row in a 
> loop every 500ms, the problem appears after about 12h. This has never 
> happened when working with a native installation of Cassandra.
> The original issue is about "Docker for Mac", but this time it is on 
> Windows10. I couldn't find a reference to a similar problem under Windows.






[jira] [Updated] (CASSANDRA-13803) dtest failure: compaction_test.TestCompaction_with_LeveledCompactionStrategy.sstable_deletion_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13803:
-
Component/s: Testing

> dtest failure: 
> compaction_test.TestCompaction_with_LeveledCompactionStrategy.sstable_deletion_test
> --
>
> Key: CASSANDRA-13803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13803
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Marcus Eriksson
>Priority: Major
>  Labels: dtest
>
> http://cassci.datastax.com/job/cassandra-3.0_dtest/977/testReport/compaction_test/TestCompaction_with_LeveledCompactionStrategy/sstable_deletion_test






[jira] [Updated] (CASSANDRA-13829) upgrade dtest failure: upgrade_tests.cql_tests.TestCQL....multiordering_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13829:
-
Component/s: Testing

> upgrade dtest failure: upgrade_tests.cql_tests.TestCQLmultiordering_test
> 
>
> Key: CASSANDRA-13829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13829
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Marcus Eriksson
>Priority: Major
>  Labels: dtest, upgrade-dtest
>
> http://cassci.datastax.com/job/cassandra-3.0_dtest_upgrade/112/testReport/upgrade_tests/cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/multiordering_test






[jira] [Updated] (CASSANDRA-13536) Table space repair fails

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13536:
-
Component/s: Repair

> Table space repair fails
> 
>
> Key: CASSANDRA-13536
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13536
> Project: Cassandra
>  Issue Type: Bug
>  Components: Repair
>Reporter: Igmar Palsenberg
>Priority: Major
> Attachments: cassandra.system.log, nodetool.log
>
>
> I've somehow ended up with a corrupt keyspace. Repairing it always fails. 
> I've attached the logs for the nodetool repair command, and the system.log of 
> a mentioned node.
> Things tried:
> 1) Restarting all nodes prior to a repair attempt
> 2) nodetool repair --in-dc 
> 3) nodetool repair on a node in that DC
> Setup : 6 nodes in one DC, 4 in the other. Both DC's are in the same cluster.
> In case it's handy: I also have debug.log files.






[jira] [Updated] (CASSANDRA-13571) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13571:
-
Component/s: Testing

> test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
> ---
>
> Key: CASSANDRA-13571
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13571
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Hamm
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_dtest/119/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test
> {noformat}
> Unexpected error in node2 log, error: 
> ERROR [Native-Transport-Requests-2] 2017-06-01 11:04:00,350 Message.java:625 
> - Unexpected exception during request; channel = [id: 0x4f42bdd4, 
> L:/127.0.0.2:9042 - R:/127.0.0.1:47376]
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
> consistency level QUORUM
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:513)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310)
>  ~[main/:na]
>   at org.apache.cassandra.service.ClientState.login(ClientState.java:271) 
> ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_131]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot 
> achieve consistency level QUORUM
>   at 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.(StorageProxy.java:1774)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1736) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1682) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1597) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:1006)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:277)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:247)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:521)
>  ~[main/:na]
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503)
>  ~[main/:na]
>   ... 13 common frames omitted
> Unexpected error in node2 log, error: 
> ERROR [Native-Transport-Requests-1] 2017-06-01 11:04:01,552 Message.java:625 
> - Unexpected exception during request; channel = [id: 0x4dab9531, 
> L:/127.0.0.2:9042 - R:/127.0.0.1:47400]
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
> consistency level QUORUM
>   at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:513)
>  ~[main/:na]
>   at 
> 

[jira] [Commented] (CASSANDRA-13575) Snapshot fails on IndexInfo

2018-11-18 Thread C. Scott Andreas (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691159#comment-16691159
 ] 

C. Scott Andreas commented on CASSANDRA-13575:
--

Hi [~hkroger], sorry for the delay in reply on this issue. This bug tracker is 
primarily used by contributors of the Apache Cassandra project toward 
development of the database itself. If this is still an issue, can you reach 
out to the user's list or public IRC channel? A member of the community may be 
able to help.

Here's a page with information on the best channels for support: 
http://cassandra.apache.org/community/

> Snapshot fails on IndexInfo
> ---
>
> Key: CASSANDRA-13575
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13575
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Major
>
> Snapshot creation fails on IndexInfo table. This has happened in several 
> Cassandra environments.
> There is also the Stratio Lucene index 2.2.3.1 installed. I don't know if 
> that matters.
> {code}
> [root@host1 IndexInfo-9f5c6374d48532299a0a5094af9ad1e3]# nodetool snapshot -t 
> testsnapshot
> Requested creating snapshot(s) for [all keyspaces] with snapshot name 
> [testsnapshot]
> error: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> -- StackTrace --
> java.lang.RuntimeException: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:85)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.createLinks(SSTableReader.java:1763)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2328)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2453)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2443)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:198)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2604)
> at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 

[jira] [Resolved] (CASSANDRA-13575) Snapshot fails on IndexInfo

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-13575.
--
Resolution: Information Provided

> Snapshot fails on IndexInfo
> ---
>
> Key: CASSANDRA-13575
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13575
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Major
>
> Snapshot creation fails on the IndexInfo table. This has happened in several 
> Cassandra environments.
> The Stratio Lucene index (version 2.2.3.1) is also installed; I don't know 
> whether that matters.
> {code}
> [root@host1 IndexInfo-9f5c6374d48532299a0a5094af9ad1e3]# nodetool snapshot -t 
> testsnapshot
> Requested creating snapshot(s) for [all keyspaces] with snapshot name 
> [testsnapshot]
> error: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> -- StackTrace --
> java.lang.RuntimeException: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:85)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.createLinks(SSTableReader.java:1763)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2328)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2453)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2443)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:198)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2604)
> at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> 

[jira] [Updated] (CASSANDRA-13582) test failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_AllVersions_EndsAt_Trunk_HEAD.rolling_upgrade_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13582:
-
Component/s: Testing

> test failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_AllVersions_EndsAt_Trunk_HEAD.rolling_upgrade_test
> -
>
> Key: CASSANDRA-13582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13582
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Hamm
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_large_dtest/39/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV3Upgrade_AllVersions_EndsAt_Trunk_HEAD/rolling_upgrade_test
> {noformat}
> Error Message
> Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', ['upgradesstables', 
> '-a']] exited with non-zero status; exit status: 2; 
> stderr: error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 279, in rolling_upgrade_test
> self.upgrade_scenario(rolling=True)
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 345, in upgrade_scenario
> self.upgrade_to_version(version_meta, partial=True, nodes=(node,))
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 446, in upgrade_to_version
> node.nodetool('upgradesstables -a')
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 792, in nodetool
> return handle_external_tool_process(p, ['nodetool', '-h', 'localhost', 
> '-p', str(self.jmx_port), cmd.split()])
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 2018, in handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> 'Subprocess [\'nodetool\', \'-h\', \'localhost\', \'-p\', \'7100\', 
> [\'upgradesstables\', \'-a\']] exited with non-zero status; exit status: 2; 
> \nstderr: error: null\n-- StackTrace --\njava.lang.AssertionError\n\tat 
> 

[jira] [Updated] (CASSANDRA-13485) Better handle IO errors on 3.0+ flat files

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13485:
-
Component/s: Local Write-Read Paths

> Better handle IO errors on 3.0+ flat files 
> ---
>
> Key: CASSANDRA-13485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13485
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Major
>
> In 3.0, hints and compaction transaction data both move into flat files. Like 
> every other part of Cassandra, we can have IO errors either reading or 
> writing those files, and we should handle IO exceptions on them properly 
> (including respecting the disk failure policies).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13604) test failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13604:
-
Component/s: Testing

> test failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> -
>
> Key: CASSANDRA-13604
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13604
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Hamm
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/957/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> {noformat}
> Error Message
> Expected [['IN_PROGRESS']] from SELECT bootstrapped FROM system.local WHERE 
> key='local', but got [[u'COMPLETED']]
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 321, in 
> resumable_bootstrap_test
> assert_bootstrap_state(self, node3, 'IN_PROGRESS')
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 300, in 
> assert_bootstrap_state
> assert_one(session, "SELECT bootstrapped FROM system.local WHERE 
> key='local'", [expected_bootstrap_state])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> "Expected [['IN_PROGRESS']] from SELECT bootstrapped FROM system.local WHERE 
> key='local', but got [[u'COMPLETED']]\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-wg9qHk\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ncassandra.cluster: INFO: New Cassandra host  datacenter1> discovered\ncassandra.cluster: INFO: New Cassandra host  127.0.0.2 datacenter1> discovered\n- >> end captured 
> logging << -"
> {noformat}
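The assertion helper visible in the stack trace can be sketched as follows. This is a reconstruction from the `assert_one` frames above (the real helper lives in cassandra-dtest's tools/assertions.py), not the actual code:

```python
def assert_one(rows, expected, query="<query>"):
    """Reconstruction of the dtest assert_one seen in the trace: the query
    must return exactly the one expected row. The failure above means node3
    had already finished bootstrapping ('COMPLETED') by the time the test
    checked for 'IN_PROGRESS'."""
    assert rows == [expected], \
        "Expected {} from {}, but got {}".format([expected], query, rows)
```

Calling `assert_one([['IN_PROGRESS']], ['IN_PROGRESS'])` passes, while the `COMPLETED` result above produces the quoted AssertionError.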






[jira] [Updated] (CASSANDRA-13586) test failure in paxos_tests.TestPaxos.replica_availability_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13586:
-
Component/s: Testing

> test failure in paxos_tests.TestPaxos.replica_availability_test
> ---
>
> Key: CASSANDRA-13586
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13586
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Hamm
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_dtest/122/testReport/paxos_tests/TestPaxos/replica_availability_test
> {noformat}
> Error Message
> errors={: WriteTimeout('Error from server: 
> code=1100 [Coordinator node timed out waiting for replica nodes\' responses] 
> message="Operation timed out - received only 1 responses." 
> info={\'received_responses\': 1, \'required_responses\': 2, \'consistency\': 
> \'SERIAL\'}',)}, last_host=127.0.0.1
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/paxos_tests.py", line 51, in 
> replica_availability_test
> assert_unavailable(session.execute, "INSERT INTO test (k, v) VALUES (2, 
> 2) IF NOT EXISTS")
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 84, in 
> assert_unavailable
> _assert_exception(fun, *args, expected=(Unavailable, WriteTimeout, 
> WriteFailure, ReadTimeout, ReadFailure))
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 62, in 
> _assert_exception
> raise e
> 'errors={: WriteTimeout(\'Error from server: 
> code=1100 [Coordinator node timed out waiting for replica nodes\\\' 
> responses] message="Operation timed out - received only 1 responses." 
> info={\\\'received_responses\\\': 1, \\\'required_responses\\\': 2, 
> \\\'consistency\\\': \\\'SERIAL\\\'}\',)}, last_host=127.0.0.1\n
> {noformat}
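The failure mode is clearer with a sketch of the helper named in the trace: it only passes if the call raises one of the expected coordinator errors, and re-raises anything else (the `raise e` frame). The error string above looks like the repr of a driver-level wrapper exception, which the expected tuple would not cover. A reconstruction under that assumption, with stand-in exception classes:

```python
# Stand-ins for the driver exception types named in the dtest helper.
class Unavailable(Exception): pass
class WriteTimeout(Exception): pass
class WriteFailure(Exception): pass
class ReadTimeout(Exception): pass
class ReadFailure(Exception): pass

def assert_unavailable(fun, *args):
    """Reconstruction of the dtest helper from the stack trace above."""
    expected = (Unavailable, WriteTimeout, WriteFailure,
                ReadTimeout, ReadFailure)
    try:
        fun(*args)
    except expected:
        return  # an expected coordinator-side error: the test passes
    # A call that raises any other exception type propagates out of the
    # try block untouched; a call that raises nothing fails here.
    raise AssertionError("call did not raise any of %r" % (expected,))
```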






[jira] [Updated] (CASSANDRA-13491) Emit metrics for JVM safepoint pause

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13491:
-
Component/s: Metrics

> Emit metrics for JVM safepoint pause
> 
>
> Key: CASSANDRA-13491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13491
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Metrics
>Reporter: Simon Zhou
>Priority: Major
>
> GC pauses are not the only source of JVM-induced latency. In one of our 
> recent production issues, the GC metrics looked good (some pauses >200ms, the 
> longest 500ms), but the GC logs showed periodic pauses like this:
> {code}
> 2017-04-26T01:51:29.420+: 352535.998: Total time for which application 
> threads were stopped: 19.8835870 seconds, Stopping threads took: 19.7842073 
> seconds
> {code}
> A delay this large points to a JVM malfunction, but it caused some request 
> timeouts. So I'm suggesting we add support for reporting safepoint pauses for 
> better observability. Two problems, though:
> 1. This depends on the JVM; some JVMs may not expose these internal MBeans. 
> The same is true of the existing GCInspector.
> 2. HotSpot exposes HotspotRuntime as an internal MBean from which we can read 
> safepoint pauses, but it has no notification support: I got the error "MBean 
> sun.management:type=HotspotRuntime does not implement 
> javax.management.NotificationBroadcaster" when trying to register a listener. 
> This means we will need to poll HotspotRuntime for the safepoint pauses 
> periodically.
> Reference:
> http://blog.ragozin.info/2012/10/safepoints-in-hotspot-jvm.html
> Anyone think we should support this?
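Until such a metric exists, pauses like the one quoted in the description can be pulled out of the GC log. A minimal sketch, assuming the HotSpot `-XX:+PrintGCApplicationStoppedTime` log format shown above:

```python
import re

STOPPED_RE = re.compile(
    r"Total time for which application threads were stopped: "
    r"([0-9.]+) seconds")

def safepoint_pauses(gc_log_lines, threshold_s=1.0):
    """Extract application-stopped times (seconds) at or above a threshold
    from HotSpot GC log lines. Illustrative stopgap, not a Cassandra API."""
    pauses = []
    for line in gc_log_lines:
        m = STOPPED_RE.search(line)
        if m and float(m.group(1)) >= threshold_s:
            pauses.append(float(m.group(1)))
    return pauses
```

Run against the log line in the description, this reports the ~19.88s pause that the GC metrics missed.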






[jira] [Commented] (CASSANDRA-14616) cassandra-stress write hangs with default options

2018-11-18 Thread Stefania (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691164#comment-16691164
 ] 

Stefania commented on CASSANDRA-14616:
--

[~Yarnspinner], [~jay.zhuang] are you OK with committing Jay's approach? I 
don't mind too much which approach we take; both are OK, it's just a matter of 
picking a default value.

Jay, you are a committer, correct? So if Jeremy is OK with committing your 
patch, I assume you would prefer to merge it yourself?

> cassandra-stress write hangs with default options
> -
>
> Key: CASSANDRA-14616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Stress
>Reporter: Chris Lohfink
>Assignee: Jeremy
>Priority: Major
>
> Cassandra stress sits there for an incredibly long time after connecting to 
> JMX. To reproduce: {code}./tools/bin/cassandra-stress write{code}
> If you give it a -n it's not as bad, which is why dtests etc. don't seem to 
> be impacted. This does not occur in the 3.0 branch, but does in 3.11 and trunk.






[jira] [Updated] (CASSANDRA-13258) Rethink read-time defragmentation introduced in 1.1 (CASSANDRA-2503)

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13258:
-
Component/s: Compaction

> Rethink read-time defragmentation introduced in 1.1 (CASSANDRA-2503)
> 
>
> Key: CASSANDRA-13258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13258
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Nate McCall
>Priority: Major
>
> tl;dr: we issue a Mutation(!) on a read when using STCS and the iterator 
> encounters more than minCompactionThreshold SSTables. (See 
> org/apache/cassandra/db/SinglePartitionReadCommand.java:782)
> I can see a couple of use cases where this *might* be useful, but from a 
> practical standpoint, it is an excellent way to exacerbate compaction 
> falling behind.
> With the introduction of other, purpose-built compaction strategies, I would 
> be interested to hear why anyone still considers this a good idea. Note 
> that we only do it for STCS, so at best we are inconsistent. 
> There are some interesting comments on CASSANDRA-10342 regarding this as well.
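The trigger being questioned can be sketched as follows. The strategy name and the default threshold of 4 are assumptions drawn from STCS defaults, not a copy of the code at SinglePartitionReadCommand.java:782:

```python
def should_defragment(strategy, sstables_hit, min_compaction_threshold=4):
    """Sketch of the read-time defragmentation trigger described above:
    under STCS only, a read that touched more sstables than the table's
    minimum compaction threshold writes the merged row back (a Mutation on
    the read path) so that the next read hits fewer files."""
    return (strategy == "SizeTieredCompactionStrategy"
            and sstables_hit > min_compaction_threshold)
```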






[jira] [Resolved] (CASSANDRA-13242) Null Pointer Exception when upgrading from 2.1.13 to 3.9

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-13242.
--
Resolution: Information Provided

> Null Pointer Exception when upgrading from 2.1.13 to 3.9
> 
>
> Key: CASSANDRA-13242
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13242
> Project: Cassandra
>  Issue Type: Bug
>Reporter: JianwenSun
>Priority: Major
>
> We get an error during startup when upgrading from 2.1.13 to 3.9, similar to 
> CASSANDRA-11008. Help, please.
> INFO  09:44:54 Migrating legacy hints to new storage
> INFO  09:44:54 Forcing a major compaction of system.hints table
> INFO  09:44:54 Writing legacy hints to the new storage
> Exception (java.lang.NullPointerException) encountered during startup: null
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.serializers.Int32Serializer.deserialize(Int32Serializer.java:31)
>   at 
> org.apache.cassandra.serializers.Int32Serializer.deserialize(Int32Serializer.java:25)
>   at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:115)
>   at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getInt(UntypedResultSet.java:288)
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.convertLegacyHint(LegacyHintsMigrator.java:197)
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHintsInternal(LegacyHintsMigrator.java:175)
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHints(LegacyHintsMigrator.java:158)
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHints(LegacyHintsMigrator.java:151)
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHints(LegacyHintsMigrator.java:142)
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.lambda$migrateLegacyHints$1(LegacyHintsMigrator.java:128)
>   at java.lang.Iterable.forEach(Iterable.java:75)
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHints(LegacyHintsMigrator.java:128)
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrate(LegacyHintsMigrator.java:96)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:334)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730)
> ERROR 09:44:54 Exception encountered during startup
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.serializers.Int32Serializer.deserialize(Int32Serializer.java:31)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.serializers.Int32Serializer.deserialize(Int32Serializer.java:25)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:115) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getInt(UntypedResultSet.java:288)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.convertLegacyHint(LegacyHintsMigrator.java:197)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHintsInternal(LegacyHintsMigrator.java:175)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHints(LegacyHintsMigrator.java:158)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHints(LegacyHintsMigrator.java:151)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHints(LegacyHintsMigrator.java:142)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.lambda$migrateLegacyHints$1(LegacyHintsMigrator.java:128)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_121]
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrateLegacyHints(LegacyHintsMigrator.java:128)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.hints.LegacyHintsMigrator.migrate(LegacyHintsMigrator.java:96)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:334) 
> [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730) 
> [apache-cassandra-3.9.jar:3.9]
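The top frame shows `Int32Serializer.deserialize` dereferencing its input, so the NPE means the legacy hint row handed a null value to `getInt`. A sketch of that step in Python, with the null guard the Java call site lacks; the name mirrors the class in the trace but the code is illustrative:

```python
import struct

def deserialize_int32(buf):
    """Decode a big-endian 32-bit int the way Int32Serializer does, but
    tolerate the null/empty cell that triggers the NPE during legacy hint
    migration."""
    if buf is None or len(buf) == 0:
        return None  # the guard missing on the Java path
    return struct.unpack(">i", buf)[0]
```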




[jira] [Updated] (CASSANDRA-13251) testall failure in org.apache.cassandra.dht.SplitterTest.randomSplitTestVNodesRandomPartitioner-compression

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13251:
-
Component/s: Testing

> testall failure in 
> org.apache.cassandra.dht.SplitterTest.randomSplitTestVNodesRandomPartitioner-compression
> ---
>
> Key: CASSANDRA-13251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13251
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Priority: Major
>  Labels: test-failure, testall
> Attachments: TEST-org.apache.cassandra.dht.SplitterTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/90/testReport/org.apache.cassandra.dht/SplitterTest/randomSplitTestVNodesRandomPartitioner_compression
> {code}
> Error Message
> Could not split 178 tokens with rf=2 into 5 parts 
> (localRanges=[(66983521362719266944456123083914511,85081278311930091024145847217249230],
>  (94621693294701866050131421626890369,155147924265201495106534043294393024], 
> (176538082184963300389348242162332035,305792111240343398238253450510339347], 
> (479458943131028805264977024868840564,744055653641599802914615283831492936], 
> (810589196372603073528089393161162866,831830538079653220048616862854206345], 
> (884694272972916497279600499364771720,917906129467813841720691087209229564], 
> (1190363158655498033200868362034857246,1343415933811428449688964742003709272],
>  
> (1636900237997891740636664883920747008,1753727385092156556836442591826435605],
>  
> (180124568401227495084977300291690,2528306899457406839219749257884865888],
>  
> (3150748204757995073614524576195208894,3224051670062256982022834977572896550],
>  
> (3585487947480198191351503755389672465,3634878371349827945309870445650281331],
>  
> (4030308375544369097920949957235793513,4370371117561995384766231663965756140],
>  
> (4535438169759213265090259029630802611,4689892560927039402953380743426486504],
>  
> (5287584146396015010572576411692276009,5406097111560335434282124987847604722],
>  
> (5474798573838792524085531666932132221,5783795445729041321850197497423607389],
>  
> (6483611830704416452149527075977393509,6554490532209812368334712015857832723],
>  
> (6769198671784117930281531448533862551,7068461798659361674887545682057269261],
>  
> (7321459781750457301852692823388183099,7448750723236904943507403940589890738],
>  
> (781742922765805774811726757866324,8469396585535281921975026967786880428],
>  
> (8756410520771255293393363154194997341,9286890681506309947906334839397894960],
>  
> (9289984562785308211551139473388367716,9321331828840441710600306717915048457],
>  
> (9469995775135107062094287100974323625,9470382363792511770098226296871216798],
>  
> (9631257428510732421512152732314666805,9641321600658666064769257433185081866],
>  
> (9702013322927550302527720051522091997,9780241872626230893279083352467815255],
>  
> (9879007640770428646298110236321423212,10182152036393621386537695331787643362],
>  
> (10360080166939453903119417544860436452,10603429529189820773994857334773625709],
>  
> (10692207402833482716776821297019768717,10838535107339145180497727737635889074],
>  
> (10860619443975305618726435337827820858,1114057559685360156937465840697571],
>  
> (11322875792646115624211851961004524304,11825112425833682786791532461593039643],
>  
> (12048513781140312188464187583088896317,12687919191650223582005439448687058175],
>  
> (12861816251305671464581304548879554150,12979292108191268111303901479436955430],
>  
> (13687075371975202983118946011266083299,13897613765528967198086671762344986499],
>  
> (13937446982894296731415008433912855071,14177085493396759172022529428284337497],
>  
> (14299014790401822915039688229331410744,14466298269240910614250610688198779566],
>  
> (14536342682029939793576617476528003691,14768357171390498288534602234783042815],
>  
> (14988099981250705818356170858242038058,15599412989878329362948087921887339668],
>  
> (15968468146419818468325000987016256626,16079182076390109538477997836412212011],
>  
> (16428466130201894182631991195294399068,17088413021649633531826439581836418129],
>  
> (17341287068644540915001525583405079818,17535942875297112645807432133857786812],
>  
> (18060989480816468648803710515232301420,18133373395482491286896814704489031702],
>  
> (18986353754623042416570994098920335302,19014892241623475840049677638441566177],
>  
> (19310575071842346465137910020595975021,19452801992936812168339988259445068315],
>  
> (19697153648980116903028606506675484549,19925997818291151917322736489810605468],
>  
> (19975410273801635978423827243772366001,1388998629507009063561500619627753],
>  
> (20019551596158054497149280446907913505,20031426394853251559839477096283033016],
>  
> 

[jira] [Updated] (CASSANDRA-13191) test failure in org.apache.cassandra.hints.HintsBufferPoolTest.testBackpressure

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13191:
-
Component/s: Testing

> test failure in 
> org.apache.cassandra.hints.HintsBufferPoolTest.testBackpressure
> ---
>
> Key: CASSANDRA-13191
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13191
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Priority: Major
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1392/testReport/org.apache.cassandra.hints/HintsBufferPoolTest/testBackpressure
> {noformat}
> Error Message
> Connection reset
> Stacktrace
> java.net.SocketException: Connection reset
>   at java.net.SocketInputStream.read(SocketInputStream.java:209)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
>   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
>   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
>   at java.io.InputStreamReader.read(InputStreamReader.java:184)
>   at java.io.BufferedReader.fill(BufferedReader.java:161)
>   at java.io.BufferedReader.readLine(BufferedReader.java:324)
>   at java.io.BufferedReader.readLine(BufferedReader.java:389)
>   at 
> org.jboss.byteman.agent.submit.Submit$Comm.readResponse(Submit.java:941)
>   at org.jboss.byteman.agent.submit.Submit.submitRequest(Submit.java:790)
>   at org.jboss.byteman.agent.submit.Submit.addScripts(Submit.java:603)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnit.loadScriptText(BMUnit.java:268)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$10.evaluate(BMUnitRunner.java:369)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$6.evaluate(BMUnitRunner.java:241)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:75)
> Standard Output
> ERROR [main] 2017-02-07 11:03:07,465 ?:? - SLF4J: stderr
> INFO  [main] 2017-02-07 11:03:07,650 ?:? - Configuration location: 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2017-02-07 11:03:07,651 ?:? - Loading settings from 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2017-02-07 11:03:08,225 ?:? - Node 
> configuration:[allocate_tokens_for_keyspace=null; authenticator=null; 
> authorizer=null; auto_bootstrap=true; auto_snapshot=true; 
> back_pressure_enabled=false; back_pressure_strategy=null; 
> batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; 
> batchlog_replay_throttle_in_kb=1024; broadcast_address=null; 
> broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; 
> cas_contention_timeout_in_ms=1000; cdc_enabled=false; 
> cdc_free_space_check_interval_ms=250; 
> cdc_raw_directory=build/test/cassandra/cdc_raw:222; cdc_total_space_in_mb=0; 
> client_encryption_options=; cluster_name=Test Cluster; 
> column_index_cache_size_in_kb=2; column_index_size_in_kb=4; 
> commit_failure_policy=stop; commitlog_compression=null; 
> commitlog_directory=build/test/cassandra/commitlog:222; 
> commitlog_max_compression_buffers_in_pool=3; 
> commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=5; 
> commitlog_sync=batch; commitlog_sync_batch_window_in_ms=1.0; 
> commitlog_sync_period_in_ms=0; commitlog_total_space_in_mb=null; 
> compaction_large_partition_warning_threshold_mb=100; 
> compaction_throughput_mb_per_sec=0; concurrent_compactors=4; 
> concurrent_counter_writes=32; concurrent_materialized_view_writes=32; 
> concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; 
> counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; 
> counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; 
> credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; 
> credentials_validity_in_ms=2000; cross_node_timeout=false; 
> data_file_directories=[Ljava.lang.String;@e7edb54; disk_access_mode=mmap; 
> disk_failure_policy=ignore; disk_optimization_estimate_percentile=0.95; 
> disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; 
> dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; 
> dynamic_snitch_reset_interval_in_ms=60; 
> dynamic_snitch_update_interval_in_ms=100; 
> enable_scripted_user_defined_functions=true; 
> enable_user_defined_functions=true; 
> enable_user_defined_functions_threads=true; encryption_options=null; 
> endpoint_snitch=org.apache.cassandra.locator.SimpleSnitch; 
> file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; 
> gc_warn_threshold_in_ms=0; hinted_handoff_disabled_datacenters=[]; 
> hinted_handoff_enabled=true; 

[jira] [Updated] (CASSANDRA-13220) test failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_0_x_To_indev_2_1_x.ticket_5230_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13220:
-
Component/s: Testing

> test failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_0_x_To_indev_2_1_x.ticket_5230_test
> --
>
> Key: CASSANDRA-13220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13220
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node2.log, node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/21/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_0_x_To_indev_2_1_x/ticket_5230_test
> {noformat}
> Error Message
> Unexpected error in log, see stdout
>  >> begin captured logging << 
> dtest: DEBUG: Upgrade test beginning, setting CASSANDRA_VERSION to 2.0.17, 
> and jdk to 7. (Prior values will be restored after test).
> dtest: DEBUG: Switching jdk to version 7 (JAVA_HOME is changing from 
> /usr/lib/jvm/jdk1.8.0_51 to /usr/lib/jvm/jdk1.7.0_80)
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-NvCzEj
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> dtest: DEBUG: upgrading node1 to 
> github:apache/a6237bf65a95d654b7e702e81fd0d353460d0c89
> dtest: DEBUG: Switching jdk to version 8 (JAVA_HOME is changing from 
> /usr/lib/jvm/jdk1.7.0_80 to /usr/lib/jvm/jdk1.8.0_51)
> ccm: INFO: Fetching Cassandra updates...
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> dtest: DEBUG: Querying upgraded node
> dtest: DEBUG: Querying old node
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-NvCzEj
> dtest: DEBUG: clearing ssl stores from [/tmp/dtest-NvCzEj] directory
> - >> end captured logging << -
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 219, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 593, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13192) PK indices in 'Prepared' response can overflow

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13192:
-
Component/s: CQL

> PK indices in 'Prepared' response can overflow
> --
>
> Key: CASSANDRA-13192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13192
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Olivier Michallat
>Priority: Minor
>
> CASSANDRA-7660 added PK indices to the {{Prepared}} response. They are 
> encoded as shorts.
> It's possible to prepare a query with more than 32768 placeholders (the hard 
> limit is 64K). For example, we sometimes see users running IN queries with 
> thousands of elements (a bad practice of course, but still possible).
> When a PK component is present after the 32768th position, the PK index 
> overflows and a negative value is returned. This can throw off clients if 
> they're not prepared to handle it. For example, the Java driver currently 
> accepts the response, but will fail much later if you try to compute a bound 
> statement's routing key.
> Failing fast would be safer here: the prepare request should error out if we 
> detect a PK index overflow.
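The wraparound described above is plain signed 16-bit truncation. A minimal, hypothetical illustration (not Cassandra source; `PkIndexOverflowDemo` and `toWireShort` are made-up names) of what happens when a placeholder index past 32767 is carried in a protocol [short]:

```java
// Hypothetical illustration, not Cassandra source: the protocol's [short]
// field holds 16 bits, so a Java short wraps once the PK index passes 32767.
public class PkIndexOverflowDemo {
    // Truncate a placeholder position to 16 signed bits, as a [short] field would.
    static short toWireShort(int pkIndex) {
        return (short) pkIndex;
    }

    public static void main(String[] args) {
        System.out.println(toWireShort(32767)); // prints 32767 (last safe index)
        System.out.println(toWireShort(32768)); // prints -32768 (overflowed)
    }
}
```

A client that reads the value back as an unsigned short would recover the correct index, which is why failing fast at prepare time (rather than relying on every driver to special-case this) is the safer contract.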






[jira] [Updated] (CASSANDRA-13196) test failure in snitch_test.TestGossipingPropertyFileSnitch.test_prefer_local_reconnect_on_listen_address

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13196:
-
Component/s: Testing

> test failure in 
> snitch_test.TestGossipingPropertyFileSnitch.test_prefer_local_reconnect_on_listen_address
> -
>
> Key: CASSANDRA-13196
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13196
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1487/testReport/snitch_test/TestGossipingPropertyFileSnitch/test_prefer_local_reconnect_on_listen_address
> {code}
> {novnode}
> Error Message
> Error from server: code=2200 [Invalid query] message="keyspace keyspace1 does 
> not exist"
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-k6b0iF
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.policies: INFO: Using datacenter 'dc1' for DCAwareRoundRobinPolicy 
> (via host '127.0.0.1'); if incorrect, please specify a local_dc to the 
> constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/snitch_test.py", line 87, in 
> test_prefer_local_reconnect_on_listen_address
> new_rows = list(session.execute("SELECT * FROM {}".format(stress_table)))
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 1998, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 3784, in result
> raise self._final_exception
> 'Error from server: code=2200 [Invalid query] message="keyspace keyspace1 
> does not exist"\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-k6b0iF\ndtest: DEBUG: Done setting configuration options:\n{   
> \'initial_token\': None,\n\'num_tokens\': \'32\',\n
> \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n  
>   \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': 
> 1,\n\'truncate_request_timeout_in_ms\': 1,\n
> \'write_request_timeout_in_ms\': 1}\ncassandra.policies: INFO: Using 
> datacenter \'dc1\' for DCAwareRoundRobinPolicy (via host \'127.0.0.1\'); if 
> incorrect, please specify a local_dc to the constructor, or limit contact 
> points to local cluster nodes\ncassandra.cluster: INFO: New Cassandra host 
>  discovered\n- >> end captured 
> logging << -'
> {novnode}
> {code}






[jira] [Updated] (CASSANDRA-13194) test failure in repair_tests.incremental_repair_test.TestIncRepair.compaction_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13194:
-
Component/s: Testing

> test failure in 
> repair_tests.incremental_repair_test.TestIncRepair.compaction_test
> --
>
> Key: CASSANDRA-13194
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13194
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Blake Eggleston
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/333/testReport/repair_tests.incremental_repair_test/TestIncRepair/compaction_test
> {noformat}
> Error Message
> errors={'127.0.0.3': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.3
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-2JszEQ
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 225, in compaction_test
> create_ks(session, 'ks', 3)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 725, in create_ks
> session.execute(query % (name, "'class':'SimpleStrategy', 
> 'replication_factor':%d" % rf))
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 1998, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 3784, in result
> raise self._final_exception
> "errors={'127.0.0.3': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.3\n 
> >> begin captured logging << \ndtest: DEBUG: cluster ccm 
> directory: /tmp/dtest-2JszEQ\ndtest: DEBUG: Done setting configuration 
> options:\n{   'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ncassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster 
> nodes\ncassandra.cluster: INFO: New Cassandra host  datacenter1> discovered\ncassandra.cluster: INFO: New Cassandra host  127.0.0.2 datacenter1> discovered\n- >> end captured 
> logging << -"
> {noformat}






[jira] [Updated] (CASSANDRA-13208) test failure in paging_test.TestPagingWithDeletions.test_failure_threshold_deletions

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13208:
-
Component/s: Testing

> test failure in 
> paging_test.TestPagingWithDeletions.test_failure_threshold_deletions
> 
>
> Key: CASSANDRA-13208
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13208
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node2.log, node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/528/testReport/paging_test/TestPagingWithDeletions/test_failure_threshold_deletions
> {noformat}
> Error Message
> errors={'127.0.0.2': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.2
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-iXUhoT
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> cassandra.cluster: INFO: New Cassandra host  
> discovered
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/paging_test.py", line 3404, in 
> test_failure_threshold_deletions
> self.session.execute(SimpleStatement("select * from paging_test", 
> fetch_size=1000, consistency_level=CL.ALL, 
> retry_policy=FallthroughRetryPolicy()))
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 1998, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 3784, in result
> raise self._final_exception
> "errors={'127.0.0.2': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.2\n 
> >> begin captured logging << \ndtest: DEBUG: cluster ccm 
> directory: /tmp/dtest-iXUhoT\ndtest: DEBUG: Done setting configuration 
> options:\n{   'initial_token': None,\n'num_tokens': '32',\n
> 'phi_convict_threshold': 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ncassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster 
> nodes\ncassandra.cluster: INFO: New Cassandra host  datacenter1> discovered\ncassandra.cluster: INFO: New Cassandra host  127.0.0.2 datacenter1> discovered\n- >> end captured 
> logging << -"
> {noformat}






[jira] [Updated] (CASSANDRA-13223) Unable to compute when histogram overflowed

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13223:
-
Component/s: Metrics

> Unable to compute when histogram overflowed
> ---
>
> Key: CASSANDRA-13223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Metrics
>Reporter: Vladimir Bukhtoyarov
>Priority: Minor
>
> DecayingEstimatedHistogramReservoir throws an exception when a value above 
> the maximum is recorded to the reservoir. This is very undesirable behavior, 
> because functionality like logging or monitoring should never fail with an 
> exception. The current behavior of DecayingEstimatedHistogramReservoir 
> violates the contract of 
> [Reservoir|https://github.com/dropwizard/metrics/blob/3.2-development/metrics-core/src/main/java/com/codahale/metrics/Reservoir.java]:
>  the javadocs for Reservoir say nothing about an implementation being allowed 
> to throw an exception from the getSnapshot method. As a result, all 
> Dropwizard/Metrics reporters are broken, because nobody expects a metric to 
> throw an exception on get; for example, our monitoring pipeline fails with:
> {noformat}
> com.fasterxml.jackson.databind.JsonMappingException: Unable to compute when 
> histogram overflowed (through reference chain: 
> java.util.UnmodifiableSortedMap["org.apache.cassandra.metrics.Table
> .ColUpdateTimeDeltaHistogram.all"])
> at 
> com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:339)
> at 
> com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:299)
> at 
> com.fasterxml.jackson.databind.ser.std.StdSerializer.wrapAndThrow(StdSerializer.java:342)
> at 
> com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeFields(MapSerializer.java:620)
> at 
> com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:519)
> at 
> com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:31)
> at 
> com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130)
> at 
> com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:2436)
> at 
> com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:355)
> at 
> com.fasterxml.jackson.core.JsonGenerator.writeObjectField(JsonGenerator.java:1442)
> at 
> com.codahale.metrics.json.MetricsModule$MetricRegistrySerializer.serialize(MetricsModule.java:188)
> at 
> com.codahale.metrics.json.MetricsModule$MetricRegistrySerializer.serialize(MetricsModule.java:171)
> at 
> com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130)
> at 
> com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1428)
> at 
> com.fasterxml.jackson.databind.ObjectWriter._configAndWriteValue(ObjectWriter.java:1129)
> at 
> com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:967)
> at 
> com.codahale.metrics.servlets.MetricsServlet.doGet(MetricsServlet.java:176)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
> at 
> com.ringcentral.slf4j.CleanMDCFilter.doFilter(CleanMDCFilter.java:18)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:524)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at 

[jira] [Updated] (CASSANDRA-13175) Integrate "Error Prone" Code Analyzer

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13175:
-
Component/s: Testing
 Libraries

> Integrate "Error Prone" Code Analyzer
> -
>
> Key: CASSANDRA-13175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13175
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries, Testing
>Reporter: Stefan Podkowinski
>Priority: Major
> Attachments: 0001-Add-Error-Prone-code-analyzer.patch, 
> checks-2_2.out, checks-3_0.out, checks-trunk.out
>
>
> I've been playing with [Error Prone|http://errorprone.info/] by integrating 
> it into the build process to see what kind of warnings it would produce. 
> So far I'm positively impressed by the coverage and usefulness of some of the 
> implemented checks. See attachments for results.
> Unfortunately there are still some issues with how the analyzer is affecting 
> generated code and the Guava versions in use, see 
> [#492|https://github.com/google/error-prone/issues/492]. Once those issues 
> have been solved and the resulting code isn't affected by the analyzer, I'd 
> suggest adding it to trunk with warn-only behaviour and some less useful 
> checks disabled. Alternatively, a new ant target could be added, maybe with 
> build-breaking checks and CI integration.






[jira] [Updated] (CASSANDRA-13195) Add dtest commit id to the CI builds

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13195:
-
Component/s: Testing
 Build

> Add dtest commit id to the CI builds
> 
>
> Key: CASSANDRA-13195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13195
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build, Testing
>Reporter: Aleksandr Sorokoumov
>Assignee: Michael Shuler
>Priority: Minor
>  Labels: dtest
>
> When investigating dtest-related issues, it would be useful to see the 
> dtest commit id in the Jenkins logs. AFAIK Jenkins clones dtest master right 
> now.
> For example, issues like CASSANDRA-13140 would be easier to tackle: one 
> could immediately reproduce the failure with the same dtest version used in 
> the build, and then reason about the changes between that revision and 
> master using the git history for the particular test.






[jira] [Updated] (CASSANDRA-13284) break CQL specification & reference parser into separate repo

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13284:
-
Component/s: CQL

> break CQL specification & reference parser into separate repo
> -
>
> Key: CASSANDRA-13284
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13284
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Jon Haddad
>Priority: Major
>
> It would be pretty awesome to be able to use the CQL parser in other 
> projects, such as a driver, developer studio, etc.  I propose we break out 
> the CQL spec and parser into a sub-project.






[jira] [Updated] (CASSANDRA-13296) test failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_3_0_x_To_indev_3_x.rolling_upgrade_with_internode_ssl_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13296:
-
Component/s: Testing

> test failure in 
> upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_3_0_x_To_indev_3_x.rolling_upgrade_with_internode_ssl_test
> --
>
> Key: CASSANDRA-13296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13296
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_large_dtest/22/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_3_0_x_To_indev_3_x/rolling_upgrade_with_internode_ssl_test
> {code}
> Error Message
> Ran out of time waiting for queue size (1) to be 'le' to 0. Aborting.
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 291, in rolling_upgrade_with_internode_ssl_test
> self.upgrade_scenario(rolling=True, internode_ssl=True)
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 356, in upgrade_scenario
> self._wait_until_queue_condition('writes pending verification', 
> verification_queue, operator.le, 0, max_wait_s=1200)
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 541, in _wait_until_queue_condition
> raise RuntimeError("Ran out of time waiting for queue size ({}) to be 
> '{}' to {}. Aborting.".format(qsize, opfunc.__name__, required_len))
> {code}






[jira] [Updated] (CASSANDRA-13266) Bulk loading sometimes is very slow?

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13266:
-
Component/s: Tools

> Bulk loading sometimes is very slow?
> 
>
> Key: CASSANDRA-13266
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13266
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: liangsibin
>Priority: Major
>
> When I bulkload SSTables created with CQLSSTableWriter, it is sometimes very 
> slow.
> Setup: CQLSSTableWriter with withBufferSizeInMB set to 32 MB, using 2 nodes 
> to both write the SSTables and bulkload them:
> 1. Use CQLSSTableWriter to create SSTables (60 threads).
> 2. When a directory holds over 10 rows, bulkload that directory (20 threads).
> The normal bulkload speed is about 70 MB/s per node, and bulkloading 141 GB 
> of SSTables per node takes 90 minutes. But sometimes it is very slow: the 
> same data takes 4 hours. Why?
> Here is the code that bulkloads the SSTables:
> {code:java}
> public class JmxBulkLoader {
> 
>     static final Logger LOGGER = LoggerFactory.getLogger(JmxBulkLoader.class);
>     private JMXConnector connector;
>     private StorageServiceMBean storageBean;
>     private Timer timer = new Timer();
> 
>     public JmxBulkLoader(String host, int port) throws Exception {
>         connect(host, port);
>     }
> 
>     private void connect(String host, int port) throws IOException, MalformedObjectNameException {
>         JMXServiceURL jmxUrl = new JMXServiceURL(
>                 String.format("service:jmx:rmi:///jndi/rmi://%s:%d/jmxrmi", host, port));
>         Map<String, Object> env = new HashMap<>();
>         connector = JMXConnectorFactory.connect(jmxUrl, env);
>         MBeanServerConnection mbeanServerConn = connector.getMBeanServerConnection();
>         ObjectName name = new ObjectName("org.apache.cassandra.db:type=StorageService");
>         storageBean = JMX.newMBeanProxy(mbeanServerConn, name, StorageServiceMBean.class);
>     }
> 
>     public void close() throws IOException {
>         connector.close();
>     }
> 
>     public void bulkLoad(String path) {
>         LOGGER.info("begin load data to cassandra " + new Path(path).getName());
>         timer.start();
>         storageBean.bulkLoad(path);
>         timer.end();
>         LOGGER.info("bulk load took " + timer.getTimeTakenMillis() + "ms, path: " + new Path(path).getName());
>     }
> }
> {code}
> The bulk load thread:
> {code:java}
> public class BulkThread implements Runnable {
>     private String path;
>     private String jmxHost;
>     private int jmxPort;
>
>     public BulkThread(String path, String jmxHost, int jmxPort) {
>         super();
>         this.path = path;
>         this.jmxHost = jmxHost;
>         this.jmxPort = jmxPort;
>     }
>
>     @Override
>     public void run() {
>         JmxBulkLoader bulkLoader = null;
>         try {
>             bulkLoader = new JmxBulkLoader(jmxHost, jmxPort);
>             bulkLoader.bulkLoad(path);
>         } catch (Exception e) {
>             e.printStackTrace();
>         } finally {
>             if (bulkLoader != null) {
>                 try {
>                     bulkLoader.close();
>                     bulkLoader = null;
>                 } catch (IOException e) {
>                     e.printStackTrace();
>                 }
>             }
>         }
>     }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13171) Setting -Dcassandra.join_ring=false is ambiguous

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13171:
-
Component/s: Lifecycle
 Configuration

> Setting -Dcassandra.join_ring=false is ambiguous
> 
>
> Key: CASSANDRA-13171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Lifecycle
>Reporter: Joe Olson
>Priority: Major
> Attachments: log.txt, log2.txt
>
>
> Setting -Dcassandra.join_ring=false in /etc/cassandra/jvm.options has 
> questionable results. From the log snippet below, the value is set to false 
> and read. However, everything seems to happen as if it weren't: gossip 
> occurs, and streaming data via a bulk load comes in.
> (see attached log)
> I really need this node not to join the cluster, because it is behind on its 
> compaction and will immediately crash (too many files open; the OS ulimit is 
> already set to 30 temporarily). I know of no other way to do an "offline 
> compaction".






[jira] [Updated] (CASSANDRA-14709) Global configuration parameter to reject increment repair and allow full repair only

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14709:
-
Component/s: Repair
 Configuration

> Global configuration parameter to reject increment repair and allow full 
> repair only
> 
>
> Key: CASSANDRA-14709
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14709
> Project: Cassandra
>  Issue Type: Wish
>  Components: Configuration, Repair
>Reporter: Thomas Steinmaurer
>Priority: Major
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.0.x
>
>
> We run Cassandra in AWS and on-premise at customer sites, currently 2.1 in 
> production with 3.0/3.11 in pre-production stages, including load test.
> On a migration path from 2.1 to 3.11.x, I'm afraid that at some point we will 
> end up with incremental repair being enabled / run for the first time 
> unintentionally, because:
> a) A lot of online resources / examples do not use the _-full_ command-line 
> option available since 2.2 (?)
> b) Our internal (support) tickets of course also state the nodetool repair 
> command without the -full option, as these examples are for 2.1
> Especially for on-premise customers (where we have less control than with our 
> AWS deployments), this risks getting out of control once we have 3.11 out and 
> nodetool repair is run without the -full command-line option.
> Given the troubles incremental repair introduces, and incremental being the 
> default since 2.2 (?), what do you think about a JVM system property, 
> cassandra.yaml setting, or similar to let the cluster administrator choose 
> whether incremental repairs are allowed? I know such a flag can still be 
> flipped (by the customer), but as a first safety stage it is possibly 
> sufficient.






[jira] [Updated] (CASSANDRA-14666) Race condition in AbstractReplicationStrategy.getNaturalReplicas

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14666:
-
Component/s: Distributed Metadata

> Race condition in AbstractReplicationStrategy.getNaturalReplicas
> 
>
> Key: CASSANDRA-14666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14666
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Benedict
>Assignee: Benedict
>Priority: Major
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> There is a very narrow and infrequent race window, in which two ring updates 
> occur in a short space of time (or during an interval of no queries):
> - thread A invalidates the cache after the first ring change, snapshots this 
> version of the ring, and begins to calculate its natural endpoints
> - thread B sees the second ring change, and invalidates the cache before 
> thread A completes
> - thread A writes its value to the cache, based on the old ring layout
> Now, a stale view of the endpoints for this token will be persisted in 
> AbstractReplicationStrategy until the next ring change (which may feasibly 
> never occur)
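The race described above fits a standard pattern: tag every cache fill with the ring version it was computed from, and never serve an entry whose version is stale. The sketch below is a hypothetical illustration of that pattern, not the actual Cassandra patch; all class and method names are invented.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// Hypothetical sketch: a version check prevents a slow computation from
// publishing endpoints derived from an old ring layout.
public class VersionedEndpointCache {
    private final AtomicLong ringVersion = new AtomicLong();
    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    private static final class Entry {
        final long version;
        final List<String> endpoints;
        Entry(long version, List<String> endpoints) {
            this.version = version;
            this.endpoints = endpoints;
        }
    }

    // Called on every ring change: bump the version, then clear.
    // Even if a late put lands after the clear, its old version tag
    // guarantees it will never be served.
    public void invalidate() {
        ringVersion.incrementAndGet();
        cache.clear();
    }

    public List<String> get(String token, Function<String, List<String>> compute) {
        Entry e = cache.get(token);
        long snapshot = ringVersion.get();   // snapshot BEFORE computing
        if (e != null && e.version == snapshot)
            return e.endpoints;
        List<String> endpoints = compute.apply(token);
        // Cache only if no invalidation happened while computing;
        // otherwise return the freshly computed value uncached.
        if (ringVersion.get() == snapshot)
            cache.putIfAbsent(token, new Entry(snapshot, endpoints));
        return endpoints;
    }
}
```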






[jira] [Updated] (CASSANDRA-14680) Built-in 2i implementation applies updates non-deterministically

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14680:
-
Component/s: Secondary Indexes

> Built-in 2i implementation applies updates non-deterministically
> 
>
> Key: CASSANDRA-14680
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14680
> Project: Cassandra
>  Issue Type: Bug
>  Components: Secondary Indexes
>Reporter: Chris Lohfink
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Spotted by [~cnlwsu] during CASSANDRA-14664 review, and confirmed by me and 
> [~beobal] separately.
> {{Keyspace.applyInternal()}} generates {{nowInSeconds}} from local time at 
> the moment the mutation is applied - which can happen at quite a delay from 
> the mutation's creation (think streaming path, hints, batchlog replay). That 
> {{nowInSeconds}} value is later used by the {{CassandraIndex}} {{Indexer}} to 
> determine liveness of cells, and is also used for some generated tombstones.
> Depending on when the {{Keyspace.applyInternal()}} call happens, you'll see 
> varying results in the internal 2i table, which sounds problematic. The 
> values should be derived exclusively from the cells and liveness info in the 
> partition updates.






[jira] [Updated] (CASSANDRA-14590) Size of fixed-width write values not verified from peers

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14590:
-
Component/s: Streaming and Messaging

> Size of fixed-width write values not verified from peers 
> -
>
> Key: CASSANDRA-14590
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14590
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> There are any number of reasons data arriving on a node might be corrupt in a 
> manner that can ultimately pollute non-corrupt data.  CASSANDRA-14568 is just 
> one example.  In this bug’s case, invalid clusterings were sent to a legacy 
> version peer, which eventually sent them back to a latest version peer.  In 
> either case, verification of the size of the values arriving would have 
> prevented the corruption spreading, or affecting whole-sstable operations 
> containing the values.
>  
> I propose verifying the fixed-width types arriving from peers, and also on 
> serialization.  The former permits rejecting the write with an exception, and 
> preventing the write being ACK’d, or polluting memtables (thus maintaining 
> update atomicity without affecting more records).  The latter will be a 
> guarantee that this corruption cannot make it to an sstable via any other 
> route (e.g. a bug internal to the node)
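The proposed check is cheap to state: for a fixed-width type, a serialized value must be exactly `valueLengthIfFixed()` bytes, and anything else should be rejected before it can reach a memtable or sstable. A minimal sketch of that invariant follows; it is an illustration only, with a hypothetical class name, not the actual patch.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of fixed-width validation on the write path.
// A real implementation would hook into deserialization; this just
// shows the invariant being enforced.
public class FixedWidthCheck {
    // lengthIfFixed mirrors AbstractType.valueLengthIfFixed():
    // -1 means the type is variable length and no check applies.
    public static void validate(ByteBuffer value, int lengthIfFixed) {
        if (lengthIfFixed >= 0 && value.remaining() != lengthIfFixed)
            throw new IllegalArgumentException(
                "Expected " + lengthIfFixed + " bytes for fixed-width value, got "
                + value.remaining());
    }
}
```

Rejecting the write here means the mutation is never ACK'd and the memtable stays clean, which is the atomicity argument made above.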






[jira] [Updated] (CASSANDRA-14516) filter sstables by min/max clustering bounds during reads

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14516:
-
Component/s: Local Write-Read Paths

> filter sstables by min/max clustering bounds during reads
> -
>
> Key: CASSANDRA-14516
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14516
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> In SinglePartitionReadCommand, we don't filter out sstables whose min/max 
> clustering bounds don't intersect with the clustering bounds being queried. 
> This causes us to do extra work on the read path.






[jira] [Updated] (CASSANDRA-14547) Transient Replication: Support paxos

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14547:
-
Component/s: Coordination

> Transient Replication: Support paxos
> 
>
> Key: CASSANDRA-14547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14547
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>







[jira] [Updated] (CASSANDRA-14697) Transient Replication 4.0 pre-release followup work

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14697:
-
Component/s: Core

> Transient Replication 4.0 pre-release followup work
> ---
>
> Key: CASSANDRA-14697
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14697
> Project: Cassandra
>  Issue Type: Task
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> This is an umbrella ticket for linking work done post CASSANDRA-14404.






[jira] [Updated] (CASSANDRA-14499) node-level disk quota

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14499:
-
Component/s: Core
 Configuration

> node-level disk quota
> -
>
> Key: CASSANDRA-14499
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14499
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration, Core
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Major
>
> Operators should be able to specify, via YAML, the amount of usable disk 
> space on a node as a percentage of the total available or as an absolute 
> value. If both are specified, the absolute value should take precedence. This 
> allows operators to reserve space available to the database for background 
> tasks -- primarily compaction. When a node reaches its quota, gossip should 
> be disabled to prevent it taking further writes (which would increase the 
> amount of data stored), being involved in reads (which are likely to be more 
> inconsistent over time), or participating in repair (which may increase the 
> amount of space used on the machine). The node re-enables gossip when the 
> amount of data it stores is below the quota.   
> The proposed option differs from {{min_free_space_per_drive_in_mb}}, which 
> reserves some amount of space on each drive that is not usable by the 
> database.  
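The precedence rule described above (an absolute byte value wins over a percentage when both are set) is simple to pin down in code. The following is a hypothetical sketch of that resolution logic, not the actual proposed configuration handling; names are invented.

```java
// Hypothetical sketch of the quota resolution described above:
// a percentage of total disk and an absolute byte count may both be
// configured; the absolute value takes precedence when both are set.
public class DiskQuota {
    public static long resolve(long totalBytes, double percent, long absoluteBytes) {
        if (absoluteBytes > 0)           // absolute setting wins if present
            return absoluteBytes;
        if (percent > 0)                 // else a fraction of the total
            return (long) (totalBytes * (percent / 100.0));
        return totalBytes;               // neither set: whole disk usable
    }

    // Gossip would be disabled once usage meets or exceeds the quota,
    // and re-enabled when usage drops back below it.
    public static boolean overQuota(long usedBytes, long quotaBytes) {
        return usedBytes >= quotaBytes;
    }
}
```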






[jira] [Resolved] (CASSANDRA-14472) Too many LEAK DETECTED errors in the logs.

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-14472.
--
Resolution: Information Provided

> Too many LEAK DETECTED errors in the logs.
> --
>
> Key: CASSANDRA-14472
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14472
> Project: Cassandra
>  Issue Type: Bug
>Reporter: venky
>Priority: Major
>
> We are seeing too many LEAK DETECTED errors in the system log (Cassandra 
> 3.11.0 & DSE 5.1.2).
> ```
> ERROR [Reference-Reaper:1] 2018-05-28 21:59:42,783  Ref.java:224 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@733e48f6) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@1977222631:Memory@[7dda86406000..7dda86596000)
>  was not released before the reference was garbage collected
>  
>  
>  
>  
> ERROR [Reference-Reaper:1] 2018-05-28 22:02:13,183  Ref.java:224 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@463fffb6) to class 
> org.apache.cassandra.io.util.FileHandle$Cleanup@304923348:/app/db/data2/odp_raw/rolling_device_usage_hour-4afe7060339611e897fbafb645229968/mc-152546-big-Index.db
>  was not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2018-05-28 22:02:13,183  Ref.java:224 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3071c683) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@847483493:Memory@[7dda8d406000..7dda8d596000)
>  was not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2018-05-28 22:02:13,183  Ref.java:224 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@bf809cc) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@2142654469:[Memory@[0..84),
>  Memory@[0..528)] was not released before the reference was garbage collected
>  ```






[jira] [Updated] (CASSANDRA-14464) stop-server.bat -p ../pid.txt -f command not working on windows 2016

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14464:
-
Component/s: Packaging

> stop-server.bat -p ../pid.txt -f command not working on windows 2016
> 
>
> Key: CASSANDRA-14464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14464
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Shyam Phirke
>Priority: Critical
>
> Steps to reproduce:
> 1. Copy and extract the Cassandra binaries on a Windows 2016 machine
> 2. Start Cassandra in non-legacy mode
> 3. Check the pid of Cassandra in Task Manager and compare it with the one in pid.txt
> 4. Now stop Cassandra using the command stop-server.bat -p ../pid.txt -f
> Expected:
> After executing \bin:\> stop-server.bat -p 
> ../pid.txt -f
> the Cassandra process listed in pid.txt should be killed.
>  
> Actual:
> After executing the stop command above, the Cassandra process listed in 
> pid.txt is killed, but a new process is created with a new pid. Also, pid.txt 
> is not updated with the new pid.
> This new process should not be created.
>  
> Please comment on this issue if more details are required.
> I am using Cassandra 3.11.2.
>  
> This issue impacts me significantly because the new process being created 
> breaks my application's uninstallation.
>  






[jira] [Updated] (CASSANDRA-14456) Repair session fails with buffer overflow

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14456:
-
Component/s: Repair

> Repair session fails with buffer overflow
> -
>
> Key: CASSANDRA-14456
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14456
> Project: Cassandra
>  Issue Type: Bug
>  Components: Repair
> Environment: 6 Node cluster, RF=3. The ks/table is:
>  
> {code:java}
> CREATE KEYSPACE IF NOT EXISTS alex WITH replication = { 'class': 
> 'NetworkTopologyStrategy', 'DC': 3 };
> CREATE TABLE IF NOT EXISTS alex.test2 (
>  part text,
>  clus int,
>  data text,
>  PRIMARY KEY (part, clus)
> );
> ALTER TABLE alex.test2 WITH compaction = {'class' :  
> 'LeveledCompactionStrategy', 'enabled': 'false'};
> {code}
>  
> Compactions are off. Loaded with random data, then shut down one node and kept 
> loading with random data. Then turned the node back on and ran repairs.
>Reporter: Alex Lourie
>Priority: Minor
> Attachments: log.txt
>
>
> When running a repair, a stream session fails with a BufferOverflow error. The 
> log excerpt is attached to the ticket.






[jira] [Updated] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14476:
-
Component/s: Core

> ShortType and ByteType are incorrectly considered variable-length types
> ---
>
> Key: CASSANDRA-14476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14476
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Vladimir Krivopalov
>Priority: Minor
>  Labels: lhf
>
> The AbstractType class has a method valueLengthIfFixed() that returns -1 for 
> data types with a variable length and a positive value for types with a fixed 
> length. This is primarily used for efficient serialization and 
> deserialization. 
>  
> It turns out that there is an inconsistency in types ShortType and ByteType 
> as those are in fact fixed-length types (2 bytes and 1 byte, respectively) 
> but they don't have the valueLengthIfFixed() method overloaded and it returns 
> -1 as if they were of variable length.
>  
> It would be good to fix that at some appropriate point, for example, when 
> introducing a new version of SSTables format, to keep the meaning of the 
> function consistent across data types. Saving some bytes in serialized format 
> is a minor but pleasant bonus.
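The fix amounts to overriding one method in each affected type. The sketch below shows the shape of that override against a simplified stand-in for the class hierarchy; it is an illustration, not the actual Cassandra code.

```java
// Simplified sketch of the proposed fix: fixed-width types should
// report their width instead of inheriting the variable-length default.
abstract class Type {
    // -1 means variable length (the AbstractType default behavior).
    public int valueLengthIfFixed() { return -1; }
}

class ShortType extends Type {
    @Override public int valueLengthIfFixed() { return 2; } // 16-bit smallint
}

class ByteType extends Type {
    @Override public int valueLengthIfFixed() { return 1; } // 8-bit tinyint
}
```

As the ticket notes, changing the reported length alters the serialized form, so such a fix would have to land with a new SSTable format version.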






[jira] [Commented] (CASSANDRA-13742) repair causes huge number of tiny files in incremental backup dirs

2018-11-18 Thread C. Scott Andreas (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691155#comment-16691155
 ] 

C. Scott Andreas commented on CASSANDRA-13742:
--

Hi [~wavelet], sorry for the delay in reply on this issue. This bug tracker is 
primarily used by contributors of the Apache Cassandra project toward 
development of the database itself. If this is still an issue, can you reach 
out to the user's list or public IRC channel? A member of the community may be 
able to help.

Here's a page with information on the best channels for support: 
http://cassandra.apache.org/community/

> repair causes huge number of  tiny files in incremental backup dirs 
> 
>
> Key: CASSANDRA-13742
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13742
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.18/2.1.13
>Reporter: peng xiao
>Priority: Major
>
> Hi,
> We enabled incremental backup in one DC in our cluster for ETL, but we found 
> that a huge number of tiny files is generated in the incremental backup dirs, 
> which the ETL was not able to handle and parse.
> Could you please advise? We are using Cassandra 2.1.18, and for this ETL DC we 
> are still using 2.1.13.
> Thanks






[jira] [Updated] (CASSANDRA-13701) Lower default num_tokens

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13701:
-
Component/s: Configuration

> Lower default num_tokens
> 
>
> Key: CASSANDRA-13701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Chris Lohfink
>Priority: Minor
>
> For the reasons highlighted in CASSANDRA-7032, the high number of vnodes is 
> not necessary. It is very expensive for operations processes and scanning. It 
> has come up a lot, and it is now standard practice within the community to 
> reduce num_tokens. We should just lower the default.






[jira] [Updated] (CASSANDRA-13734) BufferUnderflowException when using uppercase UUID

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13734:
-
Component/s: Local Write-Read Paths

> BufferUnderflowException when using uppercase UUID
> --
>
> Key: CASSANDRA-13734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13734
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Cassandra 2.2.8 running on OSX 10.12.5
> * org.apache.cassandra:cassandra-all:jar:2.2.8
> * com.datastax.cassandra:cassandra-driver-core:jar:3.0.0
> * org.apache.cassandra:cassandra-thrift:jar:2.2.8
>Reporter: Claudia S
>Priority: Major
>
> We have a table with a primary key of type uuid which we query for results in 
> JSON format. When I accidentally issued a query passing a UUID containing an 
> uppercase letter, I noticed that this causes a BufferUnderflowException in 
> Cassandra.
> I attempted the queries directly using cqlsh: I can retrieve the entry using a 
> standard select, but whenever I request JSON I get a BufferUnderflowException.
> {code:title=cql queries}
> cassandra@cqlsh:event_log_system> SELECT * FROM event WHERE id = 
> 559a4d83-9410-4b69-b459-566b8cf57aaa;
> [RESULT REMOVED]
> (1 rows)
> cassandra@cqlsh:event_log_system> SELECT * FROM event WHERE id = 
> 559a4d83-9410-4b69-b459-566b8cf57AAA;
> [RESULT REMOVED]
> (1 rows)
> cassandra@cqlsh:event_log_system> SELECT JSON * FROM event WHERE id = 
> 559a4d83-9410-4b69-b459-566b8cf57AAA;
> ServerError: java.nio.BufferUnderflowException
> cassandra@cqlsh:event_log_system> SELECT JSON * FROM event WHERE id = 
> 559a4d83-9410-4b69-b459-566b8cf57aaa;
> ServerError: java.nio.BufferUnderflowException
> {code}
> {code:title=log}
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,392 Message.java:506 - 
> Received: QUERY SELECT JSON * FROM event WHERE id = 
> 559a4d83-9410-4b69-b459-566b8cf57AAA;, v=4
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,392 QueryProcessor.java:221 - 
> Process org.apache.cassandra.cql3.statements.SelectStatement@67e6c0c @CL.ONE
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,392 ReadCallback.java:76 - 
> Blockfor is 1; setting up requests to localhost/127.0.0.1
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,393 
> AbstractReadExecutor.java:118 - reading data locally
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,393 SliceQueryFilter.java:269 
> - collecting 0 of 2147483647: :false:0@150126701983
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,393 SliceQueryFilter.java:269 
> - collecting 1 of 2147483647: can_login:false:1@150126701983
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,393 SliceQueryFilter.java:269 
> - collecting 1 of 2147483647: is_superuser:false:1@150126701983
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,393 SliceQueryFilter.java:269 
> - collecting 1 of 2147483647: salted_hash:false:60@150126701983
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,393 StorageProxy.java:1449 - 
> Read: 0 ms.
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,394 ReadCallback.java:76 - 
> Blockfor is 1; setting up requests to localhost/127.0.0.1
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,394 
> AbstractReadExecutor.java:118 - reading data locally
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,394 SliceQueryFilter.java:269 
> - collecting 0 of 2147483647: :false:0@150126701983
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,394 SliceQueryFilter.java:269 
> - collecting 1 of 2147483647: can_login:false:1@150126701983
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,394 SliceQueryFilter.java:269 
> - collecting 1 of 2147483647: is_superuser:false:1@150126701983
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,394 SliceQueryFilter.java:269 
> - collecting 1 of 2147483647: salted_hash:false:60@150126701983
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,395 StorageProxy.java:1449 - 
> Read: 0 ms.
> DEBUG [SharedPool-Worker-1] 2017-07-28 20:40:41,395 SliceQueryPager.java:92 - 
> Querying next page of slice query; new filter: SliceQueryFilter 
> [reversed=false, slices=[[, ]], count=100, toGroup = 0]
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,395 ReadCallback.java:76 - 
> Blockfor is 1; setting up requests to localhost/127.0.0.1
> TRACE [SharedPool-Worker-1] 2017-07-28 20:40:41,395 
> AbstractReadExecutor.java:118 - reading data locally
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,396 SliceQueryFilter.java:269 
> - collecting 0 of 100: :false:0@1501267173323000
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,396 SliceQueryFilter.java:269 
> - collecting 1 of 100: action:false:4@1501267173323000
> TRACE [SharedPool-Worker-2] 2017-07-28 20:40:41,396 SliceQueryFilter.java:269 
> - collecting 1 of 100: 

[jira] [Updated] (CASSANDRA-13667) DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13667:
-
Component/s: CQL

> DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables
> 
>
> Key: CASSANDRA-13667
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13667
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefano Ortolani
>Priority: Minor
>
> As soon as I drop a keyspace or a table, I see _all_ nodes struggling to 
> acknowledge the new schema because of several flushes and compactions 
> happening on _all_ keyspaces and tables (completely unrelated to the 
> dropped keyspace/table).






[jira] [Updated] (CASSANDRA-13962) should java.io.OutputStream.flush() be called on Commit log operations ?

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13962:
-
Component/s: Local Write-Read Paths

> should java.io.OutputStream.flush() be called on Commit log operations ?
> 
>
> Key: CASSANDRA-13962
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13962
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: cassandra 2.2.8
>Reporter: Ilya Shipitsin
>Priority: Major
>
> We run a highly loaded Cassandra cluster on 2.2.8.
> When we reboot a node, we very often observe a broken commit log, like the one 
> described here:
> https://stackoverflow.com/questions/33304367/cassandra-exiting-due-to-error-while-processing-commit-log-during-initializatio
> I guess that output is not flushed upon commit log write, i.e. 
> java.io.OutputStream.flush() is not called.
> Google also says the same issue occurs with sstables (they are written less 
> frequently, so we did not observe that in practice).
> Any idea why output is not flushed? Is it done on purpose?






[jira] [Resolved] (CASSANDRA-13742) repair causes huge number of tiny files in incremental backup dirs

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-13742.
--
Resolution: Information Provided

> repair causes huge number of  tiny files in incremental backup dirs 
> 
>
> Key: CASSANDRA-13742
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13742
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.18/2.1.13
>Reporter: peng xiao
>Priority: Major
>
> Hi,
> We enabled incremental backup in one DC in our cluster for ETL, but we found 
> that a huge number of tiny files is generated in the incremental backup dirs, 
> which the ETL was not able to handle and parse.
> Could you please advise? We are using Cassandra 2.1.18, and for this ETL DC we 
> are still using 2.1.13.
> Thanks



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13736) CASSANDRA-9673 cause atomic batch p99 increase 3x

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13736:
-
Component/s: Coordination

> CASSANDRA-9673 cause atomic batch p99 increase 3x
> -
>
> Key: CASSANDRA-13736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13736
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: xiangzhou xia
>Assignee: xiangzhou xia
>Priority: Major
>
> When we tested atomic batches with production traffic, we found that the 
> p99 latency of atomic batch writes is 2x-3x worse than on 2.2.
> After debugging, we found that the regression is caused by CASSANDRA-9673. 
> That patch changed the consistency level of the batchlog store from ONE to 
> TWO. [~iamaleksey] considered blocking for only one batchlog message a bug 
> and changed it to block for two in CASSANDRA-9673; I think blocking for one 
> is actually a very good optimization to reduce latency.
> Setting the consistency to ONE decreases the chance that a slow data node 
> (GC, long message queue, etc.) affects the latency of an atomic batch. In 
> our shadow cluster, when we changed the consistency from TWO to ONE, we saw 
> a 2x-3x p99 latency drop for atomic batches.
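
The latency effect described above can be modeled simply: a write that waits
for k of n replica acknowledgements completes at the k-th smallest replica
latency, so requiring two acks exposes every batch to the second-slowest
replica. A toy sketch with hypothetical numbers:

```python
def batch_ack_latency(replica_latencies_ms, required_acks):
    """Completion time of a write that waits for `required_acks` replica
    acknowledgements: the k-th order statistic of the replica latencies."""
    return sorted(replica_latencies_ms)[required_acks - 1]

# One healthy replica (5 ms) and one replica in a GC pause (120 ms):
fast_path = batch_ack_latency([5, 120], required_acks=1)  # completes in 5 ms
slow_path = batch_ack_latency([5, 120], required_acks=2)  # stalls for 120 ms
```

With required_acks=1 a single slow replica is invisible; with required_acks=2
it dominates the tail, matching the observed p99 regression.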



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13076) unexpected leap year differences for years between 0 and 1583

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13076:
-
Component/s: CQL

> unexpected leap year differences for years between 0 and 1583
> -
>
> Key: CASSANDRA-13076
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13076
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native 
> protocol v4
>Reporter: Jens Geyer
>Priority: Major
> Attachments: CQL-Statements.zip, CassandraTicket13076.cs
>
>
> When inserting timestamps between year 0 and 1583 into a timestamp column, 
> there are unexpected differences between the value in the CQL statement and 
> the actual data written into the field. 
> Test case: insert February 1st for each year from 0 up to 3000. We see the 
> difference change at each leap year that is a multiple of 100, and finally 
> reset after the calendar reform of 1582. 
> {code}
> read 30.01.0001 00:00:00 +00:00, difference -2 days
> read 31.01.0101 00:00:00 +00:00, difference -1 days
> read 01.02.0201 00:00:00 +00:00, difference 0 days
> read 02.02.0301 00:00:00 +00:00, difference 1 days
> read 03.02.0501 00:00:00 +00:00, difference 2 days
> read 04.02.0601 00:00:00 +00:00, difference 3 days
> read 05.02.0701 00:00:00 +00:00, difference 4 days
> read 06.02.0901 00:00:00 +00:00, difference 5 days
> read 07.02.1001 00:00:00 +00:00, difference 6 days
> read 08.02.1101 00:00:00 +00:00, difference 7 days
> read 09.02.1301 00:00:00 +00:00, difference 8 days
> read 10.02.1401 00:00:00 +00:00, difference 9 days
> read 11.02.1501 00:00:00 +00:00, difference 10 days
> read 01.02.1583 00:00:00 +00:00, difference 0 days
> {code}
> So it looks like there is an inconsistency between calendar systems. 
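
The one-day-per-century growth in the offsets, vanishing after 1582, is the
classic signature of one side applying Julian leap-year rules to pre-Gregorian
dates while the other extrapolates the Gregorian rules backwards (a proleptic
Gregorian calendar). A sketch of the two rules, assuming that is the cause:

```python
def is_leap_julian(year):
    # Julian calendar: every 4th year is a leap year
    return year % 4 == 0

def is_leap_gregorian(year):
    # (Proleptic) Gregorian calendar: century years are leap years
    # only when divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Century years like 100, 200, 300 are leap years under the Julian rule but
# not the Gregorian one, so each such century adds one day of divergence --
# matching the staircase of differences observed above.
```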



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13134) Cqlsh doesn't support 24-bit console color

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13134:
-
Component/s: Tools

> Cqlsh doesn't support 24-bit console color
> --
>
> Key: CASSANDRA-13134
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13134
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Windows 10 with Anniversary update
>Reporter: Patrick
>Priority: Minor
>
> After updating my machine to the Windows Anniversary update, which now 
> supports 24-bit color, cqlsh produces mangled output. It was working fine 
> before.
> The code page is 65001, which is UTF-8. This helped before the update.
> Output from logging in and selecting rows:
> {code}
> C:\Users\xyz\Desktop\Utils\cassandra\bin>chcp 65001
> Active code page: 65001
> C:\Users\xyz\Desktop\Utils\cassandra\bin>cqlsh -u xyz --cqlversion=3.4.0 -C 
> Cassandra-dev01.xyz.local
> Password:
> Connected to [0;1;34mxyz Development[0m at cassandra-dev01.xyz.local:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.11.1485 | CQL spec 3.4.0 | Native protocol v4]
> Use HELP for help.
> xyz@cqlsh> use "xyz";
> xyz@cqlsh:xyz> select * from "DatabaseSetting" limit 1;
>  [0;1;31mName[0m | [0;1;35mValue[0m
> --+---
>  [0;1;33mcc38d7f8-821f-491e-8451-530c42ff61fc[0m |   [0;1;33mxyz[0m
> (1 rows)
> xyz@cqlsh:xyz>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-13110) no host available exception

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-13110.
--
Resolution: Information Provided

> no host available exception
> ---
>
> Key: CASSANDRA-13110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13110
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dhruv
>Priority: Major
>
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: localhost/127.0.0.1:9042 
> (com.datastax.driver.core.exceptions.ServerError: An unexpected error 
> occurred server side on localhost/127.0.0.1:9042: 
> java.nio.BufferUnderflowException))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
>   at 
> info.archinnov.achilles.internals.dsl.AsyncAware.extractCauseFromExecutionException(AsyncAware.java:34)
>   at 
> info.archinnov.achilles.internals.dsl.action.MutationAction.executeWithStats(MutationAction.java:49)
>   at 
> com.prontoitlabs.mobiadz.repository.AdvertiserRepository.updateAdvertiser(AdvertiserRepository.java:122)
>   at 
> com.prontoitlabs.mobiadz.service.impl.AdvertiserServiceImpl.updateAdvertiser(AdvertiserServiceImpl.java:248)
>   at 
> com.prontoitlabs.mobiadz.service.impl.OfferServiceImpl.add(OfferServiceImpl.java:183)
>   at 
> com.prontoitlabs.mobiadz.controller.OfferController.add(OfferController.java:109)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
>   at 
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
>   at 
> org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
>   at 
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
>   at 
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
>   at 
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
>   at 
> org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384)
>   at 
> org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:206)
>   at 
> org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:180)
>   at 
> org.glassfish.grizzly.http.server.HttpHandlerChain.doHandle(HttpHandlerChain.java:197)
>   at 
> org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
>   at 
> org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
>   at 
> org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
>   at 
> org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
>   at 
> org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
>   at 
> org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
>   at 
> 

[jira] [Updated] (CASSANDRA-13102) test failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13102:
-
Component/s: Testing

> test failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_blogposts
> -
>
> Key: CASSANDRA-13102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Kurt Greaves
>Priority: Major
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/875/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts
> {noformat}
> Error Message
> ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried 
> connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-1c8hQM
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-1c8hQM
> dtest: DEBUG: clearing ssl stores from [/tmp/dtest-1c8hQM] directory
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-C2OXmx
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 1079, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2553, in test_bulk_round_trip_blogposts
> stress_table='stresscql.blogposts')
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2456, in _test_bulk_round_trip
> self.prepare(nodes=nodes, partitioner=partitioner, 
> configuration_options=configuration_options)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 129, in prepare
> self.session = self.patient_cql_connection(self.node1)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 508, in 
> patient_cql_connection
> bypassed_exception=NoHostAvailable
>   File "/home/automaton/cassandra-dtest/dtest.py", line 201, in 
> retry_till_success
> return fun(*args, **kwargs)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 441, in cql_connection
> protocol_version, port=port, ssl_opts=ssl_opts)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 469, in 
> _create_session
> session = cluster.connect(wait_for_all_pools=True)
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 1180, in connect
> self.control_connection.connect()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 2597, in connect
> self._set_new_connection(self._reconnect_internal())
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 2634, in _reconnect_internal
> raise NoHostAvailable("Unable to connect to any servers", errors)
> '(\'Unable to connect to any servers\', {\'127.0.0.1\': error(111, "Tried 
> connecting to [(\'127.0.0.1\', 9042)]. Last error: Connection 
> refused")})\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-1c8hQM\ndtest: DEBUG: Done setting configuration options:\n{   
> \'initial_token\': None,\n\'num_tokens\': \'32\',\n
> \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n  
>   \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': 
> 1,\n\'truncate_request_timeout_in_ms\': 1,\n
> \'write_request_timeout_in_ms\': 1}\ndtest: DEBUG: removing ccm cluster 
> test at: /tmp/dtest-1c8hQM\ndtest: DEBUG: clearing ssl stores from 
> [/tmp/dtest-1c8hQM] directory\ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-C2OXmx\ndtest: DEBUG: Done setting configuration options:\n{   
> \'initial_token\': None,\n\'num_tokens\': \'32\',\n
> \'phi_convict_threshold\': 5,\n

[jira] [Updated] (CASSANDRA-13055) DoS by StreamReceiveTask, during incremental repair

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13055:
-
Component/s: Repair

> DoS by StreamReceiveTask, during incremental repair
> ---
>
> Key: CASSANDRA-13055
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13055
> Project: Cassandra
>  Issue Type: Bug
>  Components: Repair
>Reporter: Tom van der Woerdt
>Priority: Major
> Attachments: untitled 2.txt
>
>
> There's no limit on how many StreamReceiveTask there can be, and during an 
> incremental repair on a vnode cluster with high replication factors, this can 
> lead to thousands of concurrent StreamReceiveTask threads, effectively DoSing 
> the node.
> I just found one of my nodes with 1000+ loadavg, caused by 1363 concurrent 
> StreamReceiveTask threads.
> That sucks :)
> I think:
> * Cassandra shouldn't allow more than X concurrent StreamReceiveTask threads
> * StreamReceiveTask threads should be at a lower priority, like compaction 
> threads
> Alternative ideas welcome as well, of course.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13096) Snapshots slow down jmx scraping

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13096:
-
Component/s: Metrics

> Snapshots slow down jmx scraping
> 
>
> Key: CASSANDRA-13096
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13096
> Project: Cassandra
>  Issue Type: Bug
>  Components: Metrics
>Reporter: Maxime Fouilleul
>Priority: Major
> Attachments: CPU Load.png, Clear Snapshots.png, JMX Scrape 
> Duration.png
>
>
> Hello,
> We are scraping the JMX metrics through a Prometheus exporter and noticed 
> that some nodes became really slow to answer (more than 20 seconds). After 
> some investigation we did not find any hardware problem or overload issue on 
> these "slow" nodes. It happens on different clusters, some with only a few 
> gigabytes of data, and it does not seem to be related to a specific version 
> either, as it happens on 2.1, 2.2 and 3.0 nodes. 
> After some unsuccessful attempts, one of our ideas was to clear the 
> snapshots remaining on one problematic node:
> {code}
> nodetool clearsnapshot
> {code}
> And the magic happened... as you can see in the attached diagrams, the 
> second we cleared the snapshots, the CPU activity dropped immediately and 
> the time to scrape the JMX metrics went from 20+ seconds to near-instant...
> Can you enlighten us on this issue? Once again, it appears on all three of 
> our 2.1, 2.2 and 3.0 versions and on different data volumes, and it is not 
> systematically linked to snapshots, as we have some nodes with the same 
> snapshot volume that are doing fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13051) SSTable Corruption - Partition Key fails assertion

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13051:
-
Component/s: Observability

> SSTable Corruption - Partition Key fails assertion
> --
>
> Key: CASSANDRA-13051
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13051
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
> Environment: Cassandra 3.7
> Single DC
> 5 Nodes
> RF 3
> NetworkTopologyStrategy
> OS: Ubuntu
>Reporter: Malte Pickhan
>Priority: Major
>
> When running a repair the following exception is triggered:
> {code}
> java.lang.AssertionError: null  
> at 
> org.apache.cassandra.db.marshal.TimeUUIDType.compareCustom(TimeUUIDType.java:65)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:157) 
> ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.dht.LocalPartitioner$LocalToken.compareTo(LocalPartitioner.java:139)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.dht.LocalPartitioner$LocalToken.compareTo(LocalPartitioner.java:120)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85) 
> ~[apache-cassandra-3.7.jar:3.7]  
> at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:39) 
> ~[apache-cassandra-3.7.jar:3.7]  
> at 
> java.util.concurrent.ConcurrentSkipListMap.cpr(ConcurrentSkipListMap.java:655)
>  ~[na:1.8.0_91]  
> at 
> java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:794)
>  ~[na:1.8.0_91]  
> at 
> java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1546)
>  ~[na:1.8.0_91]  
> at org.apache.cassandra.db.Memtable.getPartition(Memtable.java:355) 
> ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.compaction.CompactionController.maxPurgeableTimestamp(CompactionController.java:221)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.compaction.CompactionIterator$Purger.getMaxPurgeableTimestamp(CompactionIterator.java:304)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.lambda$new$0(PurgeFunction.java:38)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at org.apache.cassandra.db.DeletionPurger.shouldPurge(DeletionPurger.java:33) 
> ~[apache-cassandra-3.7.jar:3.7]  
> at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:386) 
> ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:120) 
> ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-3.7.jar:3.7]  
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]  
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_91]  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]  
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]  
> {code}
> One thing that would be nice is to provide an actual message in the 
> assertion, to avoid this "null" string. Furthermore, it would be great to 
> include the data that caused the assertion to fail.
> Actually I have no clue how we triggered this, but I will 
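
The requested improvement, sketched in Python (in Java this would be an
`assert o1 != null : "...";` form in TimeUUIDType.compareCustom): fail with a
message that names the broken invariant and carries the offending values
instead of a bare `AssertionError: null`.

```python
def compare_custom(o1, o2):
    """Comparison that reports its inputs when the non-null invariant breaks
    (illustrative sketch; not the actual TimeUUIDType code)."""
    assert o1 is not None and o2 is not None, (
        "TimeUUID comparison requires non-null operands, got %r and %r"
        % (o1, o2))
    return (o1 > o2) - (o1 < o2)
```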

[jira] [Updated] (CASSANDRA-13377) test failure in org.apache.cassandra.service.RemoveTest.testLocalHostId-compression

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13377:
-
Component/s: Testing

> test failure in 
> org.apache.cassandra.service.RemoveTest.testLocalHostId-compression
> ---
>
> Key: CASSANDRA-13377
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13377
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Shuler
>Priority: Major
>  Labels: test-failure
> Attachments: jenkins-cassandra-3.11_testall-124_logs.tar.gz
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/124/testReport/org.apache.cassandra.service/RemoveTest/testLocalHostId_compression
> {noformat}
> Stacktrace
> java.lang.NullPointerException
>   at org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:881)
>   at org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:876)
>   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2275)
>   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1928)
>   at org.apache.cassandra.Util.createInitialRing(Util.java:222)
>   at org.apache.cassandra.service.RemoveTest.setup(RemoveTest.java:89)
> Standard Output
> ERROR [main] 2017-03-23 21:27:05,889 SubstituteLogger.java:250 - SLF4J: stderr
> INFO  [main] 2017-03-23 21:27:06,238 YamlConfigurationLoader.java:89 - 
> Configuration location: 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2017-03-23 21:27:06,241 YamlConfigurationLoader.java:108 - 
> Loading settings from file:/home/automaton/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2017-03-23 21:27:07,506 Config.java:475 - Node 
> configuration:[allocate_tokens_for_keyspace=null; authentica
> ...[truncated 176636 chars]...
> ain] 2017-03-23 21:27:16,054 YamlConfigurationLoader.java:108 - Loading 
> settings from file:/home/automaton/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2017-03-23 21:27:16,059 StorageService.java:2171 - Node 
> /127.0.0.5 state bootstrapping, token [31359799266797610263756179790339965311]
> INFO  [main] 2017-03-23 21:27:16,059 StorageService.java:2184 - Node 
> /127.0.0.5 state jump to bootstrap
> INFO  [main] 2017-03-23 21:27:16,059 MessagingService.java:979 - Waiting for 
> messaging service to quiesce
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13007) Make compaction more testable

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13007:
-
Component/s: Testing
 Compaction

> Make compaction more testable
> -
>
> Key: CASSANDRA-13007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13007
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Testing
>Reporter: Jon Haddad
>Priority: Major
>
> Compaction is written in a manner that makes it difficult to unit test edge 
> cases.  I'm opening this up as a blanket issue, hopefully we can get enough 
> requirements in here to make some decisions to improve the testability and 
> maintainability of compaction code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13049) Too many open files during bootstrapping

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13049:
-
Component/s: Lifecycle

> Too many open files during bootstrapping
> 
>
> Key: CASSANDRA-13049
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13049
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Lifecycle
>Reporter: Simon Zhou
>Assignee: Simon Zhou
>Priority: Major
>
> We just upgraded from 2.2.5 to 3.0.10 and hit an issue during bootstrapping, 
> so likely this is something made worse by the IO performance improvements in 
> Cassandra 3.
> On our side, the problem is that we have lots of small sstables, so when 
> bootstrapping a new node it receives lots of files during streaming, and 
> Cassandra keeps all of them open for an unpredictable amount of time. 
> Eventually we hit a "Too many open files" error, and around that time I 
> could see ~1M open files through lsof, almost all of them *-Data.db and 
> *-Index.db. We should definitely use a better compaction strategy to reduce 
> the number of sstables, but I see a few possible improvements in Cassandra:
> 1. We use memory mapping when reading data from sstables. Every time we 
> create a new memory map, one more file descriptor is held open. Memory 
> mapping improves IO performance when dealing with large files; do we want to 
> set a file size threshold for using it?
> 2. Whenever we finish receiving a file from a peer, we create a 
> SSTableReader/BigTableReader, which includes opening the data file and index 
> file, and keep them open until some unpredictable time later. See 
> StreamReceiveTask#L110, BigTableWriter#openFinal and 
> SSTableReader#InstanceTidier. Would it be better to lazily open the 
> data/index files, or to close them more often to reclaim the file 
> descriptors?
> I searched all known issues in JIRA and this looks like a new issue in 
> Cassandra 3. cc [~Stefania] for comments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12991) Inter-node race condition in validation compaction

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-12991:
-
Component/s: Repair

> Inter-node race condition in validation compaction
> --
>
> Key: CASSANDRA-12991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12991
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Repair
>Reporter: Benjamin Roth
>Priority: Minor
>
> Problem:
> When a validation compaction is triggered by a repair, it may happen that 
> due to in-flight mutations the merkle trees differ even though the data is 
> actually consistent.
> Example:
> t = 10000:
> Repair starts, triggers validations
> Node A starts validation
> t = 10001:
> Mutation arrives at Node A
> t = 10002:
> Mutation arrives at Node B
> t = 10003:
> Node B starts validation
> The hashes of nodes A and B will differ, but the data is consistent when 
> viewed (think of it like a snapshot) at t = 10000.
> Impact:
> Unnecessary streaming happens. This may not have a big impact on low-traffic 
> CFs and partitions, but on high-traffic CFs, and maybe on very big 
> partitions, the impact may be bigger, and it is a waste of resources.
> Possible solution:
> Build hashes based upon a snapshot timestamp.
> This requires SSTables created after that timestamp to be filtered when doing 
> a validation compaction:
> - Cells with timestamp > snapshot time have to be removed
> - Tombstone range markers have to be handled
>  - Bounds have to be removed if delete timestamp > snapshot time
>  - Boundary markers have to be either changed to a bound or completely 
> removed, depending if start and/or end are both affected or not
> Probably this is a known behaviour. Have there been any discussions about 
> this in the past? I did not find a matching issue, so I created this one.
> I am happy about any feedback whatsoever.
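
The proposed fix for regular cells can be sketched as a timestamp filter
applied during validation (tombstone range markers need the extra handling
described above; this sketch ignores them):

```python
def cells_for_validation(cells, snapshot_ts):
    """Keep only cells written at or before the agreed snapshot timestamp,
    so a mutation that lands on the replicas at slightly different times
    during validation is excluded from every merkle tree alike."""
    return [(value, ts) for (value, ts) in cells if ts <= snapshot_ts]

# Node A saw an in-flight mutation at t=10001, node B only at t=10002; with
# a shared snapshot timestamp both nodes hash the same cell set.
node_a = [("v1", 9000), ("v2", 10001)]
node_b = [("v1", 9000), ("v2", 10002)]
```

Hashing the filtered cell sets instead of the raw ones removes the spurious
merkle-tree mismatch and the unnecessary streaming it triggers.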



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13048) Support SASL mechanism negotiation in existing Authenticators

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13048:
-
Component/s: Auth

> Support SASL mechanism negotiation in existing Authenticators
> -
>
> Key: CASSANDRA-13048
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13048
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Auth
>Reporter: Ben Bromhead
>Assignee: Ben Bromhead
>Priority: Minor
>  Labels: client-auth, client-impacting
>
> [CASSANDRA-11471|https://issues.apache.org/jira/browse/CASSANDRA-11471] adds 
> support for SASL mechanism negotiation to the native protocol. Existing 
> Authenticators should follow the SASL negotiation mechanism used in 
> [CASSANDRA-11471|https://issues.apache.org/jira/browse/CASSANDRA-11471].
> It may make sense to make the SASL negotiation mechanism extensible so other 
> Authenticators can use it.






[jira] [Updated] (CASSANDRA-12986) dtest failure in upgrade_internal_auth_test.TestAuthUpgrade.test_upgrade_legacy_table

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-12986:
-
Component/s: Testing

> dtest failure in 
> upgrade_internal_auth_test.TestAuthUpgrade.test_upgrade_legacy_table
> -
>
> Key: CASSANDRA-12986
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12986
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Sean McCarthy
>Priority: Major
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1437/testReport/upgrade_internal_auth_test/TestAuthUpgrade/test_upgrade_legacy_table
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [main] 2016-12-01 03:08:30,985 CassandraDaemon.java:724 - Detected 
> unreadable sstables 
> 

[jira] [Updated] (CASSANDRA-13086) CAS resultset sometimes does not contain value column even though wasApplied is false

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-13086:
-
Component/s: CQL

> CAS resultset sometimes does not contain value column even though wasApplied 
> is false
> -
>
> Key: CASSANDRA-13086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13086
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Christian Spriegel
>Priority: Minor
>  Labels: LWT
>
> Every now and then I see a ResultSet for one of my CAS queries that contains 
> wasApplied=false but does not contain my value column.
> I just now found another occurrence, which causes the following exception in 
> the driver:
> {code}
> ...
> Caused by: com.mycompany.MyDataaccessException: checkLock(ResultSet[ 
> exhausted: true, Columns[[applied](boolean)]])
> at com.mycompany.MyDAO._checkLock(MyDAO.java:408)
> at com.mycompany.MyDAO._releaseLock(MyDAO.java:314)
> ... 16 more
> Caused by: java.lang.IllegalArgumentException: value is not a column defined 
> in this metadata
> at 
> com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:266)
> at 
> com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:272)
> at 
> com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:81)
> at 
> com.datastax.driver.core.AbstractGettableData.getBytes(AbstractGettableData.java:151)
> at com.mycompany.MyDAO._checkLock(MyDAO.java:383)
> ... 17 more
> {code}
> The query the application was doing:
> delete from "Lock" where lockname=:lockname and id=:id if value=:value;
> I did some debugging recently and was able to track these ResultSets back to 
> the "CAS precondition does not match current values {}" return statement in 
> StorageProxy.cas().
> I saw this happening with Cassandra 3.0.10 and earlier versions.






[jira] [Resolved] (CASSANDRA-12501) Table read error on migrating from 2.1.9 to 3x

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-12501.
--
Resolution: Cannot Reproduce

> Table read error on migrating from 2.1.9 to 3x
> --
>
> Key: CASSANDRA-12501
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12501
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux ubuntu 14.04
>Reporter: Sushma Pradeep
>Priority: Critical
>
> We are upgrading our db from cassandra 1.2.9 to 3.x. 
> We successfully upgraded from 1.2.9 -> 2.0.17 -> 2.1.9
> However when I upgraded from 2.1.9 -> 3.0.8 I was not able to read one table. 
> Table is described as:
> {code}
> CREATE TABLE xchngsite.settles (
> key ascii,
> column1 bigint,
> column2 ascii,
> "" map,
> value blob,
> PRIMARY KEY (key, column1, column2)
> ) WITH COMPACT STORAGE
> AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 1.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> However I am able to read all other tables. 
> When I run {{select * from table}}, I get below error:
> {code}
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1297, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> And {{tail -f system.log}} says:
> {code}
> WARN  [SharedPool-Worker-2] 2016-08-19 08:50:47,949 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2471)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_77]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.4.jar:3.4]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.4.jar:3.4]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:113) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:105) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:310)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:265)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:245)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  

[jira] [Updated] (CASSANDRA-12540) Password Management: Hardcoded Password

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-12540:
-
Component/s: Auth

> Password Management: Hardcoded Password
> ---
>
> Key: CASSANDRA-12540
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12540
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Auth
>Reporter: Eduardo Aguinaga
>Priority: Major
> Attachments: 12540-trunk.patch
>
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis include the issue below.
> Issue:
> In the file EncryptionOptions.java there are hard coded passwords on lines 23 
> and 25.
> {code:java}
> EncryptionOptions.java, lines 20-30:
> 20 public abstract class EncryptionOptions
> 21 {
> 22 public String keystore = "conf/.keystore";
> 23 public String keystore_password = "cassandra";
> 24 public String truststore = "conf/.truststore";
> 25 public String truststore_password = "cassandra";
> 26 public String[] cipher_suites = {
> 27 "TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA",
> 28 "TLS_DHE_RSA_WITH_AES_128_CBC_SHA", 
> "TLS_DHE_RSA_WITH_AES_256_CBC_SHA",
> 29 "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", 
> "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA" 
> 30 };
> {code}
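One remediation direction, sketched here with illustrative names (these are not Cassandra's real configuration keys): resolve the keystore and truststore passwords at startup from the environment or a secrets store instead of compiling a well-known default into the class, and fail fast when no value is provided.

```python
import os

def load_keystore_password(env_var="CASSANDRA_KEYSTORE_PASSWORD"):
    """Read the keystore password from the environment; never fall back
    to a hardcoded default."""
    password = os.environ.get(env_var)
    if not password:
        # Refusing to start is safer than silently using a known default.
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return password

# Demo only; in practice the service manager sets this variable.
os.environ["CASSANDRA_KEYSTORE_PASSWORD"] = "example-secret"
print(load_keystore_password())  # → example-secret
```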






[jira] [Updated] (CASSANDRA-12495) dtest failure in snapshot_test.TestArchiveCommitlog.test_archive_commitlog_point_in_time_with_active_commitlog_ln

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-12495:
-
Component/s: Testing

> dtest failure in 
> snapshot_test.TestArchiveCommitlog.test_archive_commitlog_point_in_time_with_active_commitlog_ln
> -
>
> Key: CASSANDRA-12495
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12495
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Sean McCarthy
>Priority: Major
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/376/testReport/snapshot_test/TestArchiveCommitlog/test_archive_commitlog_point_in_time_with_active_commitlog_ln/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/snapshot_test.py", line 198, in 
> test_archive_commitlog_point_in_time_with_active_commitlog_ln
> self.run_archive_commitlog(restore_point_in_time=True, 
> archive_active_commitlogs=True, archive_command='ln')
>   File "/home/automaton/cassandra-dtest/snapshot_test.py", line 281, in 
> run_archive_commitlog
> set())
>   File "/usr/lib/python2.7/unittest/case.py", line 522, in assertNotEqual
> raise self.failureException(msg)
> "set([]) == set([])
> {code}






[jira] [Updated] (CASSANDRA-12519) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-12519:
-
Component/s: Testing

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12519
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12519
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Sean McCarthy
>Priority: Major
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure: 
> http://cassci.datastax.com/job/trunk_offheap_dtest/379/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 209, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}






[jira] [Updated] (CASSANDRA-12491) dtest failure in upgrade_tests.paging_test.TestPagingDataNodes2RF1_Upgrade_current_2_2_x_To_indev_3_x.static_columns_paging_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-12491:
-
Component/s: Testing

> dtest failure in 
> upgrade_tests.paging_test.TestPagingDataNodes2RF1_Upgrade_current_2_2_x_To_indev_3_x.static_columns_paging_test
> 
>
> Key: CASSANDRA-12491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12491
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Sean McCarthy
>Priority: Major
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1_Upgrade_current_2_2_x_To_indev_3_x/static_columns_paging_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/paging_test.py", line 
> 875, in static_columns_paging_test
> self.assertEqual([0] * 4 + [1] * 4 + [2] * 4 + [3] * 4, sorted([r.a for r 
> in results]))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "/usr/lib/python2.7/unittest/case.py", line 724, in assertSequenceEqual
> self.fail(msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "Lists differ: [0, 0, 0, 0, 1, 1, 1, 1, 2, 2,... != [0, 0, 0, 0, 1, 1, 1, 1, 
> 2, 3,...\n\nFirst differing element 9:\n2\n3\n\nFirst list contains 3 
> additional elements.\nFirst extra element 13:\n3\n\n- [0, 0, 0, 0, 1, 1, 1, 
> 1, 2, 2, 2, 2, 3, 3, 3, 3]\n? -\n\n+ [0, 
> 0, 0, 0, 1, 1, 1, 1, 2, 3, 3, 3, 3]
> {code}






[jira] [Resolved] (CASSANDRA-12477) BackgroundCompaction causes Node crash (OutOfMemoryError)

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-12477.
--
Resolution: Information Provided

> BackgroundCompaction causes Node crash (OutOfMemoryError)
> -
>
> Key: CASSANDRA-12477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Kuku1
>Priority: Major
> Attachments: debug.log, system.log, system.log
>
>
> After ingesting data, certain nodes of my cluster (2 out of 5) are not able 
> to restart because Compaction fails with the following exception.
> I was running a write-heavy ingestion before things started to break. The 
> data size was only 20 GB, but the ingestion speed was rather fast. I 
> ingested with the DataStax C* Java driver and used writeAsync to pump my 
> BoundStatements to the cluster. The ingestion client was running on a 
> different node connected via GBit LAN. 
> Cassandra then failed to restart on those nodes.
> I am using Cassandra 3.0.8. 
> I was using untouched parameters for the heap size in cassandra-env.sh. 
> After the nodes started failing to restart, I tried increasing MAX_JAVA_HEAP 
> to 36 GB and NEW_SIZE to 12 GB, but the memory is completely consumed and 
> then the exception is thrown. 
> {code}
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_91]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.8.0_91]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
> ~[na:1.8.0_91]
> at 
> org.apache.cassandra.utils.memory.BufferPool.allocate(BufferPool.java:108) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.access$1000(BufferPool.java:45) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool$LocalPool.allocate(BufferPool.java:387)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool$LocalPool.access$000(BufferPool.java:314)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.takeFromPool(BufferPool.java:120)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.get(BufferPool.java:92) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:87)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.access$100(CompressedRandomAccessReader.java:38)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader$Builder.createBuffer(CompressedRandomAccessReader.java:275)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:74)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:59)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader$Builder.build(CompressedRandomAccessReader.java:283)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createReader(CompressedSegmentedFile.java:145)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.SegmentedFile.createReader(SegmentedFile.java:133)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getFileDataInput(SSTableReader.java:1711)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.<init>(AbstractSSTableIterator.java:93)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.<init>(SSTableIterator.java:46)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.<init>(SSTableIterator.java:36)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:62)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:580)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:492)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> 

[jira] [Updated] (CASSANDRA-12432) Set ulimit for nproc in debian init script

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-12432:
-
Component/s: Packaging

> Set ulimit for nproc in debian init script
> --
>
> Key: CASSANDRA-12432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12432
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Jared Biel
>Priority: Minor
>
> While using Cassandra 2.2.7 (installed from official package) on Ubuntu 14.04 
> I noticed a warning on startup:
> {noformat}
> WARN  [main] 2016-08-10 21:53:53,219 SigarLibrary.java:174 - Cassandra server 
> running in degraded mode. Is swap disabled? : true,  Address space adequate? 
> : true,  nofile limit adequate? : true, nproc limit adequate? : false
> {noformat}
> I set about researching how that value is set and how to increase it. I found 
> the [Datastax documentation on recommended 
> settings|http://docs.datastax.com/en/landing_page/doc/landing_page/recommendedSettingsLinux.html]
>  and tried to increase the nproc limits according to that doc to no avail. I 
> eventually found a [stackoverflow 
> post|http://superuser.com/questions/454465/make-ulimits-work-with-start-stop-daemon]
>  that states that start-stop-daemon (which the C* init script uses) 
> doesn't/can't use the values specified in the limits.conf files.
> I solved this by adding a {{ulimit -p 32768}} entry to the init script below 
> the other two ulimit commands. Note that the flag is "-p" for dash (default 
> /bin/sh on Ubuntu), but the flag is "-u" on bash. As Debian has had [dash as 
> default|https://wiki.debian.org/Shell] since squeeze (2011-02-06), this 
> should be safe on most Debian based distros.
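The "nproc limit adequate? : false" warning reflects the process's nproc rlimit. As an illustrative check only (the actual fix is the `ulimit -p 32768` init-script entry described above; 32768 is the value from the DataStax recommended settings), the limit the server would observe can be inspected from Python's `resource` module:

```python
import resource

RECOMMENDED_NPROC = 32768  # value from the DataStax recommended settings

def nproc_limit_adequate(minimum=RECOMMENDED_NPROC):
    """Return True if the soft nproc limit meets the recommended minimum."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_NPROC)
    # An unlimited soft limit is reported as resource.RLIM_INFINITY.
    return soft == resource.RLIM_INFINITY or soft >= minimum

print(nproc_limit_adequate())
```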






[jira] [Updated] (CASSANDRA-12437) dtest failure in bootstrap_test.TestBootstrap.local_quorum_bootstrap_test

2018-11-18 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-12437:
-
Component/s: Testing

> dtest failure in bootstrap_test.TestBootstrap.local_quorum_bootstrap_test
> -
>
> Key: CASSANDRA-12437
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12437
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Craig Kodman
>Priority: Major
>  Labels: dtest, windows
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/281/testReport/bootstrap_test/TestBootstrap/local_quorum_bootstrap_test
> {code}
> Stacktrace
>   File "C:\tools\python2\lib\unittest\case.py", line 329, in run
> testMethod()
>   File 
> "D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\bootstrap_test.py",
>  line 389, in local_quorum_bootstrap_test
> 'ops(insert=1)', '-rate', 'threads=50'])
>   File "D:\jenkins\workspace\cassandra-2.2_dtest_win32\ccm\ccmlib\node.py", 
> line 1244, in stress
> return handle_external_tool_process(p, ['stress'] + stress_options)
>   File "D:\jenkins\workspace\cassandra-2.2_dtest_win32\ccm\ccmlib\node.py", 
> line 1955, in handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> 'Subprocess [\'stress\', \'user\', \'profile=d:\\temp\\2\\tmp8sf4da\', 
> \'n=2M\', \'no-warmup\', \'ops(insert=1)\', \'-rate\', \'threads=50\'] exited 
> with non-zero status; exit status: 1; \nstderr: Exception in thread "main" 
> java.io.IOError: java.io.FileNotFoundException: d:\\temp\\2\\tmp8sf4da (The 
> process cannot access the file because it is being used by another 
> process)\r\n\tat 
> org.apache.cassandra.stress.StressProfile.load(StressProfile.java:574)\r\n\tat
>  
> org.apache.cassandra.stress.settings.SettingsCommandUser.<init>(SettingsCommandUser.java:58)\r\n\tat
>  
> org.apache.cassandra.stress.settings.SettingsCommandUser.build(SettingsCommandUser.java:127)\r\n\tat
>  
> org.apache.cassandra.stress.settings.SettingsCommand.get(SettingsCommand.java:195)\r\n\tat
>  
> org.apache.cassandra.stress.settings.StressSettings.get(StressSettings.java:249)\r\n\tat
>  
> org.apache.cassandra.stress.settings.StressSettings.parse(StressSettings.java:220)\r\n\tat
>  org.apache.cassandra.stress.Stress.main(Stress.java:63)\r\nCaused by: 
> java.io.FileNotFoundException: d:\\temp\\2\\tmp8sf4da (The process cannot 
> access the file because it is being used by another process)\r\n\tat 
> java.io.FileInputStream.open0(Native Method)\r\n\tat 
> java.io.FileInputStream.open(FileInputStream.java:195)\r\n\tat 
> java.io.FileInputStream.<init>(FileInputStream.java:138)\r\n\tat 
> java.io.FileInputStream.<init>(FileInputStream.java:93)\r\n\tat 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)\r\n\tat
>  
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)\r\n\tat
>  java.net.URL.openStream(URL.java:1038)\r\n\tat 
> org.apache.cassandra.stress.StressProfile.load(StressProfile.java:560)\r\n\t...
>  6 more\r\n\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\2\\dtest-wsze0r\ndtest: DEBUG: Done setting configuration 
> options:\n{   \'initial_token\': None,\n\'num_tokens\': \'32\',\n
> \'phi_convict_threshold\': 5,\n\'start_rpc\': 
> \'true\'}\n- >> end captured logging << 
> -'
> {code}





