[jira] [Updated] (CASSANDRA-11474) cqlsh: COPY FROM should use regular inserts for single statement batches
[ https://issues.apache.org/jira/browse/CASSANDRA-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania updated CASSANDRA-11474:
---------------------------------
    Status: Patch Available  (was: In Progress)

> cqlsh: COPY FROM should use regular inserts for single statement batches
> ------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11474
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11474
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>            Reporter: Stefania
>            Assignee: Stefania
>              Labels: lhf
>             Fix For: 2.2.x, 3.0.x, 3.x
>
> I haven't reproduced it with a test yet but, from code inspection, if CQL
> rows are larger than {{batch_size_fail_threshold_in_kb}} and this parameter
> cannot be changed, then data import will fail.
> Users can control the batch size by setting MAXBATCHSIZE.
> If a batch contains a single statement, there is no need to use a batch and
> we should use normal inserts instead or, alternatively, we should skip the
> batch size check for unlogged batches with only one statement.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
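The proposed fix can be sketched in a few lines (a minimal illustration with invented names, not cqlsh's actual code): when a chunk of rows reduces to a single statement, send it as a plain INSERT, so the server-side {{batch_size_fail_threshold_in_kb}} check never applies.

```python
# Hypothetical sketch of the proposed COPY FROM behavior (names invented).
# A single-row chunk becomes a plain INSERT; multi-row chunks still use an
# unlogged batch, which is subject to the server-side batch size check.

def make_statement(insert_cql, rows):
    """Return (kind, cql) for one chunk of parsed CSV rows."""
    statements = [insert_cql % row for row in rows]
    if len(statements) == 1:
        # Single statement: no batch wrapper, so the batch size
        # threshold cannot reject it regardless of row size.
        return ("INSERT", statements[0])
    return ("UNLOGGED BATCH",
            "BEGIN UNLOGGED BATCH\n%s\nAPPLY BATCH;" % "\n".join(statements))
```

Equivalently, the alternative mentioned in the description would keep the batch wrapper but skip the size check when an unlogged batch holds exactly one statement.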
[jira] [Commented] (CASSANDRA-11474) cqlsh: COPY FROM should use regular inserts for single statement batches
[ https://issues.apache.org/jira/browse/CASSANDRA-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229696#comment-15229696 ]

Stefania commented on CASSANDRA-11474:
--------------------------------------
CI looks good, this is ready for review.
[jira] [Commented] (CASSANDRA-11513) Result set is not unique on primary key (cql)
[ https://issues.apache.org/jira/browse/CASSANDRA-11513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229605#comment-15229605 ]

Joel Knighton commented on CASSANDRA-11513:
-------------------------------------------
I looked into this because it looked interesting.

CASSANDRA-9986 removed the SliceableUnfilteredRowIterator. In
{{SinglePartitionReadCommand.queryMemtablesAndSSTablesInTimestampOrder}} and
also {{queryMemtablesAndDiskInternal}}, it switched from using
{{ClusteringIndexFilter.filter}} to {{filter.getSlices}}, handing these slices
to sstable.iterator. This caused the problem.

Previously, if after reduceFilter we still needed to find a static row and had
no clusterings, the filter would make no attempt to read farther than the
static row. Now, if the clusterings are empty, {{getSlices}} produces no
slices, so when the sstable builds an iterator it never sets a slice for the
reader and instead reads the whole partition.

It seems to me that AbstractSSTableIterator doesn't correctly handle the case
of empty slices in general, and that we can reproduce this with a condition
like {{a > 7 AND a < 5}}. I also think that the index was unrelated and only
caused a flush to disk.

(There's probably some imprecision here, but that should get someone most of
the way there.)
> Result set is not unique on primary key (cql)
> ---------------------------------------------
>
>                 Key: CASSANDRA-11513
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11513
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Tianshi Wang
>
> [cqlsh 5.0.1 | Cassandra 3.4 | CQL spec 3.4.0 | Native protocol v4]
> Run the following:
> {code}
> drop table if exists test0;
> CREATE TABLE test0 (
>     pk int,
>     a int,
>     b text,
>     s text static,
>     PRIMARY KEY (pk, a)
> );
> insert into test0 (pk,a,b,s) values (0,1,'b1','hello b1');
> insert into test0 (pk,a,b,s) values (0,2,'b2','hello b2');
> insert into test0 (pk,a,b,s) values (0,3,'b3','hello b3');
> create index on test0 (b);
> insert into test0 (pk,a,b,s) values (0,2,'b2 again','b2 again');
> {code}
> Now select one record based on the primary key; we get all three records.
> {code}
> cqlsh:ops> select * from test0 where pk=0 and a=2;
>
>  pk | a | s        | b
> ----+---+----------+----------
>   0 | 1 | b2 again | b1
>   0 | 2 | b2 again | b2 again
>   0 | 3 | b2 again | b3
> {code}
> {code}
> cqlsh:ops> desc test0;
>
> CREATE TABLE ops.test0 (
>     pk int,
>     a int,
>     b text,
>     s text static,
>     PRIMARY KEY (pk, a)
> ) WITH CLUSTERING ORDER BY (a ASC)
>     AND bloom_filter_fp_chance = 0.01
>     AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>     AND comment = ''
>     AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
>     AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
>     AND crc_check_chance = 1.0
>     AND dclocal_read_repair_chance = 0.1
>     AND default_time_to_live = 0
>     AND gc_grace_seconds = 864000
>     AND max_index_interval = 2048
>     AND memtable_flush_period_in_ms = 0
>     AND min_index_interval = 128
>     AND read_repair_chance = 0.0
>     AND speculative_retry = '99PERCENTILE';
>
> CREATE INDEX test0_b_idx ON ops.test0 (b);
> {code}
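The hypothesis in the comment above can be modeled in a few lines (a toy sketch, not Cassandra's actual iterator classes): a contradictory restriction such as {{a > 7 AND a < 5}} yields an empty slice list, and a reader that treats "no slices" as "no restriction" scans the whole partition instead of returning nothing.

```python
# Toy model of the suspected bug (invented names; not Cassandra's classes).
# Contradictory clustering bounds produce an empty slice list; a reader that
# treats "no slices" as "unrestricted" returns every row in the partition.

def get_slices(lower, upper):
    """Slices for 'a > lower AND a < upper'; contradictory bounds -> []."""
    return [(lower, upper)] if lower < upper else []

def read_partition(rows, slices, buggy):
    if not slices and buggy:
        # bug: whole-partition scan instead of an empty result
        return list(rows)
    return [r for r in rows if any(lo < r < hi for lo, hi in slices)]
```

With the empty-slice case handled explicitly, the same query correctly returns no rows.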
[jira] [Commented] (CASSANDRA-9259) Bulk Reading from Cassandra
[ https://issues.apache.org/jira/browse/CASSANDRA-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229592#comment-15229592 ]

Stefania commented on CASSANDRA-9259:
-------------------------------------
Thanks. I've created CASSANDRA-11520 and CASSANDRA-11521. I should also have
added the link to the POC patch: https://github.com/stef1927/cassandra/commits/9259.

> Bulk Reading from Cassandra
> ---------------------------
>
>                 Key: CASSANDRA-9259
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9259
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Compaction, CQL, Local Write-Read Paths, Streaming and Messaging, Testing
>            Reporter: Brian Hess
>            Assignee: Stefania
>            Priority: Critical
>             Fix For: 3.x
>
>         Attachments: bulk-read-benchmark.1.html, bulk-read-jfr-profiles.1.tar.gz, bulk-read-jfr-profiles.2.tar.gz
>
> This ticket follows on from the 2015 NGCC. It is designed to be a place for
> discussing and designing an approach to bulk reading.
> The goal is to have a bulk reading path for Cassandra: a path optimized to
> grab a large portion of the data for a table (potentially all of it). This is
> a core element in the Spark integration with Cassandra, and the speed at
> which Cassandra can deliver bulk data to Spark limits the performance of
> Spark-plus-Cassandra operations. This is especially important as Cassandra
> will (likely) leverage Spark for internal operations (for example
> CASSANDRA-8234).
> The core CQL to consider is the following:
> {code}
> SELECT a, b, c FROM myKs.myTable WHERE Token(partitionKey) > X AND Token(partitionKey) <= Y
> {code}
> Here, we choose X and Y to be contained within one token range (perhaps
> considering the primary range of a node without vnodes, for example). This
> query pushes 50K-100K rows/sec, which is not very fast for bulk operations
> via Spark (or other processing frameworks - ETL, etc). There are a few
> causes (e.g., inefficient paging).
> There are a few approaches that could be considered.
> First, we consider a new "streaming compaction" approach. The key
> observation here is that a bulk read from Cassandra is a lot like a major
> compaction, though instead of outputting a new SSTable we would output CQL
> rows to a stream/socket/etc. This would be similar to a CompactionTask, but
> would strip out some unnecessary work (e.g., some of the indexing).
> Predicates and projections could also be encapsulated in this new
> "StreamingCompactionTask", for example.
> Another approach would be an alternate storage format. For example, we might
> employ Parquet (just as an example) to store the same data as in the primary
> Cassandra storage (aka SSTables). This is akin to global indexes (an
> alternate storage of the same data optimized for a particular query). Then
> Cassandra can choose to leverage this alternate storage for particular CQL
> queries (e.g., range scans).
> These are just two suggestions to get the conversation going.
> One thing to note is that it will be useful to have this storage segregated
> by token range, so that when you extract via these mechanisms you do not get
> replication-factor numbers of copies of the data. That will certainly be an
> issue for some Spark operations (e.g., counting). Thus, we will want
> per-token-range storage (even for single disks), so this will likely
> leverage CASSANDRA-6696 (though we'll want to also consider the single-disk
> case).
> It is also worth discussing what the success criteria are here. This path is
> unlikely to match EDW or HDFS performance (though that is still a good
> goal), but being within some percentage of that performance should count as
> success - for example, taking at most 2x as long as bulk operations on HDFS
> with a similar node count/size/etc.
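The token-range scan in the description can be sketched as follows (a hypothetical helper, invented for illustration and not part of Cassandra or its drivers): split the full Murmur3 token ring into non-overlapping (X, Y] ranges, so each partition is fetched exactly once regardless of replication factor.

```python
# Hypothetical sketch: split the Murmur3 token ring into non-overlapping
# (X, Y] ranges and emit one bulk-read query per range. Names are invented;
# this is not driver or Cassandra code.

MIN_TOKEN = -2**63          # Murmur3Partitioner token range
MAX_TOKEN = 2**63 - 1

def token_range_queries(table, columns, num_splits):
    step = (MAX_TOKEN - MIN_TOKEN) // num_splits
    bounds = [MIN_TOKEN + i * step for i in range(num_splits)] + [MAX_TOKEN]
    cols = ", ".join(columns)
    return [
        "SELECT %s FROM %s WHERE Token(partitionKey) > %d "
        "AND Token(partitionKey) <= %d" % (cols, table, lo, hi)
        for lo, hi in zip(bounds, bounds[1:])
    ]
```

Because the ranges are half-open and adjacent, their union covers the ring with no overlap, which is exactly the property needed to avoid RF-many copies in a bulk extract.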
[jira] [Created] (CASSANDRA-11521) Implement streaming for bulk read requests
Stefania created CASSANDRA-11521:
------------------------------------

             Summary: Implement streaming for bulk read requests
                 Key: CASSANDRA-11521
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
             Project: Cassandra
          Issue Type: Sub-task
          Components: Local Write-Read Paths
            Reporter: Stefania
            Assignee: Stefania
             Fix For: 3.x

Allow clients to stream data from a C* host, bypassing the coordination layer
and eliminating the need to query individual pages one by one.
[jira] [Created] (CASSANDRA-11520) Implement optimized local read path for CL.ONE
Stefania created CASSANDRA-11520:
------------------------------------

             Summary: Implement optimized local read path for CL.ONE
                 Key: CASSANDRA-11520
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11520
             Project: Cassandra
          Issue Type: Sub-task
          Components: CQL, Local Write-Read Paths
            Reporter: Stefania
            Assignee: Stefania

Add an option to the CQL SELECT statement to bypass the coordination layer
when reading local data at CL.ONE.
[jira] [Updated] (CASSANDRA-11437) Make number of cores used for copy tasks visible
[ https://issues.apache.org/jira/browse/CASSANDRA-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania updated CASSANDRA-11437:
---------------------------------
         Reviewer: Jim Witschey
    Fix Version/s: 3.x
           Status: Patch Available  (was: In Progress)

> Make number of cores used for copy tasks visible
> ------------------------------------------------
>
>                 Key: CASSANDRA-11437
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11437
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Testing
>            Reporter: Jim Witschey
>            Assignee: Stefania
>            Priority: Minor
>              Labels: lhf
>             Fix For: 3.x
>
> As per this conversation with [~Stefania]:
> https://github.com/riptano/cassandra-dtest/pull/869#issuecomment-200597829
> we don't currently have a way to verify that the test environment variable
> {{CQLSH_COPY_TEST_NUM_CORES}} actually affects the behavior of {{COPY}} in
> the intended way. If this were added, we could make our tests of the one-core
> edge case a little stricter.
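The requested behavior could look roughly like this (a hedged sketch with invented names and an assumed worker-count formula, not cqlsh's actual code): honor the test-only {{CQLSH_COPY_TEST_NUM_CORES}} override and report the resulting worker count, so dtests can assert that the one-core edge case really ran with one worker.

```python
# Hedged sketch (invented names): honor the CQLSH_COPY_TEST_NUM_CORES test
# override and make the chosen worker count visible via a printed message.
# The "cores - 1" formula is an assumption for illustration.

import os

def copy_num_processes(detected_cores, printmsg=print):
    override = os.environ.get("CQLSH_COPY_TEST_NUM_CORES")
    cores = int(override) if override else detected_cores
    # keep one core free for the parent feeder process, but use at least one
    num_processes = max(1, cores - 1)
    printmsg("Using %d child processes" % num_processes)
    return num_processes
```

A dtest could then grep the emitted message instead of inferring the worker count indirectly.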
[jira] [Updated] (CASSANDRA-11437) Make number of cores used for copy tasks visible
[ https://issues.apache.org/jira/browse/CASSANDRA-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania updated CASSANDRA-11437:
---------------------------------
    Component/s: Testing
[jira] [Commented] (CASSANDRA-11437) Make number of cores used for copy tasks visible
[ https://issues.apache.org/jira/browse/CASSANDRA-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229584#comment-15229584 ]

Stefania commented on CASSANDRA-11437:
--------------------------------------
Here is the C* patch for trunk:

|[patch|https://github.com/stef1927/cassandra/commits/11437]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11437-dtest/]|

Here is the dtest pull request: https://github.com/riptano/cassandra-dtest/pull/917

[~mambocab] would you like to be the reviewer?
[jira] [Commented] (CASSANDRA-11474) cqlsh: COPY FROM should use regular inserts for single statement batches
[ https://issues.apache.org/jira/browse/CASSANDRA-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229555#comment-15229555 ]

Stefania commented on CASSANDRA-11474:
--------------------------------------
Patches and CI available here:

||2.2||3.0||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11474-2.2]|[patch|https://github.com/stef1927/cassandra/commits/11474-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11474]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11474-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11474-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11474-dtest/]|

There is a conflict from 2.2 to 3.0, whilst the 3.0 patch merges cleanly into
trunk.

I should also note that this is a pretty serious limitation of COPY FROM,
albeit very unlikely to occur. However, we don't need the fix in 2.1 because
{{batch_size_fail_threshold_in_kb}} is only available in 2.2+.

Additional dtests to reproduce the problem are available
[here|https://github.com/stef1927/cassandra-dtest/tree/11474].
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuki Morishita updated CASSANDRA-8720:
--------------------------------------
    Component/s: Tools

> Provide tools for finding wide row/partition keys
> -------------------------------------------------
>
>                 Key: CASSANDRA-8720
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8720
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Tools
>            Reporter: J.B. Langston
>             Fix For: 2.1.x, 2.2.x
>
>         Attachments: 8720.txt
>
> Multiple users have requested some sort of tool to help identify wide row
> keys. They get into a situation where they know a wide row/partition has been
> inserted and it's causing problems for them, but they have no idea what the
> row key is in order to remove it.
> Maintaining the widest row key currently encountered and displaying it in
> cfstats would be one possible approach.
> Another would be an offline tool (possibly an enhancement to sstablekeys) to
> show the number of columns/bytes per key in each sstable. If a tool to
> aggregate the information at a CF level could be provided, that would be a
> bonus, but it shouldn't be too hard to write a script wrapper to aggregate
> them if not.
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams updated CASSANDRA-8720:
----------------------------------------
    Fix Version/s: 2.2.x
                   2.1.x
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams updated CASSANDRA-8720:
----------------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (CASSANDRA-11340) Heavy read activity on system_auth tables can cause apparent livelock
[ https://issues.apache.org/jira/browse/CASSANDRA-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229482#comment-15229482 ]

Jeff Jirsa commented on CASSANDRA-11340:
----------------------------------------
[~rhatch] - persists for at least an hour; we haven't left it running longer
than that.

> Heavy read activity on system_auth tables can cause apparent livelock
> ---------------------------------------------------------------------
>
>                 Key: CASSANDRA-11340
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11340
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Jeff Jirsa
>            Assignee: Aleksey Yeschenko
>
> Reproduced in at least 2.1.9.
> It appears possible for queries against system_auth tables to trigger
> speculative retry, which causes auth to block on traffic going off node. In
> some cases, it appears possible for threads to become deadlocked, causing
> load on the nodes to increase sharply. This happens even in clusters with RF
> of system_auth == N, as all requests being served locally puts the bar for
> 99% SR pretty low.
> Incomplete stack trace below, but we haven't yet figured out what exactly is
> blocking:
> {code}
> Thread 82291: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(long) @bci=11, line=338 (Compiled frame)
>  - org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUntil(long) @bci=28, line=307 (Compiled frame)
>  - org.apache.cassandra.utils.concurrent.SimpleCondition.await(long, java.util.concurrent.TimeUnit) @bci=76, line=63 (Compiled frame)
>  - org.apache.cassandra.service.ReadCallback.await(long, java.util.concurrent.TimeUnit) @bci=25, line=92 (Compiled frame)
>  - org.apache.cassandra.service.AbstractReadExecutor$SpeculatingReadExecutor.maybeTryAdditionalReplicas() @bci=39, line=281 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.fetchRows(java.util.List, org.apache.cassandra.db.ConsistencyLevel) @bci=175, line=1338 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.readRegular(java.util.List, org.apache.cassandra.db.ConsistencyLevel) @bci=9, line=1274 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.read(java.util.List, org.apache.cassandra.db.ConsistencyLevel, org.apache.cassandra.service.ClientState) @bci=57, line=1199 (Compiled frame)
>  - org.apache.cassandra.cql3.statements.SelectStatement.execute(org.apache.cassandra.service.pager.Pageable, org.apache.cassandra.cql3.QueryOptions, int, long, org.apache.cassandra.service.QueryState) @bci=35, line=272 (Compiled frame)
>  - org.apache.cassandra.cql3.statements.SelectStatement.execute(org.apache.cassandra.service.QueryState, org.apache.cassandra.cql3.QueryOptions) @bci=105, line=224 (Compiled frame)
>  - org.apache.cassandra.auth.Auth.selectUser(java.lang.String) @bci=27, line=265 (Compiled frame)
>  - org.apache.cassandra.auth.Auth.isExistingUser(java.lang.String) @bci=1, line=86 (Compiled frame)
>  - org.apache.cassandra.service.ClientState.login(org.apache.cassandra.auth.AuthenticatedUser) @bci=11, line=206 (Compiled frame)
>  - org.apache.cassandra.transport.messages.AuthResponse.execute(org.apache.cassandra.service.QueryState) @bci=58, line=82 (Compiled frame)
>  - org.apache.cassandra.transport.Message$Dispatcher.channelRead0(io.netty.channel.ChannelHandlerContext, org.apache.cassandra.transport.Message$Request) @bci=75, line=439 (Compiled frame)
>  - org.apache.cassandra.transport.Message$Dispatcher.channelRead0(io.netty.channel.ChannelHandlerContext, java.lang.Object) @bci=6, line=335 (Compiled frame)
>  - io.netty.channel.SimpleChannelInboundHandler.channelRead(io.netty.channel.ChannelHandlerContext, java.lang.Object) @bci=17, line=105 (Compiled frame)
>  - io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(java.lang.Object) @bci=9, line=333 (Compiled frame)
>  - io.netty.channel.AbstractChannelHandlerContext.access$700(io.netty.channel.AbstractChannelHandlerContext, java.lang.Object) @bci=2, line=32 (Compiled frame)
>  - io.netty.channel.AbstractChannelHandlerContext$8.run() @bci=8, line=324 (Compiled frame)
>  - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=511 (Compiled frame)
>  - org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run() @bci=5, line=164 (Compiled frame)
>  - org.apache.cassandra.concurrent.SEPWorker.run() @bci=87, line=105 (Interpreted frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> {code}
> In a cluster with many connected clients (potentially thousands), a
> reconnection flood (for example, restarting all at once) is likely to trigger
> this bug. However, it is unlikely to be seen
[jira] [Assigned] (CASSANDRA-11470) dtest failure in materialized_views_test.TestMaterializedViews.base_replica_repair_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania reassigned CASSANDRA-11470:
------------------------------------
    Assignee: Stefania

> dtest failure in materialized_views_test.TestMaterializedViews.base_replica_repair_test
> ---------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11470
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11470
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Philip Thompson
>            Assignee: Stefania
>              Labels: dtest
>             Fix For: 3.x
>
>         Attachments: node1.log, node2.log, node2_debug.log, node3.log, node3_debug.log
>
> base_replica_repair_test has failed on trunk with the following exception in
> the log of node2:
> {code}
> ERROR [main] 2016-03-31 08:48:46,949 CassandraDaemon.java:708 - Exception encountered during startup
> java.lang.RuntimeException: Failed to list files in /mnt/tmp/dtest-du964e/test/node2/data0/system_schema/views-9786ac1cdd583201a7cdad556410c985
>         at org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:53) ~[main/:na]
>         at org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:547) ~[main/:na]
>         at org.apache.cassandra.db.Directories$SSTableLister.filter(Directories.java:725) ~[main/:na]
>         at org.apache.cassandra.db.Directories$SSTableLister.list(Directories.java:690) ~[main/:na]
>         at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:567) ~[main/:na]
>         at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:555) ~[main/:na]
>         at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:383) ~[main/:na]
>         at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:320) ~[main/:na]
>         at org.apache.cassandra.db.Keyspace.open(Keyspace.java:130) ~[main/:na]
>         at org.apache.cassandra.db.Keyspace.open(Keyspace.java:107) ~[main/:na]
>         at org.apache.cassandra.cql3.restrictions.StatementRestrictions.<init>(StatementRestrictions.java:139) ~[main/:na]
>         at org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:864) ~[main/:na]
>         at org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:811) ~[main/:na]
>         at org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:799) ~[main/:na]
>         at org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:505) ~[main/:na]
>         at org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:242) ~[main/:na]
>         at org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:286) ~[main/:na]
>         at org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:294) ~[main/:na]
>         at org.apache.cassandra.schema.SchemaKeyspace.query(SchemaKeyspace.java:1246) ~[main/:na]
>         at org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:875) ~[main/:na]
>         at org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867) ~[main/:na]
>         at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) ~[main/:na]
>         at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) ~[main/:na]
>         at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) [main/:na]
>         at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562) [main/:na]
>         at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:691) [main/:na]
> Caused by: java.lang.RuntimeException: Failed to list directory files in /mnt/tmp/dtest-du964e/test/node2/data0/system_schema/views-9786ac1cdd583201a7cdad556410c985, inconsistent disk state for transaction [ma_txn_flush_58db56b0-f71d-11e5-bf68-03a01adb9f11.log in /mnt/tmp/dtest-du964e/test/node2/data0/system_schema/views-9786ac1cdd583201a7cdad556410c985]
>         at org.apache.cassandra.db.lifecycle.LogAwareFileLister.classifyFiles(LogAwareFileLister.java:149) ~[main/:na]
>         at org.apache.cassandra.db.lifecycle.LogAwareFileLister.classifyFiles(LogAwareFileLister.java:103) ~[main/:na]
>         at org.apache.cassandra.db.lifecycle.LogAwareFileLister$$Lambda$48/35984028.accept(Unknown Source) ~[na:na]
>         at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) ~[na:1.8.0_45]
>         at
[jira] [Commented] (CASSANDRA-10624) Support UDT in CQLSSTableWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229474#comment-15229474 ]

Stefania commented on CASSANDRA-10624:
--------------------------------------
LGTM, this is ready to commit. The failures on cassci are unrelated: the same
failures have occurred on trunk as well, and all failing tests pass locally.
Great job!

> Support UDT in CQLSSTableWriter
> -------------------------------
>
>                 Key: CASSANDRA-10624
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10624
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: CQL
>            Reporter: Sylvain Lebresne
>            Assignee: Alex Petrov
>             Fix For: 3.x
>
>         Attachments: 0001-Add-support-for-UDTs-to-CQLSStableWriter.patch, 0001-Support-UDTs-in-CQLSStableWriterV2.patch
>
> As far as I can tell, there is no way to use a UDT with {{CQLSSTableWriter}},
> since there is no way to declare it and thus {{CQLSSTableWriter.Builder}}
> knows of no UDT when parsing the {{CREATE TABLE}} statement passed.
> In terms of API, I think the simplest would be to allow passing types to the
> builder in the same way we pass the table definition. So something like:
> {noformat}
> String type = "CREATE TYPE myKs.vertex (x int, y int, z int)";
> String schema = "CREATE TABLE myKs.myTable ("
>               + "  k int PRIMARY KEY,"
>               + "  s set<frozen<vertex>>"
>               + ")";
> String insert = ...;
> CQLSSTableWriter writer = CQLSSTableWriter.builder()
>                                           .inDirectory("path/to/directory")
>                                           .withType(type)
>                                           .forTable(schema)
>                                           .using(insert).build();
> {noformat}
> I'll note that, implementation-wise, this might be a bit simpler after the
> changes of CASSANDRA-10365 (as it makes it easy to pass specific types
> during the preparation of the create statement).
[jira] [Updated] (CASSANDRA-10624) Support UDT in CQLSSTableWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania updated CASSANDRA-10624:
---------------------------------
    Status: Ready to Commit  (was: Patch Available)
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8720: Attachment: (was: 8720.txt) > Provide tools for finding wide row/partition keys > - > > Key: CASSANDRA-8720 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8720 > Project: Cassandra > Issue Type: Improvement >Reporter: J.B. Langston > Attachments: 8720.txt > > > Multiple users have requested some sort of tool to help identify wide row > keys. They get into a situation where they know a wide row/partition has been > inserted and it's causing problems for them but they have no idea what the > row key is in order to remove it. > Maintaining the widest row key currently encountered and displaying it in > cfstats would be one possible approach. > Another would be an offline tool (possibly an enhancement to sstablekeys) to > show the number of columns/bytes per key in each sstable. If a tool to > aggregate the information at a CF-level could be provided that would be a > bonus, but it shouldn't be too hard to write a script wrapper to aggregate > them if not. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229293#comment-15229293 ] Brandon Williams edited comment on CASSANDRA-8720 at 4/7/16 12:51 AM: -- Here's a simple patch that suffixes sstablekeys' output with the size of the partition in that sstable. Not ideal, but it's a building block that can get you there. was (Author: brandon.williams): Here's a simple patch that prefixes sstablekeys' output with the size of the partition in that sstable (prefix because you probably want to pipe this to sort.) Not ideal, but it's a building block that can get you there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8720: Attachment: 8720.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8720: Attachment: (was: 8720.txt) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8720: Attachment: 8720.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8720: Attachment: (was: 8720.txt) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8720: Attachment: 8720.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11340) Heavy read activity on system_auth tables can cause apparent livelock
[ https://issues.apache.org/jira/browse/CASSANDRA-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229370#comment-15229370 ] Russ Hatch commented on CASSANDRA-11340: [~jjirsa] Does the elevated load persist indefinitely, or eventually settle back down? > Heavy read activity on system_auth tables can cause apparent livelock > - > > Key: CASSANDRA-11340 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11340 > Project: Cassandra > Issue Type: Bug >Reporter: Jeff Jirsa >Assignee: Aleksey Yeschenko > > Reproduced in at least 2.1.9. > It appears possible for queries against system_auth tables to trigger > speculative retry, which causes auth to block on traffic going off node. In > some cases, it appears possible for threads to become deadlocked, causing > load on the nodes to increase sharply. This happens even in clusters with RF > of system_auth == N, as all requests being served locally puts the bar for > 99% SR pretty low. > Incomplete stack trace below, but we haven't yet figured out what exactly is > blocking: > {code} > Thread 82291: (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information > may be imprecise) > - java.util.concurrent.locks.LockSupport.parkNanos(long) @bci=11, line=338 > (Compiled frame) > - > org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUntil(long) > @bci=28, line=307 (Compiled frame) > - org.apache.cassandra.utils.concurrent.SimpleCondition.await(long, > java.util.concurrent.TimeUnit) @bci=76, line=63 (Compiled frame) > - org.apache.cassandra.service.ReadCallback.await(long, > java.util.concurrent.TimeUnit) @bci=25, line=92 (Compiled frame) > - > org.apache.cassandra.service.AbstractReadExecutor$SpeculatingReadExecutor.maybeTryAdditionalReplicas() > @bci=39, line=281 (Compiled frame) > - org.apache.cassandra.service.StorageProxy.fetchRows(java.util.List, > org.apache.cassandra.db.ConsistencyLevel) @bci=175, line=1338 (Compiled frame) > - 
org.apache.cassandra.service.StorageProxy.readRegular(java.util.List, > org.apache.cassandra.db.ConsistencyLevel) @bci=9, line=1274 (Compiled frame) > - org.apache.cassandra.service.StorageProxy.read(java.util.List, > org.apache.cassandra.db.ConsistencyLevel, > org.apache.cassandra.service.ClientState) @bci=57, line=1199 (Compiled frame) > - > org.apache.cassandra.cql3.statements.SelectStatement.execute(org.apache.cassandra.service.pager.Pageable, > org.apache.cassandra.cql3.QueryOptions, int, long, > org.apache.cassandra.service.QueryState) @bci=35, line=272 (Compiled frame) > - > org.apache.cassandra.cql3.statements.SelectStatement.execute(org.apache.cassandra.service.QueryState, > org.apache.cassandra.cql3.QueryOptions) @bci=105, line=224 (Compiled frame) > - org.apache.cassandra.auth.Auth.selectUser(java.lang.String) @bci=27, > line=265 (Compiled frame) > - org.apache.cassandra.auth.Auth.isExistingUser(java.lang.String) @bci=1, > line=86 (Compiled frame) > - > org.apache.cassandra.service.ClientState.login(org.apache.cassandra.auth.AuthenticatedUser) > @bci=11, line=206 (Compiled frame) > - > org.apache.cassandra.transport.messages.AuthResponse.execute(org.apache.cassandra.service.QueryState) > @bci=58, line=82 (Compiled frame) > - > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(io.netty.channel.ChannelHandlerContext, > org.apache.cassandra.transport.Message$Request) @bci=75, line=439 (Compiled > frame) > - > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(io.netty.channel.ChannelHandlerContext, > java.lang.Object) @bci=6, line=335 (Compiled frame) > - > io.netty.channel.SimpleChannelInboundHandler.channelRead(io.netty.channel.ChannelHandlerContext, > java.lang.Object) @bci=17, line=105 (Compiled frame) > - > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(java.lang.Object) > @bci=9, line=333 (Compiled frame) > - > 
io.netty.channel.AbstractChannelHandlerContext.access$700(io.netty.channel.AbstractChannelHandlerContext, > java.lang.Object) @bci=2, line=32 (Compiled frame) > - io.netty.channel.AbstractChannelHandlerContext$8.run() @bci=8, line=324 > (Compiled frame) > - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=511 > (Compiled frame) > - > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run() > @bci=5, line=164 (Compiled frame) > - org.apache.cassandra.concurrent.SEPWorker.run() @bci=87, line=105 > (Interpreted frame) > - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame) > {code} > In a cluster with many connected clients (potentially thousands), a > reconnection flood (for example, restarting all at once) is likely to trigger > this bug. However, it is unlikely to be seen in
[jira] [Comment Edited] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229293#comment-15229293 ] Brandon Williams edited comment on CASSANDRA-8720 at 4/6/16 11:30 PM: -- Here's a simple patch that prefixes sstablekeys' output with the size of the partition in that sstable (prefix because you probably want to pipe this to sort.) Not ideal, but it's a building block that can get you there. was (Author: brandon.williams): Here's a simple patch that prefixes sstablekeys' output with the size of the partition (prefix because you probably want to pipe this to sort.) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
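With a patched sstablekeys emitting the partition size alongside each key (the exact position of the size field varied between revisions of the patch), the widest partitions fall out of a standard sort. The pipeline below fakes such output with made-up keys and sizes:

```shell
# Simulate patched sstablekeys output: "<key> <partition-size-in-bytes>" per line,
# then sort numerically on the size field and keep the widest partition.
printf '%s\n' 'key1 1024' 'key2 10485760' 'key3 2048' \
  | sort -k2,2 -rn \
  | head -n 1
```

Aggregating across the sstables of a column family is then just a matter of concatenating each sstable's output before the sort.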
[jira] [Updated] (CASSANDRA-11519) Add support for IBM POWER
[ https://issues.apache.org/jira/browse/CASSANDRA-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rei Odaira updated CASSANDRA-11519: --- Status: Patch Available (was: Open) > Add support for IBM POWER > - > > Key: CASSANDRA-11519 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11519 > Project: Cassandra > Issue Type: Improvement > Components: Core > Environment: POWER architecture >Reporter: Rei Odaira >Priority: Minor > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 11519-2.1.txt, 11519-3.0.txt > > > Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in > org.apache.cassandra.utils.FastByteOperations, > org.apache.cassandra.utils.memory.MemoryUtil, and > org.apache.cassandra.io.util.Memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11519) Add support for IBM POWER
[ https://issues.apache.org/jira/browse/CASSANDRA-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rei Odaira updated CASSANDRA-11519: --- Attachment: 11519-2.1.txt 11519-3.0.txt Description: Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in org.apache.cassandra.utils.FastByteOperations, org.apache.cassandra.utils.memory.MemoryUtil, and org.apache.cassandra.io.util.Memory. (was: Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in org.apache.cassandra.utils.FastByteOperations, org.apache.cassandra.utils.memory.MemoryUtil, and org.apache.cassandra.io.util.Memory. Will provide patches soon.) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
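The patches target byte-order-sensitive Java classes; the distinction that matters is that ppc and ppc64 are big-endian while ppc64le is little-endian. A sketch of that mapping (illustrative only — the actual patch branches on Java's os.arch system property inside the classes named above):

```shell
# Map a machine architecture string to the byte order low-level memory code
# must assume. Hypothetical helper, not part of the Cassandra patch itself.
byte_order() {
  case "$1" in
    ppc64le)       echo "little-endian" ;;  # POWER running little-endian Linux
    ppc|ppc64)     echo "big-endian"    ;;  # classic big-endian POWER
    x86_64|amd64)  echo "little-endian" ;;
    *)             echo "unknown"       ;;
  esac
}

byte_order ppc64le   # little-endian
byte_order ppc64     # big-endian
```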
[jira] [Commented] (CASSANDRA-11473) Clustering column value is zeroed out in some query results
[ https://issues.apache.org/jira/browse/CASSANDRA-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229307#comment-15229307 ] Jason Kania commented on CASSANDRA-11473: - Just to add to the comment, we always set the value of that timestamp to zero or set a value that is current, as in 2015-2016, so the value of the column makes no sense from our own understanding of our possible data values. > Clustering column value is zeroed out in some query results > --- > > Key: CASSANDRA-11473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11473 > Project: Cassandra > Issue Type: Bug > Environment: debian jessie patch current with Cassandra 3.0.4 >Reporter: Jason Kania >Assignee: Tyler Hobbs > > As per a discussion on the mailing list, > http://www.mail-archive.com/user@cassandra.apache.org/msg46902.html, we are > encountering inconsistent query results when the following query is run: > {noformat} > select "subscriberId","sensorUnitId","sensorId","time" from > "sensorReadingIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND > "sensorId"=0 ORDER BY "time" LIMIT 10; > {noformat} > Invalid Query Results > {noformat}
> subscriberId  sensorUnitId  sensorId  time
> JASKAN        0             0         2015-05-24 2:09
> JASKAN        0             0         1969-12-31 19:00
> JASKAN        0             0         2016-01-21 2:10
> JASKAN        0             0         2016-01-21 2:10
> JASKAN        0             0         2016-01-21 2:10
> JASKAN        0             0         2016-01-21 2:11
> JASKAN        0             0         2016-01-21 2:22
> JASKAN        0             0         2016-01-21 2:22
> JASKAN        0             0         2016-01-21 2:22
> JASKAN        0             0         2016-01-21 2:22
> {noformat} > Valid Query Results > {noformat}
> subscriberId  sensorUnitId  sensorId  time
> JASKAN        0             0         2015-05-24 2:09
> JASKAN        0             0         2015-05-24 2:09
> JASKAN        0             0         2015-05-24 2:10
> JASKAN        0             0         2015-05-24 2:10
> JASKAN        0             0         2015-05-24 2:10
> JASKAN        0             0         2015-05-24 2:10
> JASKAN        0             0         2015-05-24 2:11
> JASKAN        0             0         2015-05-24 2:13
> JASKAN        0             0         2015-05-24 2:13
> JASKAN        0             0         2015-05-24 2:14
> {noformat} > Running the following yields no rows, indicating that the 1969... timestamp is > invalid. 
> {noformat} > select "subscriberId","sensorUnitId","sensorId","time" FROM > "edgeTransitionIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND > "sensorId"=0 and time='1969-12-31 19:00:00-0500'; > {noformat} > The schema is as follows: > {noformat} > CREATE TABLE sensorReading."sensorReadingIndex" ( > "subscriberId" text, > "sensorUnitId" int, > "sensorId" int, > time timestamp, > "classId" int, > correlation float, > PRIMARY KEY (("subscriberId", "sensorUnitId", "sensorId"), time) > ) WITH CLUSTERING ORDER BY (time ASC) > AND bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > CREATE INDEX classSecondaryIndex ON sensorReading."sensorReadingIndex" > ("classId"); > {noformat} > We were asked to provide our sstables as well but these are very large and > would require some data obfuscation. We are able to run code or scripts > against the data on our servers if that is an option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-11515: Resolution: Fixed Fix Version/s: (was: 3.0.x) (was: 2.2.x) (was: 3.x) 3.0.6 3.6 2.2.6 Status: Resolved (was: Ready to Commit) Committed. As discussed offline, not going to bother w/CCM fix for now since it has the trivial workaround of CCM_CONFIG_DIR. > C* won't launch with whitespace in path on Windows > -- > > Key: CASSANDRA-11515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Trivial > Labels: windows > Fix For: 2.2.6, 3.6, 3.0.6 > > Attachments: fixWhiteSpace.patch > > > In a directory named 'test space', I see the following on launch: > {noformat} > Error: Could not find or load main class space\cassandra.logs.gc.log > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7423) Allow updating individual subfields of UDT
[ https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229301#comment-15229301 ] Tyler Hobbs commented on CASSANDRA-7423: bq. can you add an entry to the CQL doc changelog and include a link to the ticket in that entry I did have an entry, but I've added the ticket number there as well. bq. whereas I would have expected on the following line (or even better: a b ) Hmm, I can't reproduce getting on the same line. Can we investigate that in another ticket? However, I have taken your suggestion of autocompleting the UDT fields where possible (update assignments and conditions). > Allow updating individual subfields of UDT > -- > > Key: CASSANDRA-7423 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7423 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tupshin Harper >Assignee: Tyler Hobbs > Labels: client-impacting, cql, docs-impacting > Fix For: 3.x > > > Since user defined types were implemented in CASSANDRA-5590 as blobs (you > have to rewrite the entire type in order to make any modifications), they > can't be safely used without LWT for any operation that wants to modify a > subset of the UDT's fields by any client process that is not authoritative > for the entire blob. > When trying to use UDTs to model complex records (particularly with nesting), > this is not an exceptional circumstance, this is the totally expected normal > situation. > The use of UDTs for anything non-trivial is harmful to either performance or > consistency or both. > edit: to clarify, i believe that most potential uses of UDTs should be > considered anti-patterns until/unless we have field-level r/w access to > individual elements of the UDT, with individual timestamps and standard LWW > semantics -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[2/6] cassandra git commit: Fix launch with whitespace in path on Windows
Fix launch with whitespace in path on Windows Patch by jmckenzie; reviewed by pmotta for CASSANDRA-11515 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/96c53e0a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/96c53e0a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/96c53e0a Branch: refs/heads/cassandra-3.0 Commit: 96c53e0a5e73046acb77e2ac2a3aa9d9ef64fc65 Parents: a33038b Author: Josh McKenzie Authored: Wed Apr 6 18:37:08 2016 -0400 Committer: Josh McKenzie Committed: Wed Apr 6 18:37:08 2016 -0400 -- conf/cassandra-env.ps1 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/96c53e0a/conf/cassandra-env.ps1 -- diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1 index aff0d9e..321a9ca 100644 --- a/conf/cassandra-env.ps1 +++ b/conf/cassandra-env.ps1 @@ -425,7 +425,7 @@ Function SetCassandraEnvironment $env:JVM_OPTS="$env:JVM_OPTS -XX:+PrintPromotionFailure" # $env:JVM_OPTS="$env:JVM_OPTS -XX:PrintFLSStatistics=1" -$env:JVM_OPTS="$env:JVM_OPTS -Xloggc:$env:CASSANDRA_HOME/logs/gc.log" +$env:JVM_OPTS="$env:JVM_OPTS -Xloggc:""$env:CASSANDRA_HOME/logs/gc.log""" $env:JVM_OPTS="$env:JVM_OPTS -XX:+UseGCLogFileRotation" $env:JVM_OPTS="$env:JVM_OPTS -XX:NumberOfGCLogFiles=10" $env:JVM_OPTS="$env:JVM_OPTS -XX:GCLogFileSize=10M"
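The one-line fix in this commit quotes the GC log path passed via -Xloggc. The underlying hazard is ordinary word splitting, which can be demonstrated in POSIX shell (the directory name below is made up):

```shell
# A path containing a space splits into two arguments when expanded unquoted;
# quoting the expansion keeps it as a single argument -- the essence of the fix.
dir="/tmp/test space"

set -- -Xloggc:$dir/gc.log        # unquoted expansion
unquoted=$#                       # number of resulting arguments

set -- -Xloggc:"$dir/gc.log"      # quoted expansion
quoted=$#

echo "unquoted args: $unquoted, quoted args: $quoted"   # unquoted args: 2, quoted args: 1
```

In the PowerShell script the path is embedded in an already double-quoted string, which is why the patch escapes the inner quotes by doubling them (`""..""`).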
[1/6] cassandra git commit: Fix launch with whitespace in path on Windows
Repository: cassandra Updated Branches: refs/heads/cassandra-2.2 a33038be2 -> 96c53e0a5 refs/heads/cassandra-3.0 424593205 -> 04a75a634 refs/heads/trunk 1a73af768 -> bd633377a Fix launch with whitespace in path on Windows Patch by jmckenzie; reviewed by pmotta for CASSANDRA-11515 Branch: refs/heads/cassandra-2.2 Commit: 96c53e0a5e73046acb77e2ac2a3aa9d9ef64fc65 (same commit body and diff as the [2/6] notification)
[3/6] cassandra git commit: Fix launch with whitespace in path on Windows
Fix launch with whitespace in path on Windows Patch by jmckenzie; reviewed by pmotta for CASSANDRA-11515 Branch: refs/heads/trunk Commit: 96c53e0a5e73046acb77e2ac2a3aa9d9ef64fc65 (same commit body and diff as the [2/6] notification)
[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd633377 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd633377 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd633377 Branch: refs/heads/trunk Commit: bd633377a9b3fd4e0660e000cc31980ab3441ae1 Parents: 1a73af7 04a75a6 Author: Josh McKenzie Authored: Wed Apr 6 18:57:43 2016 -0400 Committer: Josh McKenzie Committed: Wed Apr 6 18:57:43 2016 -0400 -- conf/cassandra-env.ps1 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd633377/conf/cassandra-env.ps1 --
[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/04a75a63 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/04a75a63 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/04a75a63 Branch: refs/heads/trunk Commit: 04a75a6344433c89531f1e5435566e8bf8d45286 Parents: 4245932 96c53e0 Author: Josh McKenzieAuthored: Wed Apr 6 18:44:48 2016 -0400 Committer: Josh McKenzie Committed: Wed Apr 6 18:45:09 2016 -0400 -- conf/cassandra-env.ps1 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/04a75a63/conf/cassandra-env.ps1 -- diff --cc conf/cassandra-env.ps1 index 5eefb04,321a9ca..a322a4d --- a/conf/cassandra-env.ps1 +++ b/conf/cassandra-env.ps1 @@@ -332,57 -331,6 +332,57 @@@ Function SetCassandraEnvironmen CalculateHeapSizes ParseJVMInfo + +#GC log path has to be defined here since it needs to find CASSANDRA_HOME - $env:JVM_OPTS="$env:JVM_OPTS -Xloggc:$env:CASSANDRA_HOME/logs/gc.log" ++$env:JVM_OPTS="$env:JVM_OPTS -Xloggc:""$env:CASSANDRA_HOME/logs/gc.log""" + +# Read user-defined JVM options from jvm.options file +$content = Get-Content "$env:CASSANDRA_CONF\jvm.options" +for ($i = 0; $i -lt $content.Count; $i++) +{ +$line = $content[$i] +if ($line.StartsWith("-")) +{ +$env:JVM_OPTS = "$env:JVM_OPTS $line" +} +} + +$defined_xmn = $env:JVM_OPTS -like '*Xmn*' +$defined_xmx = $env:JVM_OPTS -like '*Xmx*' +$defined_xms = $env:JVM_OPTS -like '*Xms*' +$using_cms = $env:JVM_OPTS -like '*UseConcMarkSweepGC*' + +# We only set -Xms and -Xmx if they were not defined on jvm.options file +# If defined, both Xmx and Xms should be defined together. 
+if (($defined_xmx -eq $false) -and ($defined_xms -eq $false)) +{ +$env:JVM_OPTS="$env:JVM_OPTS -Xms$env:MAX_HEAP_SIZE" +$env:JVM_OPTS="$env:JVM_OPTS -Xmx$env:MAX_HEAP_SIZE" +} +elseif (($defined_xmx -eq $false) -or ($defined_xms -eq $false)) +{ +echo "Please set or unset -Xmx and -Xms flags in pairs on jvm.options file." +exit +} + +# We only set -Xmn flag if it was not defined in jvm.options file +# and if the CMS GC is being used +# If defined, both Xmn and Xmx should be defined together. +if (($defined_xmn -eq $true) -and ($defined_xmx -eq $false)) +{ +echo "Please set or unset -Xmx and -Xmn flags in pairs on jvm.options file." +exit +} +elseif (($defined_xmn -eq $false) -and ($using_cms -eq $true)) +{ +$env:JVM_OPTS="$env:JVM_OPTS -Xmn$env:HEAP_NEWSIZE" +} + +if (($env:JVM_ARCH -eq "64-Bit") -and ($using_cms -eq $true)) +{ +$env:JVM_OPTS="$env:JVM_OPTS -XX:+UseCondCardMark" +} + # Add sigar env - see Cassandra-7838 $env:JVM_OPTS = "$env:JVM_OPTS -Djava.library.path=""$env:CASSANDRA_HOME\lib\sigar-bin"""
[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/04a75a63 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/04a75a63 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/04a75a63 Branch: refs/heads/cassandra-3.0 Commit: 04a75a6344433c89531f1e5435566e8bf8d45286 Parents: 4245932 96c53e0 Author: Josh McKenzieAuthored: Wed Apr 6 18:44:48 2016 -0400 Committer: Josh McKenzie Committed: Wed Apr 6 18:45:09 2016 -0400 -- conf/cassandra-env.ps1 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/04a75a63/conf/cassandra-env.ps1 -- diff --cc conf/cassandra-env.ps1 index 5eefb04,321a9ca..a322a4d --- a/conf/cassandra-env.ps1 +++ b/conf/cassandra-env.ps1 @@@ -332,57 -331,6 +332,57 @@@ Function SetCassandraEnvironmen CalculateHeapSizes ParseJVMInfo + +#GC log path has to be defined here since it needs to find CASSANDRA_HOME - $env:JVM_OPTS="$env:JVM_OPTS -Xloggc:$env:CASSANDRA_HOME/logs/gc.log" ++$env:JVM_OPTS="$env:JVM_OPTS -Xloggc:""$env:CASSANDRA_HOME/logs/gc.log""" + +# Read user-defined JVM options from jvm.options file +$content = Get-Content "$env:CASSANDRA_CONF\jvm.options" +for ($i = 0; $i -lt $content.Count; $i++) +{ +$line = $content[$i] +if ($line.StartsWith("-")) +{ +$env:JVM_OPTS = "$env:JVM_OPTS $line" +} +} + +$defined_xmn = $env:JVM_OPTS -like '*Xmn*' +$defined_xmx = $env:JVM_OPTS -like '*Xmx*' +$defined_xms = $env:JVM_OPTS -like '*Xms*' +$using_cms = $env:JVM_OPTS -like '*UseConcMarkSweepGC*' + +# We only set -Xms and -Xmx if they were not defined on jvm.options file +# If defined, both Xmx and Xms should be defined together. 
+if (($defined_xmx -eq $false) -and ($defined_xms -eq $false)) +{ +$env:JVM_OPTS="$env:JVM_OPTS -Xms$env:MAX_HEAP_SIZE" +$env:JVM_OPTS="$env:JVM_OPTS -Xmx$env:MAX_HEAP_SIZE" +} +elseif (($defined_xmx -eq $false) -or ($defined_xms -eq $false)) +{ +echo "Please set or unset -Xmx and -Xms flags in pairs on jvm.options file." +exit +} + +# We only set -Xmn flag if it was not defined in jvm.options file +# and if the CMS GC is being used +# If defined, both Xmn and Xmx should be defined together. +if (($defined_xmn -eq $true) -and ($defined_xmx -eq $false)) +{ +echo "Please set or unset -Xmx and -Xmn flags in pairs on jvm.options file." +exit +} +elseif (($defined_xmn -eq $false) -and ($using_cms -eq $true)) +{ +$env:JVM_OPTS="$env:JVM_OPTS -Xmn$env:HEAP_NEWSIZE" +} + +if (($env:JVM_ARCH -eq "64-Bit") -and ($using_cms -eq $true)) +{ +$env:JVM_OPTS="$env:JVM_OPTS -XX:+UseCondCardMark" +} + # Add sigar env - see Cassandra-7838 $env:JVM_OPTS = "$env:JVM_OPTS -Djava.library.path=""$env:CASSANDRA_HOME\lib\sigar-bin"""
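The Xms/Xmx pairing rule in the merged cassandra-env.ps1 change above can be sketched in Java for illustration (the shipped logic is the PowerShell script itself; the class and method names here are hypothetical): set both flags from MAX_HEAP_SIZE when neither appears in jvm.options, and reject configurations that define only one of the pair.

```java
// Hypothetical sketch of the pairing rule; not part of the actual patch.
class HeapFlags {
    static String resolve(String jvmOpts, String maxHeap) {
        boolean hasXmx = jvmOpts.contains("-Xmx");
        boolean hasXms = jvmOpts.contains("-Xms");
        if (!hasXmx && !hasXms)
            // Neither flag user-defined: derive both from MAX_HEAP_SIZE.
            return jvmOpts + " -Xms" + maxHeap + " -Xmx" + maxHeap;
        if (hasXmx != hasXms)
            // Only one of the pair defined: mirror the script's error-and-exit.
            throw new IllegalArgumentException(
                "Please set or unset -Xmx and -Xms flags in pairs in jvm.options.");
        return jvmOpts; // both already defined by the user; leave untouched
    }
}
```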
[jira] [Updated] (CASSANDRA-8720) Provide tools for finding wide row/partition keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8720: Attachment: 8720.txt Here's a simple patch that prefixes sstablekeys' output with the size of the partition (prefix because you probably want to pipe this to sort.) > Provide tools for finding wide row/partition keys > - > > Key: CASSANDRA-8720 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8720 > Project: Cassandra > Issue Type: Improvement >Reporter: J.B. Langston > Attachments: 8720.txt > > > Multiple users have requested some sort of tool to help identify wide row > keys. They get into a situation where they know a wide row/partition has been > inserted and it's causing problems for them but they have no idea what the > row key is in order to remove it. > Maintaining the widest row key currently encountered and displaying it in > cfstats would be one possible approach. > Another would be an offline tool (possibly an enhancement to sstablekeys) to > show the number of columns/bytes per key in each sstable. If a tool to > aggregate the information at a CF-level could be provided that would be a > bonus, but it shouldn't be too hard to write a script wrapper to aggregate > them if not. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
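A minimal sketch of the output format the patch describes — each key printed with its partition size as a prefix so the result can be piped to `sort -n`. The class name and the in-memory map are illustrative only, not the patch itself:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration of size-prefixed key output.
class SizePrefixedKeys {
    static String format(Map<String, Long> partitionSizes) {
        StringBuilder out = new StringBuilder();
        // Size first, key second: numeric sort then surfaces the widest partitions.
        for (Map.Entry<String, Long> e : partitionSizes.entrySet())
            out.append(e.getValue()).append(' ').append(e.getKey()).append('\n');
        return out.toString();
    }
}
```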
[jira] [Commented] (CASSANDRA-11513) Result set is not unique on primary key (cql)
[ https://issues.apache.org/jira/browse/CASSANDRA-11513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229241#comment-15229241 ] Joel Knighton commented on CASSANDRA-11513: --- I bisected this, and it looks like this problem was introduced in [CASSANDRA-9986]. > Result set is not unique on primary key (cql) > -- > > Key: CASSANDRA-11513 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11513 > Project: Cassandra > Issue Type: Bug >Reporter: Tianshi Wang > > [cqlsh 5.0.1 | Cassandra 3.4 | CQL spec 3.4.0 | Native protocol v4] > Run the following: > {code} > drop table if exists test0; > CREATE TABLE test0 ( > pk int, > a int, > b text, > s text static, > PRIMARY KEY (pk, a) > ); > insert into test0 (pk,a,b,s) values (0,1,'b1','hello b1'); > insert into test0 (pk,a,b,s) values (0,2,'b2','hello b2'); > insert into test0 (pk,a,b,s) values (0,3,'b3','hello b3'); > create index on test0 (b); > insert into test0 (pk,a,b,s) values (0,2,'b2 again','b2 again'); > {code} > Now, selecting one record based on the primary key, we get all three records. 
> {code} > cqlsh:ops> select * from test0 where pk=0 and a=2; > pk | a | s| b > +---+--+-- > 0 | 1 | b2 again | b1 > 0 | 2 | b2 again | b2 again > 0 | 3 | b2 again | b3 > {code} > {code} > cqlsh:ops> desc test0; > CREATE TABLE ops.test0 ( > pk int, > a int, > b text, > s text static, > PRIMARY KEY (pk, a) > ) WITH CLUSTERING ORDER BY (a ASC) > AND bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > CREATE INDEX test0_b_idx ON ops.test0 (b); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11516) Make max number of streams configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sebastian Estevez updated CASSANDRA-11516: -- Description: Today we default to num cores. In large boxes (many cores), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with. Usually we tackle issues like this with the streaming throughput levers but in this case the problem is CPU consumption by StreamReceiverTasks specifically in the IntervalTree build -- https://github.com/apache/cassandra/blob/cassandra-2.1.12/src/java/org/apache/cassandra/utils/IntervalTree.java#L257 We need a max number of parallel streams lever to handle this. was: Today we default to num cores. In large boxes (many cores), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with. Usually we tackle issues like this with the streaming throughput levers but in this case the problem is CPU consumption by StreamReceiverTasks specifically in the IntervalTree build -- https://github.com/apache/cassandra/blob/cassandra-2.1.12/src/java/org/apache/cassandra/utils/IntervalTree.java#L257 > Make max number of streams configurable > --- > > Key: CASSANDRA-11516 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11516 > Project: Cassandra > Issue Type: New Feature >Reporter: Sebastian Estevez > > Today we default to num cores. In large boxes (many cores), this is > suboptimal as it can generate huge amounts of garbage that GC can't keep up > with. > Usually we tackle issues like this with the streaming throughput levers but > in this case the problem is CPU consumption by StreamReceiverTasks > specifically in the IntervalTree build -- > https://github.com/apache/cassandra/blob/cassandra-2.1.12/src/java/org/apache/cassandra/utils/IntervalTree.java#L257 > We need a max number of parallel streams lever to handle this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11473) Clustering column value is zeroed out in some query results
[ https://issues.apache.org/jira/browse/CASSANDRA-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229218#comment-15229218 ] Tyler Hobbs commented on CASSANDRA-11473: - It looks like the clustering value is not actually being zeroed out; it simply only had data in the milliseconds portion of the timestamp, which cqlsh wasn't showing by default. Instead, it's looking more like this is a ser/deser problem related to a dropped column. > Clustering column value is zeroed out in some query results > --- > > Key: CASSANDRA-11473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11473 > Project: Cassandra > Issue Type: Bug > Environment: debian jessie patch current with Cassandra 3.0.4 >Reporter: Jason Kania > > As per a discussion on the mailing list, > http://www.mail-archive.com/user@cassandra.apache.org/msg46902.html, we are > encountering inconsistent query results when the following query is run: > {noformat} > select "subscriberId","sensorUnitId","sensorId","time" from > "sensorReadingIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND > "sensorId"=0 ORDER BY "time" LIMIT 10; > {noformat} > Invalid Query Results > {noformat} > subscriberId  sensorUnitId  sensorId  time > JASKAN  0  0  2015-05-24 2:09 > JASKAN  0  0  1969-12-31 19:00 > JASKAN  0  0  2016-01-21 2:10 > JASKAN  0  0  2016-01-21 2:10 > JASKAN  0  0  2016-01-21 2:10 > JASKAN  0  0  2016-01-21 2:11 > JASKAN  0  0  2016-01-21 2:22 > JASKAN  0  0  2016-01-21 2:22 > JASKAN  0  0  2016-01-21 2:22 > JASKAN  0  0  2016-01-21 2:22 > {noformat} > Valid Query Results > {noformat} > subscriberId  sensorUnitId  sensorId  time > JASKAN  0  0  2015-05-24 2:09 > JASKAN  0  0  2015-05-24 2:09 > JASKAN  0  0  2015-05-24 2:10 > JASKAN  0  0  2015-05-24 2:10 > JASKAN  0  0  2015-05-24 2:10 > JASKAN  0  0  2015-05-24 2:10 > JASKAN  0  0  2015-05-24 2:11 > JASKAN  0  0  2015-05-24 2:13 > JASKAN  0  0  2015-05-24 2:13 > JASKAN  0  0  2015-05-24 2:14 > {noformat} > Running the following yields no rows, indicating that the 1969... timestamp is > invalid. 
> {noformat} > select "subscriberId","sensorUnitId","sensorId","time" FROM > "edgeTransitionIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND > "sensorId"=0 and time='1969-12-31 19:00:00-0500'; > {noformat} > The schema is as follows: > {noformat} > CREATE TABLE sensorReading."sensorReadingIndex" ( > "subscriberId" text, > "sensorUnitId" int, > "sensorId" int, > time timestamp, > "classId" int, > correlation float, > PRIMARY KEY (("subscriberId", "sensorUnitId", "sensorId"), time) > ) WITH CLUSTERING ORDER BY (time ASC) > AND bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > CREATE INDEX classSecondaryIndex ON sensorReading."sensorReadingIndex" > ("classId"); > {noformat} > We were asked to provide our sstables as well but these are very large and > would require some data obfuscation. We are able to run code or scripts > against the data on our servers if that is an option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11516) Make max number of streams configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sebastian Estevez updated CASSANDRA-11516: -- Description: Today we default to num cores. In large boxes (many cores), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with. Usually we tackle issues like this with the streaming throughput levers but in this case the problem is CPU consumption by StreamReceiverTasks specifically in the IntervalTree build -- https://github.com/apache/cassandra/blob/cassandra-2.1.12/src/java/org/apache/cassandra/utils/IntervalTree.java#L257 was:Today we default to num cores. In large boxes (many cores), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with. > Make max number of streams configurable > --- > > Key: CASSANDRA-11516 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11516 > Project: Cassandra > Issue Type: New Feature >Reporter: Sebastian Estevez > > Today we default to num cores. In large boxes (many cores), this is > suboptimal as it can generate huge amounts of garbage that GC can't keep up > with. > Usually we tackle issues like this with the streaming throughput levers but > in this case the problem is CPU consumption by StreamReceiverTasks > specifically in the IntervalTree build -- > https://github.com/apache/cassandra/blob/cassandra-2.1.12/src/java/org/apache/cassandra/utils/IntervalTree.java#L257 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-11473) Clustering column value is zeroed out in some query results
[ https://issues.apache.org/jira/browse/CASSANDRA-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs reassigned CASSANDRA-11473: --- Assignee: Tyler Hobbs > Clustering column value is zeroed out in some query results > --- > > Key: CASSANDRA-11473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11473 > Project: Cassandra > Issue Type: Bug > Environment: debian jessie patch current with Cassandra 3.0.4 >Reporter: Jason Kania >Assignee: Tyler Hobbs > > As per a discussion on the mailing list, > http://www.mail-archive.com/user@cassandra.apache.org/msg46902.html, we are > encountering inconsistent query results when the following query is run: > {noformat} > select "subscriberId","sensorUnitId","sensorId","time" from > "sensorReadingIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND > "sensorId"=0 ORDER BY "time" LIMIT 10; > {noformat} > Invalid Query Results > {noformat} > subscriberId  sensorUnitId  sensorId  time > JASKAN  0  0  2015-05-24 2:09 > JASKAN  0  0  1969-12-31 19:00 > JASKAN  0  0  2016-01-21 2:10 > JASKAN  0  0  2016-01-21 2:10 > JASKAN  0  0  2016-01-21 2:10 > JASKAN  0  0  2016-01-21 2:11 > JASKAN  0  0  2016-01-21 2:22 > JASKAN  0  0  2016-01-21 2:22 > JASKAN  0  0  2016-01-21 2:22 > JASKAN  0  0  2016-01-21 2:22 > {noformat} > Valid Query Results > {noformat} > subscriberId  sensorUnitId  sensorId  time > JASKAN  0  0  2015-05-24 2:09 > JASKAN  0  0  2015-05-24 2:09 > JASKAN  0  0  2015-05-24 2:10 > JASKAN  0  0  2015-05-24 2:10 > JASKAN  0  0  2015-05-24 2:10 > JASKAN  0  0  2015-05-24 2:10 > JASKAN  0  0  2015-05-24 2:11 > JASKAN  0  0  2015-05-24 2:13 > JASKAN  0  0  2015-05-24 2:13 > JASKAN  0  0  2015-05-24 2:14 > {noformat} > Running the following yields no rows indicating that the 1969... timestamp is > invalid. 
> {noformat} > select "subscriberId","sensorUnitId","sensorId","time" FROM > "edgeTransitionIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND > "sensorId"=0 and time='1969-12-31 19:00:00-0500'; > {noformat} > The schema is as follows: > {noformat} > CREATE TABLE sensorReading."sensorReadingIndex" ( > "subscriberId" text, > "sensorUnitId" int, > "sensorId" int, > time timestamp, > "classId" int, > correlation float, > PRIMARY KEY (("subscriberId", "sensorUnitId", "sensorId"), time) > ) WITH CLUSTERING ORDER BY (time ASC) > AND bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > CREATE INDEX classSecondaryIndex ON sensorReading."sensorReadingIndex" > ("classId"); > {noformat} > We were asked to provide our sstables as well but these are very large and > would require some data obfuscation. We are able to run code or scripts > against the data on our servers if that is an option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11516) Make max number of streams configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sebastian Estevez updated CASSANDRA-11516: -- Description: Today we default to num cores. In large boxes (many cores), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with. (was: Today we default to num cores. In large boxes (many cores, etc.), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with.) > Make max number of streams configurable > --- > > Key: CASSANDRA-11516 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11516 > Project: Cassandra > Issue Type: New Feature >Reporter: Sebastian Estevez > > Today we default to num cores. In large boxes (many cores), this is > suboptimal as it can generate huge amounts of garbage that GC can't keep up > with. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11516) Make max number of streams configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sebastian Estevez updated CASSANDRA-11516: -- Description: Today we default to num cores. In large boxes (many cores, etc.), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with. (was: Today we default to num cores. In large boxes (40 cores, etc.), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with.) > Make max number of streams configurable > --- > > Key: CASSANDRA-11516 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11516 > Project: Cassandra > Issue Type: New Feature >Reporter: Sebastian Estevez > > Today we default to num cores. In large boxes (many cores, etc.), this is > suboptimal as it can generate huge amounts of garbage that GC can't keep up > with. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7017) allow per-partition LIMIT clause in cql
[ https://issues.apache.org/jira/browse/CASSANDRA-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229132#comment-15229132 ] Alex Petrov commented on CASSANDRA-7017: There was a slight problem (validation message was not changed in test), which got caught by CI. I've updated the branch and ran tests (yay, got a ci!). |[trunk|https://github.com/ifesdjeen/cassandra/tree/7017-trunk]|[dtest|http://cassci.datastax.com/job/ifesdjeen-7017-trunk-dtest/1/]|[testall|http://cassci.datastax.com/job/ifesdjeen-7017-trunk-testall/1/]| > allow per-partition LIMIT clause in cql > --- > > Key: CASSANDRA-7017 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7017 > Project: Cassandra > Issue Type: Improvement >Reporter: Jonathan Halliday >Assignee: Alex Petrov > Labels: cql > Fix For: 3.x > > Attachments: 0001-Allow-per-partition-limit-in-SELECT-queries.patch, > 0001-Allow-per-partition-limit-in-SELECT-queriesV2.patch, > 0001-CASSANDRA-7017.patch > > > somewhat related to static columns (#6561) and slicing (#4851), it is > desirable to apply a LIMIT on a per-partition rather than per-query basis, > such as to retrieve the top (most recent, etc) N clustered values for each > partition key, e.g. > -- for each league, keep a ranked list of users > create table scores (league text, score int, player text, primary key(league, > score, player) ); > -- get the top 3 teams in each league: > select * from scores staticlimit 3; > this currently requires issuing one query per partition key, which is tedious > if all the partition key values are known and impossible if they aren't. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
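Until such a clause exists, the "top N per partition" semantics the ticket asks for can be emulated client-side. A rough Java sketch (a hypothetical helper under the assumption that rows arrive in clustering order per partition, as CQL guarantees; not part of the proposed CQL change):

```java
import java.util.*;

// Keep only the first n rows seen for each partition key.
class PerPartitionLimit {
    static <K, V> Map<K, List<V>> topN(List<Map.Entry<K, V>> rows, int n) {
        Map<K, List<V>> out = new LinkedHashMap<>();
        for (Map.Entry<K, V> row : rows) {
            List<V> group = out.computeIfAbsent(row.getKey(), k -> new ArrayList<>());
            if (group.size() < n)      // drop everything past the per-partition limit
                group.add(row.getValue());
        }
        return out;
    }
}
```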
[jira] [Comment Edited] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
[ https://issues.apache.org/jira/browse/CASSANDRA-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228936#comment-15228936 ] Ariel Weisberg edited comment on CASSANDRA-11517 at 4/6/16 9:16 PM: |[trunk code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-11517-trunk?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-dtest/1/]| |[3.0 code|https://github.com/apache/cassandra/compare/cassandra-3.0...aweisberg:CASSANDRA-11517-3.0?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-3.0-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-3.0-dtest/1/]| Not proof of any real performance benefit in context, but the unit test runs in 250 milliseconds with the CAS loop and 1.4 seconds without the CAS loop. was (Author: aweisberg): |[trunk code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-11517-trunk?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-dtest/1/]| |[3.0 code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-11517-3.0?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-3.0-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-3.0-dtest/1/]| Not proof of any real performance benefit in context, but the unit test runs in 250 milliseconds with the CAS loop and 1.4 seconds without the CAS loop. 
> o.a.c.utils.UUIDGen could handle contention better > -- > > Key: CASSANDRA-11517 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Minor > Fix For: 3.0.x, 3.x > > > I noticed this profiling a query handler implementation that uses UUIDGen to > get handles to track queries for logging purposes. > Under contention threads are being unscheduled instead of spinning until the > lock is available. I would have expected intrinsic locks to be able to adapt > to this based on profiling information. > Either way it seems pretty straightforward to rewrite this to use a CAS > loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
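The CAS-loop rewrite the ticket proposes can be sketched as follows. This is a simplified illustration, not the actual patch: UUIDGen works in 100-nanosecond units, which this sketch approximates from the millisecond clock, and the class name is hypothetical.

```java
import java.util.concurrent.atomic.AtomicLong;

class CasTimestampGenerator {
    // Last timestamp handed out, in 100ns units; shared across threads.
    private final AtomicLong lastTimestamp = new AtomicLong(0);

    // Returns a strictly increasing timestamp. Contending threads spin on
    // compareAndSet instead of blocking on an intrinsic lock.
    long getUniqueTimestamp() {
        while (true) {
            long now = System.currentTimeMillis() * 10000; // ms -> 100ns units
            long last = lastTimestamp.get();
            long candidate = now > last ? now : last + 1;  // bump on collision
            if (lastTimestamp.compareAndSet(last, candidate))
                return candidate;
            // CAS failed: another thread won the race; retry.
        }
    }
}
```

The design trade-off the ticket describes: under contention, a failed CAS costs a retry loop iteration, whereas a contended intrinsic lock can deschedule the thread entirely.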
[jira] [Commented] (CASSANDRA-11518) o.a.c.utils.UUIDGen clock generation is not very high in entropy
[ https://issues.apache.org/jira/browse/CASSANDRA-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229131#comment-15229131 ] Ariel Weisberg commented on CASSANDRA-11518: |[trunk code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-11518-trunk?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11518-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11518-trunk-dtest/]| |[3.0 code|https://github.com/apache/cassandra/compare/cassandra-3.0...aweisberg:CASSANDRA-11518-3.0?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11518-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11518-3.0-dtest/]| > o.a.c.utils.UUIDGen clock generation is not very high in entropy > > > Key: CASSANDRA-11518 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11518 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Trivial > Fix For: 3.0.x, 3.x > > > makeClockSeqAndNode uses {{java.util.Random}} to generate the clock. > {{Random}} only has 48-bits of internal state so it's not going to generate > the best bits for clock and in addition to that it uses a collision prone > seed that sort of defeats the purpose of clock sequence. > A better approach to get the most out of those 14-bits would be to use > {{SecureRandom}} with something like SHA1PRNG. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
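The approach the ticket suggests — drawing the 14-bit clock sequence from {{SecureRandom}} with something like SHA1PRNG instead of {{java.util.Random}} — might look like the sketch below. The fallback branch is an assumption for platforms without SHA1PRNG, not part of the proposal.

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

class ClockSeq {
    // Draw the 14-bit UUID clock sequence from a CSPRNG rather than
    // java.util.Random's 48-bit LCG state.
    static int randomClockSequence() {
        SecureRandom rng;
        try {
            rng = SecureRandom.getInstance("SHA1PRNG");
        } catch (NoSuchAlgorithmException e) {
            rng = new SecureRandom(); // assumed fallback to the platform default
        }
        return rng.nextInt() & 0x3FFF; // keep only the low 14 bits
    }
}
```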
[jira] [Comment Edited] (CASSANDRA-10624) Support UDT in CQLSSTableWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228006#comment-15228006 ] Alex Petrov edited comment on CASSANDRA-10624 at 4/6/16 9:11 PM: - [~Stefania] thanks a lot for the review and kind words! I've incorporated your changes and suggestions, and went a step further and reused `types` straight from {{KeyspaceMetadata}} instead of keeping them within builder. Also, simplified code in {{getTableMetadata}} (removed unnecessary casts). I've pushed it again to the same branch (link below). * {{getTableMetadata}} is now a part of the {{forTable}}, with removed comment and added check for statement type * mispositioned curly brackets are fixed by now. Sorry about that: old habit, forgot to auto-format. Sorry about lack of CI for my changes: I've already requested it, it's in process. |*branch*|*testall*|*dtest*| |[trunk|https://github.com/ifesdjeen/cassandra/tree/10624-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10624-trunk-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10624-trunk-dtest/]| was (Author: ifesdjeen): [~Stefania] thanks a lot for the review and kind words! I've incorporated your changes and suggestions, and went a step further and reused `types` straight from {{KeyspaceMetadata}} instead of keeping them within builder. Also, simplified code in {{getTableMetadata}} (removed unnecessary casts). I've pushed it again to the same branch (link below). * {{getTableMetadata}} is now a part of the {{forTable}}, with removed comment and added check for statement type * mispositioned curly brackets are fixed by now. Sorry about that: old habit, forgot to auto-format. Sorry about lack of CI for my changes: I've already requested it, it's in process. 
|[trunk|https://github.com/ifesdjeen/cassandra/tree/10624-trunk]| > Support UDT in CQLSSTableWriter > --- > > Key: CASSANDRA-10624 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10624 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Sylvain Lebresne >Assignee: Alex Petrov > Fix For: 3.x > > Attachments: 0001-Add-support-for-UDTs-to-CQLSStableWriter.patch, > 0001-Support-UDTs-in-CQLSStableWriterV2.patch > > > As far as I can tell, there is no way to use a UDT with {{CQLSSTableWriter}} > since there is no way to declare it and thus {{CQLSSTableWriter.Builder}} > knows of no UDT when parsing the {{CREATE TABLE}} statement passed. > In terms of API, I think the simplest would be to allow to pass types to the > builder in the same way we pass the table definition. So something like: > {noformat} > String type = "CREATE TYPE myKs.vertex (x int, y int, z int)"; > String schema = "CREATE TABLE myKs.myTable (" > + " k int PRIMARY KEY," > + " s set" > + ")"; > String insert = ...; > CQLSSTableWriter writer = CQLSSTableWriter.builder() > .inDirectory("path/to/directory") > .withType(type) > .forTable(schema) > .using(insert).build(); > {noformat} > I'll note that implementation wise, this might be a bit simpler after the > changes of CASSANDRA-10365 (as it makes it easy to pass specific types > during the preparation of the create statement). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-10624) Support UDT in CQLSSTableWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228006#comment-15228006 ] Alex Petrov edited comment on CASSANDRA-10624 at 4/6/16 9:12 PM: - [~Stefania] thanks a lot for the review and kind words! I've incorporated your changes and suggestions, and went a step further and reused `types` straight from {{KeyspaceMetadata}} instead of keeping them within builder. Also, simplified code in {{getTableMetadata}} (removed unnecessary casts). I've pushed it again to the same branch (link below). * {{getTableMetadata}} is now a part of the {{forTable}}, with removed comment and added check for statement type * mispositioned curly brackets are fixed by now. Sorry about that: old habit, forgot to auto-format. |*branch*|*testall*|*dtest*| |[trunk|https://github.com/ifesdjeen/cassandra/tree/10624-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10624-trunk-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10624-trunk-dtest/]| was (Author: ifesdjeen): [~Stefania] thanks a lot for the review and kind words! I've incorporated your changes and suggestions, and went a step further and reused `types` straight from {{KeyspaceMetadata}} instead of keeping them within builder. Also, simplified code in {{getTableMetadata}} (removed unnecessary casts). I've pushed it again to the same branch (link below). * {{getTableMetadata}} is now a part of the {{forTable}}, with removed comment and added check for statement type * mispositioned curly brackets are fixed by now. Sorry about that: old habit, forgot to auto-format. Sorry about lack of CI for my changes: I've already requested it, it's in process. 
|*branch*|*testall*|*dtest*| |[trunk|https://github.com/ifesdjeen/cassandra/tree/10624-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10624-trunk-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10624-trunk-dtest/]| > Support UDT in CQLSSTableWriter > --- > > Key: CASSANDRA-10624 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10624 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Sylvain Lebresne >Assignee: Alex Petrov > Fix For: 3.x > > Attachments: 0001-Add-support-for-UDTs-to-CQLSStableWriter.patch, > 0001-Support-UDTs-in-CQLSStableWriterV2.patch > > > As far as I can tell, there is no way to use a UDT with {{CQLSSTableWriter}} > since there is no way to declare it and thus {{CQLSSTableWriter.Builder}} > knows of no UDT when parsing the {{CREATE TABLE}} statement passed. > In terms of API, I think the simplest would be to allow to pass types to the > builder in the same way we pass the table definition. So something like: > {noformat} > String type = "CREATE TYPE myKs.vertex (x int, y int, z int)"; > String schema = "CREATE TABLE myKs.myTable (" > + " k int PRIMARY KEY," > + " s set" > + ")"; > String insert = ...; > CQLSSTableWriter writer = CQLSSTableWriter.builder() > .inDirectory("path/to/directory") > .withType(type) > .forTable(schema) > .using(insert).build(); > {noformat} > I'll note that implementation wise, this might be a bit simpler after the > changes of CASSANDRA-10365 (as it makes it easy to pass specific types > during the preparation of the create statement). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11518) o.a.c.utils.UUIDGen clock generation is not very high in entropy
[ https://issues.apache.org/jira/browse/CASSANDRA-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-11518: --- Fix Version/s: 3.0.x > o.a.c.utils.UUIDGen clock generation is not very high in entropy > > > Key: CASSANDRA-11518 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11518 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Trivial > Fix For: 3.0.x, 3.x > > > makeClockSeqAndNode uses {{java.util.Random}} to generate the clock. > {{Random}} only has 48-bits of internal state so it's not going to generate > the best bits for clock and in addition to that it uses a collision prone > seed that sort of defeats the purpose of clock sequence. > A better approach to get the most out of those 14-bits would be to use > {{SecureRandom}} with something like SHA1PRNG. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
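The {{SecureRandom}} approach suggested in the ticket can be sketched as follows. This is only an illustration of drawing the 14 clock-sequence bits from a SHA1PRNG generator, not Cassandra's actual {{UUIDGen}} code; the class and method names are hypothetical:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

// Illustrative sketch: derive the 14-bit UUID v1 clock sequence from
// SecureRandom ("SHA1PRNG") instead of java.util.Random, whose 48 bits of
// internal state and collision-prone seeding the ticket calls out.
public class ClockSeqSketch {
    static long makeClockSeq() {
        try {
            SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
            // keep only the low 14 bits, as the UUID v1 layout allows
            return random.nextLong() & 0x3FFFL;
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA1PRNG unavailable", e);
        }
    }

    public static void main(String[] args) {
        System.out.println("clockSeq = " + makeClockSeq());
    }
}
```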
[jira] [Updated] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
[ https://issues.apache.org/jira/browse/CASSANDRA-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-11517: --- Fix Version/s: 3.0.x > o.a.c.utils.UUIDGen could handle contention better > -- > > Key: CASSANDRA-11517 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Minor > Fix For: 3.0.x, 3.x > > > I noticed this profiling a query handler implementation that uses UUIDGen to > get handles to track queries for logging purposes. > Under contention threads are being unscheduled instead of spinning until the > lock is available. I would have expected intrinsic locks to be able to adapt > to this based on profiling information. > Either way it's seems pretty straightforward to rewrite this to use a CAS > loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
[ https://issues.apache.org/jira/browse/CASSANDRA-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228936#comment-15228936 ] Ariel Weisberg edited comment on CASSANDRA-11517 at 4/6/16 8:53 PM: |[trunk code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-11517-trunk?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-dtest/1/]| |[3.0 code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-11517-3.0?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-3.0-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-3.0-dtest/1/]| Not proof of any real performance benefit in context, but the unit test runs in 250 milliseconds with the CAS loop and 1.4 seconds without the CAS loop. was (Author: aweisberg): |[trunk code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-11517-trunk?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-dtest/1/]| Not proof of any real performance benefit in context, but the unit test runs in 250 milliseconds with the CAS loop and 1.4 seconds without the CAS loop. > o.a.c.utils.UUIDGen could handle contention better > -- > > Key: CASSANDRA-11517 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Minor > Fix For: 3.0.x, 3.x > > > I noticed this profiling a query handler implementation that uses UUIDGen to > get handles to track queries for logging purposes. 
> Under contention threads are being unscheduled instead of spinning until the > lock is available. I would have expected intrinsic locks to be able to adapt > to this based on profiling information. > Either way it seems pretty straightforward to rewrite this to use a CAS > loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
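The CAS-loop rewrite proposed in the ticket could look roughly like this. A minimal sketch with hypothetical names, not {{UUIDGen}}'s actual implementation: contending threads retry {{compareAndSet}} instead of blocking on an intrinsic lock, and every caller gets a distinct, strictly increasing timestamp:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the CAS-loop technique: spin on compareAndSet rather than
// block in a synchronized section; uniqueness falls out of the strictly
// increasing published value.
public class CasTimestampSketch {
    private static final AtomicLong lastTimestamp = new AtomicLong();

    static long nextUniqueTimestamp() {
        while (true) {
            long last = lastTimestamp.get();
            // take the wall clock, but never go backwards or repeat a value
            long candidate = Math.max(last + 1, System.currentTimeMillis());
            if (lastTimestamp.compareAndSet(last, candidate))
                return candidate;
            // CAS failed: another thread won the race, so retry
        }
    }

    public static void main(String[] args) {
        System.out.println(nextUniqueTimestamp());
        System.out.println(nextUniqueTimestamp());
    }
}
```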
[jira] [Commented] (CASSANDRA-7826) support non-frozen, nested collections
[ https://issues.apache.org/jira/browse/CASSANDRA-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229021#comment-15229021 ] Alex Petrov commented on CASSANDRA-7826: [~snazy] that's great! I'm currently gathering enough information to at least understand what it'd take to solve it. Accessors would certainly be required in order to support nested update ops, such as (based on the syntax mentioned in [#7396|https://issues.apache.org/jira/browse/CASSANDRA-7396]) {{UPDATE tbl SET a = a[0] + [ 'a', 'b' ] WHERE k = 0}} (in the case of a list nested within a list) and so on. Do you think it's a good idea to take a closer look at / maybe comment on the branch? Or is it better to wait for a final version? > support non-frozen, nested collections > -- > > Key: CASSANDRA-7826 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7826 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tupshin Harper >Assignee: Alex Petrov > Labels: ponies > Fix For: 3.x > > > The inability to nest collections is one of the bigger data modelling > limitations we have right now. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11519) Add support for IBM POWER
Rei Odaira created CASSANDRA-11519: -- Summary: Add support for IBM POWER Key: CASSANDRA-11519 URL: https://issues.apache.org/jira/browse/CASSANDRA-11519 Project: Cassandra Issue Type: Improvement Components: Core Environment: POWER architecture Reporter: Rei Odaira Priority: Minor Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in org.apache.cassandra.utils.FastByteOperations, org.apache.cassandra.utils.memory.MemoryUtil, and org.apache.cassandra.io.util.Memory. Will provide patches soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
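Code of the kind named in the ticket typically has to branch on the runtime architecture and the native byte order. The sketch below shows only that generic check, with the POWER {{os.arch}} values taken from the ticket; the class and method names are hypothetical:

```java
import java.nio.ByteOrder;

// Illustrative check for the POWER architecture values listed in the ticket
// (ppc, ppc64, ppc64le) plus the JVM's native byte order, on which ppc64
// (big-endian) and ppc64le (little-endian) differ.
public class ArchCheckSketch {
    static boolean isPower(String arch) {
        return arch.equals("ppc") || arch.equals("ppc64") || arch.equals("ppc64le");
    }

    public static void main(String[] args) {
        String arch = System.getProperty("os.arch");
        boolean bigEndian = ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN;
        System.out.println(arch + " power=" + isPower(arch) + " big-endian=" + bigEndian);
    }
}
```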
[jira] [Comment Edited] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229005#comment-15229005 ] Joshua McKenzie edited comment on CASSANDRA-11515 at 4/6/16 8:10 PM: - Sorry, patch should apply on 2.2. Figured it was trivial enough that I can just manually put it in on 3.0+ rather than going through all the song-and-dance of different versions here on the ticket. Are you testing w/a user account with a space in the name? Because that's where ccm is dying. :) was (Author: joshuamckenzie): Sorry, patch should target 2.2. Figured it was trivial enough that I can just manually put it in on 3.0+ rather than going through all the song-and-dance of different versions here on the ticket. Are you testing w/a user account with a space in the name? Because that's where ccm is dying. :) > C* won't launch with whitespace in path on Windows > -- > > Key: CASSANDRA-11515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Trivial > Labels: windows > Fix For: 2.2.x, 3.0.x, 3.x > > Attachments: fixWhiteSpace.patch > > > In a directory named 'test space', I see the following on launch: > {noformat} > Error: Could not find or load main class space\cassandra.logs.gc.log > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229005#comment-15229005 ] Joshua McKenzie commented on CASSANDRA-11515: - Sorry, patch should target 2.2. Figured it was trivial enough that I can just manually put it in on 3.0+ rather than going through all the song-and-dance of different versions here on the ticket. Are you testing w/a user account with a space in the name? Because that's where ccm is dying. :) > C* won't launch with whitespace in path on Windows > -- > > Key: CASSANDRA-11515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Trivial > Labels: windows > Fix For: 2.2.x, 3.0.x, 3.x > > Attachments: fixWhiteSpace.patch > > > In a directory named 'test space', I see the following on launch: > {noformat} > Error: Could not find or load main class space\cassandra.logs.gc.log > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10984) Cassandra should not depend on netty-all
[ https://issues.apache.org/jira/browse/CASSANDRA-10984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228988#comment-15228988 ] Alex Petrov commented on CASSANDRA-10984: - {{cassandra-driver-core}} is a somewhat different issue. I'd file the ticket here: https://datastax-oss.atlassian.net/projects/JAVA/issues Although, I have to say that the java-driver is not using {{netty-all}}; they use individual deps, so it looks like in this case Dataflow might be using {{all}} :) (wild guess) > Cassandra should not depend on netty-all > > > Key: CASSANDRA-10984 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10984 > Project: Cassandra > Issue Type: Improvement >Reporter: James Roper >Assignee: Alex Petrov >Priority: Minor > Attachments: > 0001-Use-separate-netty-depencencies-instead-of-netty-all.patch, > 0001-with-binaries.patch > > > netty-all is a jar that bundles all the individual netty dependencies for > convenience together for people trying out netty to get started quickly. > Serious projects like Cassandra should never ever ever use it, since it's a > recipe for classpath disasters. 
> To illustrate, I'm running Cassandra embedded in an app, and I get this error: > {noformat} > [JVM-1] java.lang.NoSuchMethodError: > io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter; > [JVM-1] at io.netty.buffer.PoolArena.(PoolArena.java:64) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PoolArena$HeapArena.(PoolArena.java:593) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PooledByteBufAllocator.(PooledByteBufAllocator.java:179) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PooledByteBufAllocator.(PooledByteBufAllocator.java:153) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PooledByteBufAllocator.(PooledByteBufAllocator.java:145) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PooledByteBufAllocator.(PooledByteBufAllocator.java:128) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > org.apache.cassandra.transport.CBUtil.(CBUtil.java:56) > ~[cassandra-all-3.0.0.jar:3.0.0] > [JVM-1] at org.apache.cassandra.transport.Server.start(Server.java:134) > ~[cassandra-all-3.0.0.jar:3.0.0] > {noformat} > {{PlatformDependent}} comes from netty-common, of which version 4.0.33 is on > the classpath, but it's also provided by netty-all, which has version 4.0.23 > brought in by cassandra. By a fluke of classpath ordering, the classloader > has loaded the netty buffer classes from netty-buffer 4.0.33, but the > PlatformDependent class from netty-all 4.0.23, and these two versions are not > binary compatible, hence the linkage error. 
> Essentially to avoid these problems in serious projects, anyone that ever > brings in cassandra is going to have to exclude the netty dependency from it, > which is error prone, and when you get it wrong, due to the nature of > classpath ordering bugs, it might not be till you deploy to production that > you actually find out there's a problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
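One quick way to confirm a collision like the {{NoSuchMethodError}} above is to ask the JVM which location each suspect class was actually loaded from; with both {{netty-all}} and an individual netty jar on the classpath, the mismatched sources show up immediately. A small diagnostic sketch (hypothetical class name, standard reflection APIs only):

```java
import java.security.CodeSource;

// Diagnostic sketch: print where a class was loaded from. Classes loaded by
// the bootstrap class loader report a null CodeSource.
public class WhichJarSketch {
    static String sourceOf(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // In the scenario above one would pass e.g.
        // Class.forName("io.netty.util.internal.PlatformDependent")
        System.out.println(sourceOf(WhichJarSketch.class));
    }
}
```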
[jira] [Updated] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-11515: Status: Ready to Commit (was: Patch Available) > C* won't launch with whitespace in path on Windows > -- > > Key: CASSANDRA-11515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Trivial > Labels: windows > Fix For: 2.2.x, 3.0.x, 3.x > > Attachments: fixWhiteSpace.patch > > > In a directory named 'test space', I see the following on launch: > {noformat} > Error: Could not find or load main class space\cassandra.logs.gc.log > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228985#comment-15228985 ] Paulo Motta commented on CASSANDRA-11515: - The patch did not apply, but applied manually and works even with ccm. You'll probably need to recreate your ccm cluster after it. So, +1. > C* won't launch with whitespace in path on Windows > -- > > Key: CASSANDRA-11515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Trivial > Labels: windows > Fix For: 2.2.x, 3.0.x, 3.x > > Attachments: fixWhiteSpace.patch > > > In a directory named 'test space', I see the following on launch: > {noformat} > Error: Could not find or load main class space\cassandra.logs.gc.log > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10984) Cassandra should not depend on netty-all
[ https://issues.apache.org/jira/browse/CASSANDRA-10984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228978#comment-15228978 ] Alex Petrov commented on CASSANDRA-10984: - I mostly meant that if there's a class name collision, using {{netty-all}} would, quoting the OP, "put you at grace of classloader", since {{netty-all}} bundles all deps, while using individual deps allows user overriding (assuming APIs are kept intact between the C* and dependent versions). But I agree about the effort... > Cassandra should not depend on netty-all > > > Key: CASSANDRA-10984 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10984 > Project: Cassandra > Issue Type: Improvement >Reporter: James Roper >Assignee: Alex Petrov >Priority: Minor > Attachments: > 0001-Use-separate-netty-depencencies-instead-of-netty-all.patch, > 0001-with-binaries.patch > > > netty-all is a jar that bundles all the individual netty dependencies for > convenience together for people trying out netty to get started quickly. > Serious projects like Cassandra should never ever ever use it, since it's a > recipe for classpath disasters. 
> To illustrate, I'm running Cassandra embedded in an app, and I get this error: > {noformat} > [JVM-1] java.lang.NoSuchMethodError: > io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter; > [JVM-1] at io.netty.buffer.PoolArena.(PoolArena.java:64) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PoolArena$HeapArena.(PoolArena.java:593) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PooledByteBufAllocator.(PooledByteBufAllocator.java:179) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PooledByteBufAllocator.(PooledByteBufAllocator.java:153) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PooledByteBufAllocator.(PooledByteBufAllocator.java:145) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > io.netty.buffer.PooledByteBufAllocator.(PooledByteBufAllocator.java:128) > ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final] > [JVM-1] at > org.apache.cassandra.transport.CBUtil.(CBUtil.java:56) > ~[cassandra-all-3.0.0.jar:3.0.0] > [JVM-1] at org.apache.cassandra.transport.Server.start(Server.java:134) > ~[cassandra-all-3.0.0.jar:3.0.0] > {noformat} > {{PlatformDependent}} comes from netty-common, of which version 4.0.33 is on > the classpath, but it's also provided by netty-all, which has version 4.0.23 > brought in by cassandra. By a fluke of classpath ordering, the classloader > has loaded the netty buffer classes from netty-buffer 4.0.33, but the > PlatformDependent class from netty-all 4.0.23, and these two versions are not > binary compatible, hence the linkage error. 
> Essentially to avoid these problems in serious projects, anyone that ever > brings in cassandra is going to have to exclude the netty dependency from it, > which is error prone, and when you get it wrong, due to the nature of > classpath ordering bugs, it might not be till you deploy to production that > you actually find out there's a problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11514) trunk compaction performance regression
[ https://issues.apache.org/jira/browse/CASSANDRA-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228948#comment-15228948 ] Michael Shuler commented on CASSANDRA-11514: Automatically on cstar_perf, unfortunately not. It may be possible by feeding jobs specific git SHAs. > trunk compaction performance regression > --- > > Key: CASSANDRA-11514 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11514 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: cstar_perf >Reporter: Michael Shuler > Labels: performance > Fix For: 3.x > > Attachments: trunk-compaction_dtcs-op_rate.png, > trunk-compaction_lcs-op_rate.png > > > It appears that a commit between Mar 29-30 has resulted in a drop in > compaction performance. I attempted to get a log list of commits to post > here, but > {noformat} > git log trunk@{2016-03-29}..trunk@{2016-03-31} > {noformat} > appears to be incomplete, since reading through {{git log}} I see netty and > och were upgraded during this time period. > !trunk-compaction_dtcs-op_rate.png! > !trunk-compaction_lcs-op_rate.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8494) incremental bootstrap
[ https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228943#comment-15228943 ] Jeremiah Jordan commented on CASSANDRA-8494: Need to be careful here. We got rid of taketoken in CASSANDRA-7601 because there were a lot of pointy things with it that caused issues, and this ticket wants to implement it again :). > incremental bootstrap > - > > Key: CASSANDRA-8494 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8494 > Project: Cassandra > Issue Type: New Feature >Reporter: Jon Haddad >Assignee: Yuki Morishita >Priority: Minor > Labels: dense-storage > Fix For: 3.x > > > Current bootstrapping involves (to my knowledge) picking tokens and streaming > data before the node is available for requests. This can be problematic with > "fat nodes", since it may require 20TB of data to be streamed over before the > machine can be useful. This can result in a massive window of time before > the machine can do anything useful. > As a potential approach to mitigate the huge window of time before a node is > available, I suggest modifying the bootstrap process to only acquire a single > initial token before being marked UP. This would likely be a configuration > parameter "incremental_bootstrap" or something similar. > After the node is bootstrapped with this one token, it could go into UP > state, and could then acquire additional tokens (one or a handful at a time), > which would be streamed over while the node is active and serving requests. > The benefit here is that with the default 256 tokens a node could become an > active part of the cluster with less than 1% of it's final data streamed over. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11514) trunk compaction performance regression
[ https://issues.apache.org/jira/browse/CASSANDRA-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228935#comment-15228935 ] Sylvain Lebresne commented on CASSANDRA-11514: -- Would it be possible to bisect the culprit? > trunk compaction performance regression > --- > > Key: CASSANDRA-11514 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11514 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: cstar_perf >Reporter: Michael Shuler > Labels: performance > Fix For: 3.x > > Attachments: trunk-compaction_dtcs-op_rate.png, > trunk-compaction_lcs-op_rate.png > > > It appears that a commit between Mar 29-30 has resulted in a drop in > compaction performance. I attempted to get a log list of commits to post > here, but > {noformat} > git log trunk@{2016-03-29}..trunk@{2016-03-31} > {noformat} > appears to be incomplete, since reading through {{git log}} I see netty and > och were upgraded during this time period. > !trunk-compaction_dtcs-op_rate.png! > !trunk-compaction_lcs-op_rate.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
[ https://issues.apache.org/jira/browse/CASSANDRA-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228936#comment-15228936 ] Ariel Weisberg commented on CASSANDRA-11517: |[trunk code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-11517-trunk?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-11517-trunk-dtest/1/]| Not proof of any real performance benefit in context, but the unit test runs in 250 milliseconds with the CAS loop and 1.4 seconds without the CAS loop. > o.a.c.utils.UUIDGen could handle contention better > -- > > Key: CASSANDRA-11517 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Minor > Fix For: 3.x > > > I noticed this profiling a query handler implementation that uses UUIDGen to > get handles to track queries for logging purposes. > Under contention threads are being unscheduled instead of spinning until the > lock is available. I would have expected intrinsic locks to be able to adapt > to this based on profiling information. > Either way it's seems pretty straightforward to rewrite this to use a CAS > loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
[ https://issues.apache.org/jira/browse/CASSANDRA-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-11517: --- Description: I noticed this profiling a query handler implementation that uses UUIDGen to get handles to track queries for logging purposes. Under contention threads are being unscheduled instead of spinning until the lock is available. I would have expected intrinsic locks to be able to adapt to this based on profiling information. Either way it's seems pretty straightforward to rewrite this to use a CAS loop and test that it generally produces unique values. was: I noticed this profiling a query handler implementation that uses UUIDGen to get handles to track queries for logging purposes. Under contention threads are being unscheduled instead of spinning until the lock is available. I would have expected intrinsic locks to be able to adapt to this based on profiling information. Either way it's pretty seems straightforward to rewrite this to use a CAS loop and test that it generally produces unique values. > o.a.c.utils.UUIDGen could handle contention better > -- > > Key: CASSANDRA-11517 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Minor > Fix For: 3.x > > > I noticed this profiling a query handler implementation that uses UUIDGen to > get handles to track queries for logging purposes. > Under contention threads are being unscheduled instead of spinning until the > lock is available. I would have expected intrinsic locks to be able to adapt > to this based on profiling information. > Either way it's seems pretty straightforward to rewrite this to use a CAS > loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11518) o.a.c.utils.UUIDGen clock generation is not very high in entropy
Ariel Weisberg created CASSANDRA-11518: -- Summary: o.a.c.utils.UUIDGen clock generation is not very high in entropy Key: CASSANDRA-11518 URL: https://issues.apache.org/jira/browse/CASSANDRA-11518 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Ariel Weisberg Assignee: Ariel Weisberg Priority: Trivial Fix For: 3.x makeClockSeqAndNode uses {{java.util.Random}} to generate the clock. {{Random}} only has 48-bits of internal state so it's not going to generate the best bits for clock and in addition to that it uses a collision prone seed that sort of defeats the purpose of clock sequence. A better approach to get the most out of those 14-bits would be to use {{SecureRandom}} with something like SHA1PRNG. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
Ariel Weisberg created CASSANDRA-11517: -- Summary: o.a.c.utils.UUIDGen could handle contention better Key: CASSANDRA-11517 URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Ariel Weisberg Assignee: Ariel Weisberg Priority: Minor Fix For: 3.x I noticed this profiling a query handler implementation that uses UUIDGen to get handles to track queries for logging purposes. Under contention threads are being unscheduled instead of spinning until the lock is available. I would have expected intrinsic locks to be able to adapt to this based on profiling information. Either way it's pretty seems straightforward to rewrite this to use a CAS loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11516) Make max number of streams configurable
Sebastian Estevez created CASSANDRA-11516: - Summary: Make max number of streams configurable Key: CASSANDRA-11516 URL: https://issues.apache.org/jira/browse/CASSANDRA-11516 Project: Cassandra Issue Type: New Feature Reporter: Sebastian Estevez Today we default to num cores. In large boxes (40 cores, etc.), this is suboptimal as it can generate huge amounts of garbage that GC can't keep up with. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-11515: Status: Patch Available (was: Open) > C* won't launch with whitespace in path on Windows > -- > > Key: CASSANDRA-11515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Trivial > Labels: windows > Fix For: 2.2.x, 3.0.x, 3.x > > Attachments: fixWhiteSpace.patch > > > In a directory named 'test space', I see the following on launch: > {noformat} > Error: Could not find or load main class space\cassandra.logs.gc.log > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228799#comment-15228799 ] Joshua McKenzie commented on CASSANDRA-11515: - Trivial patch attached. ccm still fails after applying this so I'll deal with that separately. > C* won't launch with whitespace in path on Windows > -- > > Key: CASSANDRA-11515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Trivial > Labels: windows > Fix For: 2.2.x, 3.0.x, 3.x > > Attachments: fixWhiteSpace.patch > > > In a directory named 'test space', I see the following on launch: > {noformat} > Error: Could not find or load main class space\cassandra.logs.gc.log > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-11515: Priority: Trivial (was: Major) > C* won't launch with whitespace in path on Windows > -- > > Key: CASSANDRA-11515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Trivial > Labels: windows > Fix For: 2.2.x, 3.0.x, 3.x > > Attachments: fixWhiteSpace.patch > > > In a directory named 'test space', I see the following on launch: > {noformat} > Error: Could not find or load main class space\cassandra.logs.gc.log > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11515) C* won't launch with whitespace in path on Windows
Joshua McKenzie created CASSANDRA-11515: --- Summary: C* won't launch with whitespace in path on Windows Key: CASSANDRA-11515 URL: https://issues.apache.org/jira/browse/CASSANDRA-11515 Project: Cassandra Issue Type: Bug Reporter: Joshua McKenzie Assignee: Joshua McKenzie Fix For: 2.2.x, 3.0.x, 3.x Attachments: fixWhiteSpace.patch In a directory named 'test space', I see the following on launch: {noformat} Error: Could not find or load main class space\cassandra.logs.gc.log {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11514) trunk compaction performance regression
[ https://issues.apache.org/jira/browse/CASSANDRA-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-11514: --- Description: It appears that a commit between Mar 29-30 has resulted in a drop in compaction performance. I attempted to get a log list of commits to post here, but {noformat} git log trunk@{2016-03-29}..trunk@{2016-03-31} {noformat} appears to be incomplete, since reading through {{git log}} I see netty and och were upgraded during this time period. !trunk-compaction_dtcs-op_rate.png! !trunk-compaction_lcs-op_rate.png! was: It appears that a commit between Mar 29-30 has resulted in a drop in compaction performance. I attempted to get a log list of commits to post here, but {noformat} git log trunk@{2016-03-29}..trunk@{2016-03-31} {noformat} appears to be incomplete, since reading through {{git log}} I see netty and och were upgraded during this time period. > trunk compaction performance regression > --- > > Key: CASSANDRA-11514 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11514 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: cstar_perf >Reporter: Michael Shuler > Labels: performance > Fix For: 3.x > > Attachments: trunk-compaction_dtcs-op_rate.png, > trunk-compaction_lcs-op_rate.png > > > It appears that a commit between Mar 29-30 has resulted in a drop in > compaction performance. I attempted to get a log list of commits to post > here, but > {noformat} > git log trunk@{2016-03-29}..trunk@{2016-03-31} > {noformat} > appears to be incomplete, since reading through {{git log}} I see netty and > och were upgraded during this time period. > !trunk-compaction_dtcs-op_rate.png! > !trunk-compaction_lcs-op_rate.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11514) trunk compaction performance regression
[ https://issues.apache.org/jira/browse/CASSANDRA-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-11514: --- Description: It appears that a commit between Mar 29-30 has resulted in a drop in compaction performance. I attempted to get a log list of commits to post here, but {noformat} git log trunk@{2016-03-29}..trunk@{2016-03-31} {noformat} appears to be incomplete, since reading through {{git log}} I see netty and och were upgraded during this time period. was: It appears that a commit between Mar 29-30 has resulted in a drop in compaction performance. I attempted to get a log list of commits to post here, but {noformat} git log trunk@{2016-03-29}..trunk@{2016-03-31} {noformat} appears to be incomplete, since reading through {{git log}} I see netty and och were upgraded during this time period. > trunk compaction performance regression > --- > > Key: CASSANDRA-11514 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11514 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: cstar_perf >Reporter: Michael Shuler > Labels: performance > Fix For: 3.x > > Attachments: trunk-compaction_dtcs-op_rate.png, > trunk-compaction_lcs-op_rate.png > > > It appears that a commit between Mar 29-30 has resulted in a drop in > compaction performance. I attempted to get a log list of commits to post > here, but > {noformat} > git log trunk@{2016-03-29}..trunk@{2016-03-31} > {noformat} > appears to be incomplete, since reading through {{git log}} I see netty and > och were upgraded during this time period. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11514) trunk compaction performance regression
Michael Shuler created CASSANDRA-11514: -- Summary: trunk compaction performance regression Key: CASSANDRA-11514 URL: https://issues.apache.org/jira/browse/CASSANDRA-11514 Project: Cassandra Issue Type: Bug Components: Compaction Environment: cstar_perf Reporter: Michael Shuler Fix For: 3.x Attachments: trunk-compaction_dtcs-op_rate.png, trunk-compaction_lcs-op_rate.png It appears that a commit between Mar 29-30 has resulted in a drop in compaction performance. I attempted to get a log list of commits to post here, but {noformat} git log trunk@{2016-03-29}..trunk@{2016-03-31} {noformat} appears to be incomplete, since reading through {{git log}} I see netty and och were upgraded during this time period. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11513) Result set is not unique on primary key (cql)
[ https://issues.apache.org/jira/browse/CASSANDRA-11513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianshi Wang updated CASSANDRA-11513: - Description: [cqlsh 5.0.1 | Cassandra 3.4 | CQL spec 3.4.0 | Native protocol v4] Run followings, {code} drop table if exists test0; CREATE TABLE test0 ( pk int, a int, b text, s text static, PRIMARY KEY (pk, a) ); insert into test0 (pk,a,b,s) values (0,1,'b1','hello b1'); insert into test0 (pk,a,b,s) values (0,2,'b2','hello b2'); insert into test0 (pk,a,b,s) values (0,3,'b3','hello b3'); create index on test0 (b); insert into test0 (pk,a,b,s) values (0,2,'b2 again','b2 again'); {code} Now select one record based on primary key, we got all three records. {code} cqlsh:ops> select * from test0 where pk=0 and a=2; pk | a | s| b +---+--+-- 0 | 1 | b2 again | b1 0 | 2 | b2 again | b2 again 0 | 3 | b2 again | b3 {code} {code} cqlsh:ops> desc test0; CREATE TABLE ops.test0 ( pk int, a int, b text, s text static, PRIMARY KEY (pk, a) ) WITH CLUSTERING ORDER BY (a ASC) AND bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'} AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'} AND crc_check_chance = 1.0 AND dclocal_read_repair_chance = 0.1 AND default_time_to_live = 0 AND gc_grace_seconds = 864000 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99PERCENTILE'; CREATE INDEX test0_b_idx ON ops.test0 (b); {code} was: [cqlsh 5.0.1 | Cassandra 3.4 | CQL spec 3.4.0 | Native protocol v4] Run followings, {code} drop table if exists test0; CREATE TABLE test0 ( pk int, a int, b text, s text static, PRIMARY KEY (pk, a) ); insert into test0 (pk,a,b,s) values (0,1,'b1','hello b1'); insert 
into test0 (pk,a,b,s) values (0,2,'b2','hello b2'); insert into test0 (pk,a,b,s) values (0,3,'b3','hello b3'); create index on test0 (b); insert into test0 (pk,a,b,s) values (0,2,'b2 again','b2 again'); {code} Now select one record based on primary key, we got all three records. {code} cqlsh:ops> select * from test0 where pk=0 and a=2; pk | a | s| b +---+--+-- 0 | 1 | b2 again | b1 0 | 2 | b2 again | b2 again 0 | 3 | b2 again | b3 {code} > Result set is not unique on primary key (cql) > -- > > Key: CASSANDRA-11513 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11513 > Project: Cassandra > Issue Type: Bug >Reporter: Tianshi Wang > > [cqlsh 5.0.1 | Cassandra 3.4 | CQL spec 3.4.0 | Native protocol v4] > Run followings, > {code} > drop table if exists test0; > CREATE TABLE test0 ( > pk int, > a int, > b text, > s text static, > PRIMARY KEY (pk, a) > ); > insert into test0 (pk,a,b,s) values (0,1,'b1','hello b1'); > insert into test0 (pk,a,b,s) values (0,2,'b2','hello b2'); > insert into test0 (pk,a,b,s) values (0,3,'b3','hello b3'); > create index on test0 (b); > insert into test0 (pk,a,b,s) values (0,2,'b2 again','b2 again'); > {code} > Now select one record based on primary key, we got all three records. 
> {code} > cqlsh:ops> select * from test0 where pk=0 and a=2; > pk | a | s| b > +---+--+-- > 0 | 1 | b2 again | b1 > 0 | 2 | b2 again | b2 again > 0 | 3 | b2 again | b3 > {code} > {code} > cqlsh:ops> desc test0; > CREATE TABLE ops.test0 ( > pk int, > a int, > b text, > s text static, > PRIMARY KEY (pk, a) > ) WITH CLUSTERING ORDER BY (a ASC) > AND bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > CREATE INDEX test0_b_idx ON ops.test0 (b); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11513) Result set is not unique on primary key (cql)
Tianshi Wang created CASSANDRA-11513: Summary: Result set is not unique on primary key (cql) Key: CASSANDRA-11513 URL: https://issues.apache.org/jira/browse/CASSANDRA-11513 Project: Cassandra Issue Type: Bug Reporter: Tianshi Wang [cqlsh 5.0.1 | Cassandra 3.4 | CQL spec 3.4.0 | Native protocol v4]
Run the following:
{code}
drop table if exists test0;
CREATE TABLE test0 (
  pk int,
  a int,
  b text,
  s text static,
  PRIMARY KEY (pk, a)
);
insert into test0 (pk,a,b,s) values (0,1,'b1','hello b1');
insert into test0 (pk,a,b,s) values (0,2,'b2','hello b2');
insert into test0 (pk,a,b,s) values (0,3,'b3','hello b3');
create index on test0 (b);
insert into test0 (pk,a,b,s) values (0,2,'b2 again','b2 again');
{code}
Now select one record based on the primary key; we get all three records:
{code}
cqlsh:ops> select * from test0 where pk=0 and a=2;

 pk | a | s        | b
----+---+----------+----------
  0 | 1 | b2 again | b1
  0 | 2 | b2 again | b2 again
  0 | 3 | b2 again | b3
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
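By CQL semantics, a SELECT restricted by the complete primary key (pk, a) identifies at most one row, so the three-row result reported above is wrong. A toy reference model of the expected behavior (illustrative only, not Cassandra code; the static column s is shared by all rows of the partition, which is why it reads 'b2 again' everywhere after the last insert):

```python
# Rows of partition pk=0 after the inserts above; 's' is static, so the
# final insert's value is visible on every row of the partition.
rows = [
    {'pk': 0, 'a': 1, 'b': 'b1',       's': 'b2 again'},
    {'pk': 0, 'a': 2, 'b': 'b2 again', 's': 'b2 again'},
    {'pk': 0, 'a': 3, 'b': 'b3',       's': 'b2 again'},
]

def select_by_primary_key(rows, pk, a):
    # A full-primary-key restriction must match at most one row,
    # since (pk, a) uniquely identifies a row.
    return [r for r in rows if r['pk'] == pk and r['a'] == a]

result = select_by_primary_key(rows, 0, 2)
# Expected: exactly one row, not the three rows the report shows.
```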
[jira] [Commented] (CASSANDRA-10041) "timeout during write query at consistency ONE" when updating counter at consistency QUORUM and 2 of 3 nodes alive
[ https://issues.apache.org/jira/browse/CASSANDRA-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228659#comment-15228659 ] Joel Knighton commented on CASSANDRA-10041: --- I should have probably included more detail in my initial post. Any node in a Cassandra cluster can act as coordinator, not only replicas. I assume what you mean is that you are using token-awareness in the driver in an attempt to route your query to a coordinator that is also a replica. Even in situations where the coordinator is a replica, the coordinator may choose a leader other than itself for the counter mutation. You should not interpret these timeouts as an indication that you are not routing your queries from the driver optimally. > "timeout during write query at consistency ONE" when updating counter at > consistency QUORUM and 2 of 3 nodes alive > -- > > Key: CASSANDRA-10041 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10041 > Project: Cassandra > Issue Type: Bug > Environment: centos 6.6 server, java version "1.8.0_45", cassandra > 2.1.8, 3 machines, keyspace with replication factor 3 >Reporter: Anton Lebedevich > Fix For: 2.1.x > > > Test scenario is: kill -9 one node, wait 60 seconds, start it back, wait till > it becomes available, wait 120 seconds (during that time all 3 nodes are up), > repeat with the next node. Application reads from one table and updates > counters in another table with consistency QUORUM. 
When one node out of 3 is > killed application logs this exception for several seconds: > {noformat} > Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: > Cassandra timeout during write query at consistency ONE (1 replica were > required but only 0 acknowledged the write) > at > com.datastax.driver.core.Responses$Error$1.decode(Responses.java:57) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:204) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:195) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) > [io.netty.netty-codec-4.0.27.Final.jar:4.0.27.Final] > ... 13 common frames omitted > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10869) paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions dtest fails on 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-10869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228654#comment-15228654 ] Michael Shuler commented on CASSANDRA-10869: This test failed on 3.0.5-tentative tag novnode_dtest run, but manually checking the logs, I find the string that we're looking for: https://cassci.datastax.com/job/cassandra-3.0.5-tentative_novnode_dtest/lastCompletedBuild/testReport/paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/ {noformat} Error Message Cannot find tombstone failure threshold error in log >> begin captured logging << dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-oiTQzP dtest: DEBUG: Custom init_config not found. Setting defaults. dtest: DEBUG: Done setting configuration options: { 'num_tokens': None, 'phi_convict_threshold': 5, 'range_request_timeout_in_ms': 1, 'read_request_timeout_in_ms': 1, 'request_timeout_in_ms': 1, 'truncate_request_timeout_in_ms': 1, 'write_request_timeout_in_ms': 1} - >> end captured logging << - Stacktrace File "/usr/lib/python2.7/unittest/case.py", line 329, in run testMethod() File "/home/automaton/cassandra-dtest/paging_test.py", line 1817, in test_failure_threshold_deletions self.assertTrue(failure, "Cannot find tombstone failure threshold error in log") File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue raise self.failureException(msg) "Cannot find tombstone failure threshold error in log\n >> begin captured logging << \ndtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-oiTQzP\ndtest: DEBUG: Custom init_config not found. 
Setting defaults.\ndtest: DEBUG: Done setting configuration options:\n{ 'num_tokens': None,\n'phi_convict_threshold': 5,\n 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 1}\n- >> end captured logging << -" {noformat} {noformat} mshuler@hana:~/tmp/logs$ grep -r "Scanned over.* tombstones during query.* query aborted" 1459899948616_paging_test.TestPagingWithDeletions.test_failure_threshold_deletions/ 1459899948616_paging_test.TestPagingWithDeletions.test_failure_threshold_deletions/node3_debug.log:ERROR [SharedPool-Worker-1] 2016-04-05 23:45:48,582 MessageDeliveryTask.java:77 - Scanned over 501 tombstones during query 'SELECT * FROM test_paging_size.paging_test WHERE token(id) <= -3074457345618258603 LIMIT 1000' (last scanned row partion key was ((1), 2a33f004-6073-44a4-878c-59a93eca7420)); query aborted 1459899948616_paging_test.TestPagingWithDeletions.test_failure_threshold_deletions/node3.log:ERROR [SharedPool-Worker-1] 2016-04-05 23:45:48,582 MessageDeliveryTask.java:77 - Scanned over 501 tombstones during query 'SELECT * FROM test_paging_size.paging_test WHERE token(id) <= -3074457345618258603 LIMIT 1000' (last scanned row partion key was ((1), 2a33f004-6073-44a4-878c-59a93eca7420)); query aborted 1459899948616_paging_test.TestPagingWithDeletions.test_failure_threshold_deletions/node2.log:ERROR [SharedPool-Worker-2] 2016-04-05 23:45:48,562 MessageDeliveryTask.java:77 - Scanned over 501 tombstones during query 'SELECT * FROM test_paging_size.paging_test WHERE token(id) <= -3074457345618258603 LIMIT 1000' (last scanned row partion key was ((1), 2a33f004-6073-44a4-878c-59a93eca7420)); query aborted 1459899948616_paging_test.TestPagingWithDeletions.test_failure_threshold_deletions/node2_debug.log:ERROR [SharedPool-Worker-2] 2016-04-05 23:45:48,562 MessageDeliveryTask.java:77 - Scanned over 501 tombstones during query 'SELECT * FROM 
test_paging_size.paging_test WHERE token(id) <= -3074457345618258603 LIMIT 1000' (last scanned row partion key was ((1), 2a33f004-6073-44a4-878c-59a93eca7420)); query aborted {noformat} > paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions dtest > fails on 2.1 > -- > > Key: CASSANDRA-10869 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10869 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey > Labels: dtest > Fix For: 2.1.x > > > This test is failing hard on 2.1. Here is its history on the JDK8 job for > cassandra-2.1: > http://cassci.datastax.com/job/cassandra-2.1_dtest_jdk8/lastCompletedBuild/testReport/paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/ > and on the JDK7 job: >
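The dtest's assertion amounts to scanning the node logs for the tombstone-abort error, which the manual grep above shows is actually present. A sketch of such a check (assumed shape, not the dtest's actual code), using the error line quoted above:

```python
import re

# Pattern mirroring the grep shown above: the tombstone failure threshold
# abort message logged by MessageDeliveryTask.
TOMBSTONE_ABORT = re.compile(
    r"Scanned over \d+ tombstones during query .*query aborted")

def has_tombstone_failure(log_lines):
    """True if any log line records a tombstone-threshold query abort."""
    return any(TOMBSTONE_ABORT.search(line) for line in log_lines)

sample = [
    "INFO  [main] 2016-04-05 23:45:00,000 CassandraDaemon.java - starting up",
    "ERROR [SharedPool-Worker-1] 2016-04-05 23:45:48,582 "
    "MessageDeliveryTask.java:77 - Scanned over 501 tombstones during query "
    "'SELECT * FROM test_paging_size.paging_test WHERE token(id) <= "
    "-3074457345618258603 LIMIT 1000'; query aborted",
]
```

If the logs contain the message but the test still fails, the problem is likely in how (or when) the dtest reads the logs, not in the server behavior.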
[jira] [Comment Edited] (CASSANDRA-11488) Bug or not?: coordinator using SimpleSnitch may query other nodes for copies of local data
[ https://issues.apache.org/jira/browse/CASSANDRA-11488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228643#comment-15228643 ] Jeremy Hanna edited comment on CASSANDRA-11488 at 4/6/16 5:00 PM: -- Giving this some more thought, I think we need to document the distinction. I didn't realize that the dynamic snitch would route away from local nodes when using TokenAware load balancing for example. It does make sense if that local node is really behind on LCS or has some heavier operation going on. I just don't think people expect TokenAware would get routed to another replica necessarily. In any case I agree that documentation is probably the best outcome for now unless there is data that shows that this is insufficient. was (Author: jeromatron): Giving this some more though, I think we need to document the distinction. I didn't realize that the dynamic snitch would route away from local nodes when using TokenAware load balancing for example. It does make sense if that local node is really behind on LCS or has some heavier operation going on. I just don't think people expect TokenAware would get routed to another replica necessarily. In any case I agree that documentation is probably the best outcome for now unless there is data that shows that this is insufficient. 
> Bug or not?: coordinator using SimpleSnitch may query other nodes for copies > of local data > --- > > Key: CASSANDRA-11488 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11488 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Jim Witschey >Assignee: Stefania >Priority: Minor > Labels: doc-impacting > > As [~Stefania] explains [in this JIRA > comment|https://issues.apache.org/jira/browse/CASSANDRA-11225?focusedCommentId=15221059=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15221059], > {{SimpleSnitch}} does not implement > {{IEndpointSnitch.sortByProximity(localhost, liveendpoints)}}, so a query for > data on the coordinator may query other nodes. That seems like unnecessary > work to me, and on that note, Stefania wonders [in this JIRA > comment|https://issues.apache.org/jira/browse/CASSANDRA-11225?focusedCommentId=15223598=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15223598] > - should this be considered a bug? > Stefania, I'm assigning you here -- could you find the right people to > involve in this discussion? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11488) Bug or not?: coordinator using SimpleSnitch may query other nodes for copies of local data
[ https://issues.apache.org/jira/browse/CASSANDRA-11488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228643#comment-15228643 ] Jeremy Hanna commented on CASSANDRA-11488: -- Giving this some more thought, I think we need to document the distinction. I didn't realize that the dynamic snitch would route away from local nodes when using TokenAware load balancing for example. It does make sense if that local node is really behind on LCS or has some heavier operation going on. I just don't think people expect TokenAware would get routed to another replica necessarily. In any case I agree that documentation is probably the best outcome for now unless there is data that shows that this is insufficient. > Bug or not?: coordinator using SimpleSnitch may query other nodes for copies > of local data > --- > > Key: CASSANDRA-11488 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11488 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Jim Witschey >Assignee: Stefania >Priority: Minor > Labels: doc-impacting > > As [~Stefania] explains [in this JIRA > comment|https://issues.apache.org/jira/browse/CASSANDRA-11225?focusedCommentId=15221059=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15221059], > {{SimpleSnitch}} does not implement > {{IEndpointSnitch.sortByProximity(localhost, liveendpoints)}}, so a query for > data on the coordinator may query other nodes. That seems like unnecessary > work to me, and on that note, Stefania wonders [in this JIRA > comment|https://issues.apache.org/jira/browse/CASSANDRA-11225?focusedCommentId=15223598=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15223598] > - should this be considered a bug? > Stefania, I'm assigning you here -- could you find the right people to > involve in this discussion? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-10041) "timeout during write query at consistency ONE" when updating counter at consistency QUORUM and 2 of 3 nodes alive
[ https://issues.apache.org/jira/browse/CASSANDRA-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228625#comment-15228625 ] Rob Emery edited comment on CASSANDRA-10041 at 4/6/16 4:52 PM: --- Hi Joel, Am I reading this correctly in that this implies that the UPDATE query isn't being sent to a replica? Which would imply that we have misconfigured the partition key within our queries? Thanks, was (Author: re_weavers): Hi Joel, Am I reading this correctly in that this imply that the UPDATE query isn't being sent to a replica? Which would imply that we have misconfigured the partition key within our queries? Thanks, > "timeout during write query at consistency ONE" when updating counter at > consistency QUORUM and 2 of 3 nodes alive > -- > > Key: CASSANDRA-10041 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10041 > Project: Cassandra > Issue Type: Bug > Environment: centos 6.6 server, java version "1.8.0_45", cassandra > 2.1.8, 3 machines, keyspace with replication factor 3 >Reporter: Anton Lebedevich > Fix For: 2.1.x > > > Test scenario is: kill -9 one node, wait 60 seconds, start it back, wait till > it becomes available, wait 120 seconds (during that time all 3 nodes are up), > repeat with the next node. Application reads from one table and updates > counters in another table with consistency QUORUM. 
When one node out of 3 is > killed application logs this exception for several seconds: > {noformat} > Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: > Cassandra timeout during write query at consistency ONE (1 replica were > required but only 0 acknowledged the write) > at > com.datastax.driver.core.Responses$Error$1.decode(Responses.java:57) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:204) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:195) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) > [io.netty.netty-codec-4.0.27.Final.jar:4.0.27.Final] > ... 13 common frames omitted > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10041) "timeout during write query at consistency ONE" when updating counter at consistency QUORUM and 2 of 3 nodes alive
[ https://issues.apache.org/jira/browse/CASSANDRA-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228625#comment-15228625 ] Rob Emery commented on CASSANDRA-10041: --- Hi Joel, Am I reading this correctly in that this implies that the UPDATE query isn't being sent to a replica? Which would imply that we have misconfigured the partition key within our queries? Thanks, > "timeout during write query at consistency ONE" when updating counter at > consistency QUORUM and 2 of 3 nodes alive > -- > > Key: CASSANDRA-10041 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10041 > Project: Cassandra > Issue Type: Bug > Environment: centos 6.6 server, java version "1.8.0_45", cassandra > 2.1.8, 3 machines, keyspace with replication factor 3 >Reporter: Anton Lebedevich > Fix For: 2.1.x > > > Test scenario is: kill -9 one node, wait 60 seconds, start it back, wait till > it becomes available, wait 120 seconds (during that time all 3 nodes are up), > repeat with the next node. Application reads from one table and updates > counters in another table with consistency QUORUM.
When one node out of 3 is > killed application logs this exception for several seconds: > {noformat} > Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: > Cassandra timeout during write query at consistency ONE (1 replica were > required but only 0 acknowledged the write) > at > com.datastax.driver.core.Responses$Error$1.decode(Responses.java:57) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:204) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:195) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) > [io.netty.netty-codec-4.0.27.Final.jar:4.0.27.Final] > ... 13 common frames omitted > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-9766) Bootstrap outgoing streaming speeds are much slower than during repair
[ https://issues.apache.org/jira/browse/CASSANDRA-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani reassigned CASSANDRA-9766: - Assignee: T Jake Luciani > Bootstrap outgoing streaming speeds are much slower than during repair > -- > > Key: CASSANDRA-9766 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9766 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: Cassandra 2.1.2. more details in the pdf attached >Reporter: Alexei K >Assignee: T Jake Luciani > Labels: performance > Fix For: 2.1.x > > Attachments: problem.pdf > > > I have a cluster in Amazon cloud , its described in detail in the attachment. > What I've noticed is that we during bootstrap we never go above 12MB/sec > transmission speeds and also those speeds flat line almost like we're hitting > some sort of a limit ( this remains true for other tests that I've ran) > however during the repair we see much higher,variable sending rates. I've > provided network charts in the attachment as well . Is there an explanation > for this? Is something wrong with my configuration, or is it a possible bug? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (CASSANDRA-9766) Bootstrap outgoing streaming speeds are much slower than during repair
[ https://issues.apache.org/jira/browse/CASSANDRA-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-9766: -- Comment: was deleted (was: Perhaps we add a decompression pool so we can offload the work to many cores) > Bootstrap outgoing streaming speeds are much slower than during repair > -- > > Key: CASSANDRA-9766 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9766 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: Cassandra 2.1.2. more details in the pdf attached >Reporter: Alexei K > Labels: performance > Fix For: 2.1.x > > Attachments: problem.pdf > > > I have a cluster in Amazon cloud , its described in detail in the attachment. > What I've noticed is that we during bootstrap we never go above 12MB/sec > transmission speeds and also those speeds flat line almost like we're hitting > some sort of a limit ( this remains true for other tests that I've ran) > however during the repair we see much higher,variable sending rates. I've > provided network charts in the attachment as well . Is there an explanation > for this? Is something wrong with my configuration, or is it a possible bug? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10041) "timeout during write query at consistency ONE" when updating counter at consistency QUORUM and 2 of 3 nodes alive
[ https://issues.apache.org/jira/browse/CASSANDRA-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228591#comment-15228591 ] Joel Knighton commented on CASSANDRA-10041: --- I don't think this is cause for concern - as in [CASSANDRA-9620], this is a result of the write path for the type of mutation. When writing to a counter, a replica is selected as the leader for the mutation. If the leader is not the coordinator, we send this mutation to the leader with CL.ONE. It is a timeout on this that you're seeing. Since there's no clear reason to handle timeouts on a leader write/coordinator write differently, these are both WriteType COUNTER (as opposed to the BATCH/BATCH_LOG distinction). > "timeout during write query at consistency ONE" when updating counter at > consistency QUORUM and 2 of 3 nodes alive > -- > > Key: CASSANDRA-10041 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10041 > Project: Cassandra > Issue Type: Bug > Environment: centos 6.6 server, java version "1.8.0_45", cassandra > 2.1.8, 3 machines, keyspace with replication factor 3 >Reporter: Anton Lebedevich > Fix For: 2.1.x > > > Test scenario is: kill -9 one node, wait 60 seconds, start it back, wait till > it becomes available, wait 120 seconds (during that time all 3 nodes are up), > repeat with the next node. Application reads from one table and updates > counters in another table with consistency QUORUM.
When one node out of 3 is > killed application logs this exception for several seconds: > {noformat} > Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: > Cassandra timeout during write query at consistency ONE (1 replica were > required but only 0 acknowledged the write) > at > com.datastax.driver.core.Responses$Error$1.decode(Responses.java:57) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:204) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:195) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.6.jar:na] > at > io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) > [io.netty.netty-codec-4.0.27.Final.jar:4.0.27.Final] > ... 13 common frames omitted > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
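The counter routing Joel describes can be modeled roughly as follows. This is a toy sketch, not Cassandra's implementation: the real leader choice also weighs replica health, and may pick a leader other than the coordinator even when the coordinator is itself a replica.

```python
def choose_leader(coordinator, live_replicas):
    """Pick the leader replica for a counter mutation (simplified:
    prefer the coordinator when it happens to be a replica)."""
    if coordinator in live_replicas:
        return coordinator
    return live_replicas[0]

def hop_consistency(coordinator, live_replicas):
    """Consistency of the internal forwarding hop. A non-leader
    coordinator forwards the counter mutation to the leader at CL.ONE,
    so a timeout on that hop reports 'consistency ONE' to the client
    even though the client requested QUORUM."""
    leader = choose_leader(coordinator, live_replicas)
    if leader == coordinator:
        return ('COUNTER', None)   # applied locally, no forwarding hop
    return ('COUNTER', 'ONE')      # forwarded leader write at CL.ONE
```

This is why the client-side exception mentions consistency ONE: it describes the coordinator-to-leader hop, not the consistency level of the overall counter write.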
[jira] [Commented] (CASSANDRA-11488) Bug or not?: coordinator using SimpleSnitch may query other nodes for copies of local data
[ https://issues.apache.org/jira/browse/CASSANDRA-11488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228500#comment-15228500 ] Sylvain Lebresne commented on CASSANDRA-11488: -- bq. I think it's important to consider it Consider what? As I said, dynamic snitch is the default which means that the snitch is pretty much only used for determining the DC and rack of the nodes. > Bug or not?: coordinator using SimpleSnitch may query other nodes for copies > of local data > --- > > Key: CASSANDRA-11488 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11488 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Jim Witschey >Assignee: Stefania >Priority: Minor > Labels: doc-impacting > > As [~Stefania] explains [in this JIRA > comment|https://issues.apache.org/jira/browse/CASSANDRA-11225?focusedCommentId=15221059=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15221059], > {{SimpleSnitch}} does not implement > {{IEndpointSnitch.sortByProximity(localhost, liveendpoints)}}, so a query for > data on the coordinator may query other nodes. That seems like unnecessary > work to me, and on that note, Stefania wonders [in this JIRA > comment|https://issues.apache.org/jira/browse/CASSANDRA-11225?focusedCommentId=15223598=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15223598] > - should this be considered a bug? > Stefania, I'm assigning you here -- could you find the right people to > involve in this discussion? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11246) (windows) dtest failure in replace_address_test.TestReplaceAddress.replace_with_reset_resume_state_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11246:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failure in replace_address_test.TestReplaceAddress.replace_with_reset_resume_state_test
>
>                 Key: CASSANDRA-11246
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11246
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/165/testReport/replace_address_test/TestReplaceAddress/replace_with_reset_resume_state_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #165
> 2 flaps of this test in recent history; looks like a possible test issue, perhaps with invalid yaml at startup (somehow).
[jira] [Updated] (CASSANDRA-10916) TestGlobalRowKeyCache.functional_test fails on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-10916:
----------------------------------
    Labels: dtest windows  (was: dtest)

> TestGlobalRowKeyCache.functional_test fails on Windows
>
>                 Key: CASSANDRA-10916
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10916
>             Project: Cassandra
>          Issue Type: Sub-task
>            Reporter: Jim Witschey
>            Assignee: Joshua McKenzie
>              Labels: dtest, windows
>             Fix For: 3.0.x
>
> {{global_row_key_cache_test.py:TestGlobalRowKeyCache.functional_test}} fails hard on Windows when a node fails to start:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test/
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/140/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test_2/
> I have not dug much into the failure history, so I don't know how closely the failures are related.
[jira] [Updated] (CASSANDRA-11298) (windows) dtest failure in repair_tests.repair_test.TestRepairDataSystemTable.repair_table_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11298:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failure in repair_tests.repair_test.TestRepairDataSystemTable.repair_table_test
>
>                 Key: CASSANDRA-11298
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11298
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/191/testReport/repair_tests.repair_test/TestRepairDataSystemTable/repair_table_test
> Failed on CassCI build cassandra-3.0_dtest_win32 #191
> This is a singular new failure, but the error message looks suspicious and worth digging into.
[jira] [Updated] (CASSANDRA-11266) (windows) dtest failure in read_repair_test.TestReadRepair.alter_rf_and_run_read_repair_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11266:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failure in read_repair_test.TestReadRepair.alter_rf_and_run_read_repair_test
>
>                 Key: CASSANDRA-11266
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11266
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/140/testReport/read_repair_test/TestReadRepair/alter_rf_and_run_read_repair_test
> Failed on CassCI build cassandra-3.0_dtest_win32 #140
> Failing on every run; looks like it could be a test or Cassandra issue.
> {noformat}
> Couldn't identify initial replica
> {noformat}
[jira] [Updated] (CASSANDRA-10639) Commitlog compression test fails on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-10639:
----------------------------------
    Labels: dtest windows  (was: dtest)

> Commitlog compression test fails on Windows
>
>                 Key: CASSANDRA-10639
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10639
>             Project: Cassandra
>          Issue Type: Sub-task
>          Components: Local Write-Read Paths
>            Reporter: Jim Witschey
>            Assignee: Joshua McKenzie
>              Labels: dtest, windows
>             Fix For: 3.0.x
>
> {{commitlog_test.py:TestCommitLog.test_compression_error}} fails on Windows under CassCI. It fails in a number of different ways. Here, it looks like reading the CRC fails:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/commitlog_test/TestCommitLog/test_compression_error/
> Here, I believe it fails when trying to validate the CRC header:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/99/testReport/commitlog_test/TestCommitLog/test_compression_error/
> https://github.com/riptano/cassandra-dtest/blob/master/commitlog_test.py#L497
> Here's another failure where the header has a {{Q}} written in it instead of a closing brace:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/91/testReport/junit/commitlog_test/TestCommitLog/test_compression_error/
> https://github.com/riptano/cassandra-dtest/blob/master/commitlog_test.py#L513
> [~bdeggleston] Do I remember correctly that you wrote this test? Can you take this on?
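For reference, the kind of check the test exercises — writing a header followed by its checksum and validating it on read — can be sketched as follows. This uses zlib's CRC-32 in plain Python purely to illustrate the general technique; it is not Cassandra's actual commitlog segment layout:

```python
import struct
import zlib

def frame_header(payload: bytes) -> bytes:
    """Append a big-endian CRC-32 of the payload, log-segment style."""
    return payload + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF)

def read_header(framed: bytes) -> bytes:
    """Validate the trailing CRC-32 and return the payload, or fail loudly."""
    payload, (stored,) = framed[:-4], struct.unpack(">I", framed[-4:])
    if zlib.crc32(payload) & 0xFFFFFFFF != stored:
        raise ValueError("CRC mismatch: header is corrupt or truncated")
    return payload

header = frame_header(b'{"version": 6, "id": 42}')
assert read_header(header) == b'{"version": 6, "id": 42}'

# A single flipped byte (like the stray 'Q' above) fails the CRC check,
# which is the failure mode the test is asserting on.
corrupted = b"Q" + header[1:]
```

Calling `read_header(corrupted)` raises `ValueError`, so a corrupted or partially written header is detected rather than silently parsed.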
[jira] [Updated] (CASSANDRA-11251) (windows) dtest failure in putget_test.TestPutGet.non_local_read_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11251:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failure in putget_test.TestPutGet.non_local_read_test
>
>                 Key: CASSANDRA-11251
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11251
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/174/testReport/putget_test/TestPutGet/non_local_read_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #174
> Failing intermittently, error:
> {noformat}
> code=1500 [Replica(s) failed to execute write] message="Operation failed - received 1 responses and 1 failures" info={'failures': 1, 'received_responses': 1, 'required_responses': 2, 'consistency': 'QUORUM'}
> {noformat}
[jira] [Updated] (CASSANDRA-11234) (windows) dtest failure in largecolumn_test.TestLargeColumn.cleanup_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11234:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failure in largecolumn_test.TestLargeColumn.cleanup_test
>
>                 Key: CASSANDRA-11234
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11234
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/largecolumn_test/TestLargeColumn/cleanup_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #156
> Failing consistently. Looks like maybe a Python platform issue or something:
> {noformat}
> Expected output from nodetool gcstats starts with a header line with first column Interval
> {noformat}
[jira] [Updated] (CASSANDRA-11236) (windows) dtest failure in scrub_test.TestScrub.test_standalone_scrub
[ https://issues.apache.org/jira/browse/CASSANDRA-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11236:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failure in scrub_test.TestScrub.test_standalone_scrub
>
>                 Key: CASSANDRA-11236
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11236
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/scrub_test/TestScrub/test_standalone_scrub
> Failed on CassCI build cassandra-2.2_dtest_win32 #156
> Failing on every run on windows, with:
> {noformat}
> sstablescrub failed
> {noformat}
[jira] [Updated] (CASSANDRA-11252) (windows) dtest failure in read_repair_test.TestReadRepair.range_slice_query_with_tombstones_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11252:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failure in read_repair_test.TestReadRepair.range_slice_query_with_tombstones_test
>
>                 Key: CASSANDRA-11252
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11252
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/176/testReport/read_repair_test/TestReadRepair/range_slice_query_with_tombstones_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #176
> {noformat}
> Trace information was not available within 120.00 seconds. Consider raising Session.max_trace_wait.
> {noformat}
[jira] [Updated] (CASSANDRA-11249) (windows) dtest failure in paging_test.TestPagingData.test_paging_using_secondary_indexes
[ https://issues.apache.org/jira/browse/CASSANDRA-11249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11249:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failure in paging_test.TestPagingData.test_paging_using_secondary_indexes
>
>                 Key: CASSANDRA-11249
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11249
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/169/testReport/paging_test/TestPagingData/test_paging_using_secondary_indexes
> Failed on CassCI build cassandra-2.2_dtest_win32 #169
[jira] [Updated] (CASSANDRA-10915) netstats_test dtest fails on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-10915:
----------------------------------
    Labels: dtest windows  (was: dtest)

> netstats_test dtest fails on Windows
>
>                 Key: CASSANDRA-10915
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10915
>             Project: Cassandra
>          Issue Type: Sub-task
>            Reporter: Jim Witschey
>              Labels: dtest, windows
>             Fix For: 3.0.x
>
> jmx_test.py:TestJMX.netstats_test started failing hard on Windows about a month ago:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/140/testReport/junit/jmx_test/TestJMX/netstats_test/history/?start=25
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/jmx_test/TestJMX/netstats_test/history/
> It fails when it is unable to connect to a node via JMX. I don't know if this problem has any relationship to CASSANDRA-10913.
[jira] [Updated] (CASSANDRA-11281) (windows) dtest failures with permission issues on trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-11281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cathy Daw updated CASSANDRA-11281:
----------------------------------
    Labels: dtest windows  (was: dtest)

> (windows) dtest failures with permission issues on trunk
>
>                 Key: CASSANDRA-11281
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11281
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Russ Hatch
>            Assignee: DS Test Eng
>              Labels: dtest, windows
>
> example failure: http://cassci.datastax.com/job/trunk_dtest_win32/337/testReport/bootstrap_test/TestBootstrap/shutdown_wiped_node_cannot_join_test
> Failed on CassCI build trunk_dtest_win32 #337
> Failing tests with very similar error messages:
> * compaction_test.TestCompaction_with_DateTieredCompactionStrategy.compaction_strategy_switching_test
> * compaction_test.TestCompaction_with_LeveledCompactionStrategy.compaction_strategy_switching_test
> * bootstrap_test.TestBootstrap.shutdown_wiped_node_cannot_join_test
> * bootstrap_test.TestBootstrap.killed_wiped_node_cannot_join_test
> * bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_join_test
> * bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_gossip_to_single_seed_test
> * bootstrap_test.TestBootstrap.failed_bootstrap_wiped_node_can_join_test