[
https://issues.apache.org/jira/browse/CASSANDRA-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rok Doltar updated CASSANDRA-13770:
-----------------------------------
Description:
We are getting the following error:
{code}
DEBUG [Native-Transport-Requests-1] 2017-08-17 07:47:01,815 ReadCallback.java:132 - Failed; received 0 of 1 responses
WARN [ReadStage-2] 2017-08-17 07:47:01,816 AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread Thread[ReadStage-2,5,main]: {}
java.lang.AssertionError: Lower bound [INCL_START_BOUND(0028354338333835414433363737373137344544303642413442444246344544443932334538463946340000283836453642373436354546423435334544363636443236344644313935333032363338314542363200, ab570080-831f-11e7-a81f-417b646547c3, , 1x) ]is bigger than first returned value [Row: partition_key=0028354338333835414433363737373137344544303642413442444246344544443932334538463946340000283836453642373436354546423435334544363636443236344644313935333032363338314542363200, version=null, file_path=null, file_name=null | ] for sstable /var/lib/cassandra/data/catalog/file-aa90a340831f11e7aca2ed895c1dab3f/.idx_file_path_hash/mc-51-big-Data.db
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:124) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:500) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:360) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:67) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.db.SinglePartitionReadCommand.withSSTablesIterated(SinglePartitionReadCommand.java:695) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:639) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:514) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.index.internal.CassandraIndexSearcher.queryIndex(CassandraIndexSearcher.java:81) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.index.internal.CassandraIndexSearcher.search(CassandraIndexSearcher.java:63) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:408) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1882) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_141]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) ~[apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) [apache-cassandra-3.11.0.jar:3.11.0]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.11.0.jar:3.11.0]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
{code}
The related table is:
{code}
CREATE TABLE catalog.file (
    path_hash text,
    file_hash text,
    version timeuuid,
    file_path text,
    file_name text,
    allocations_size bigint,
    change_time timestamp,
    creation_time timestamp,
    dacl frozen<acl>,
    ea_size bigint,
    end_of_file bigint,
    file_attributes bigint,
    file_id blob,
    group_sid frozen<sid>,
    host text static,
    last_access_time timestamp,
    last_write_time timestamp,
    owner_sid frozen<sid>,
    share text static,
    PRIMARY KEY ((path_hash, file_hash), version, file_path, file_name)
) WITH CLUSTERING ORDER BY (version DESC, file_path ASC, file_name ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CREATE INDEX idx_file_path_hash ON catalog.file (path_hash);
{code}
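For context, idx_file_path_hash is a regular Cassandra secondary index, which is backed by a hidden local table stored under the .idx_file_path_hash/ directory that appears in the sstable path in the stack trace. A rough, CQL-like sketch of that hidden table's layout (illustrative only; the column names are assumptions and the real table is internal, not queryable via CQL) helps show why the assertion prints the serialized base partition key as the first clustering value of the index entry:
{code}
-- Approximate layout of the hidden table backing idx_file_path_hash
-- (sketch only; internal to Cassandra, shown just to illustrate the error)
CREATE TABLE catalog."file.idx_file_path_hash" (
    path_hash     text,   -- the indexed value becomes the index partition key
    partition_key blob,   -- serialized (path_hash, file_hash) of the base row
    version       timeuuid,
    file_path     text,
    file_name     text,
    PRIMARY KEY (path_hash, partition_key, version, file_path, file_name)
);
{code}
The long hex value in the assertion (0028 3543 38 ...) decodes to exactly this serialized base partition key for the failing path_hash, and the sstable named in the error lives under the index directory, so the bad lower bound appears to come from the index sstable rather than from the base table's data.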
This always happens when querying through the idx_file_path_hash index, and always for the same path_hash "5C8385AD36777174ED06BA4BDBF4EDD923E8F9F4":
{code}
cqlsh> select * from catalog.file where path_hash='5C8385AD36777174ED06BA4BDBF4EDD923E8F9F4';
ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}
When querying without the index, the data is returned just fine:
{code}
cqlsh> select * from catalog.file LIMIT 1;
path_hash | file_hash
| version | file_path
| file_name | host | share | allocations_size
| change_time | creation_time | dacl
| ea_size | end_of_file | file_attributes | file_id
| group_sid
| last_access_time
| last_write_time | owner_sid
------------------------------------------+------------------------------------------+--------------------------------------+----------------------------------+------------------------------------+--------------+-------+------------------+---------------------------------+---------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+-------------+-----------------+--------------------+-----------------------------------------------------------------------------------------------------------------------+---------------------------------+---------------------------------+-----------------------------------------------------------------------------------------------------------------------
5C8385AD36777174ED06BA4BDBF4EDD923E8F9F4 |
86E6B7465EFB453ED666D264FD1953026381EB62 | ab570080-831f-11e7-a81f-417b646547c3
| 2015_VSIcon/SchemaObjectProperty | SchemaObjectProperty_16x_24.bmp_13 |
10.17.62.151 | rokd | 12288 | 2017-07-12 11:51:20.159000+0000 |
2017-07-12 11:51:20.151000+0000 | {revision: 2, aces: [{ace_type: 0, ace_flags:
{16}, ace_size: 0, access_mask: null, sid: {revision: 1,
sid_identifier_authority: 0x000000000005, sub_authorities: [21, 769239019,
917752761, 3061700898, 500]}}, {ace_type: 0, ace_flags: {16}, ace_size: 0,
access_mask: null, sid: {revision: 1, sid_identifier_authority: 0x000000000005,
sub_authorities: [32, 544]}}, {ace_type: 0, ace_flags: {16}, ace_size: 0,
access_mask: null, sid: {revision: 1, sid_identifier_authority: 0x000000000005,
sub_authorities: [32, 545]}}]} | 0 | 822 | 33 |
0xd5a2000000000000 | {revision: 1, sid_identifier_authority: 0x000000000005,
sub_authorities: [21, 769239019, 917752761, 3061700898, 513]} | 2017-07-12
11:51:20.151000+0000 | 2016-01-08 09:50:34.000000+0000 | {revision: 1,
sid_identifier_authority: 0x000000000005, sub_authorities: [21, 769239019,
917752761, 3061700898, 500]}
{code}
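For reference, a read keyed by the full partition key does not go through the secondary index at all. A hypothetical query of that form, using the path_hash and file_hash values from the row above, would be:
{code}
-- illustrative only: reads restricted by the full partition key bypass the index
select * from catalog.file
 where path_hash = '5C8385AD36777174ED06BA4BDBF4EDD923E8F9F4'
   and file_hash = '86E6B7465EFB453ED666D264FD1953026381EB62';
{code}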
Rebuilding the index doesn't help:
{code}
# nodetool rebuild_index catalog file idx_file_path_hash
{code}
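A full drop and recreate of the index, which forces a rebuild from the base table, would look like the following sketch; whether it behaves any differently from nodetool rebuild_index in this case has not been verified here:
{code}
-- sketch only: dropping and recreating the index rebuilds it from the base table
DROP INDEX catalog.idx_file_path_hash;
CREATE INDEX idx_file_path_hash ON catalog.file (path_hash);
{code}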
> AssertionError: Lower bound INCL_START_BOUND during select by index
> -------------------------------------------------------------------
>
> Key: CASSANDRA-13770
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13770
> Project: Cassandra
> Issue Type: Bug
> Environment: Cassandra 3.11 (cassandra.noarch 3.11.0-1),
> CentOS Linux release 7.3.1611 (Core)
> Reporter: Rok Doltar
>