[ https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17791445#comment-17791445 ]
Rajeshbabu Chintaguntla edited comment on OMID-240 at 11/30/23 4:43 AM:
------------------------------------------------------------------------
[~stoty] All kinds of select queries are failing without server-side filtering, with both the in-memory commit storage module and the HBase commit storage module. So it would be better to stick to server-side filtering and the HBase commit storage table.
{noformat}
0: jdbc:phoenix:> select count(*) from test;
Error: java.lang.IllegalArgumentException: Timestamp cannot be negative. minStamp:9223372036854775807, maxStamp:-9223372036854775808 (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: java.lang.IllegalArgumentException: Timestamp cannot be negative. minStamp:9223372036854775807, maxStamp:-9223372036854775808
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:138)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1379)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1318)
    at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:52)
    at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:107)
    at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:127)
    at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
    at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:841)
    at sqlline.BufferedRows.nextList(BufferedRows.java:109)
    at sqlline.BufferedRows.<init>(BufferedRows.java:52)
    at sqlline.SqlLine.print(SqlLine.java:1672)
    at sqlline.Commands.executeSingleQuery(Commands.java:1063)
    at sqlline.Commands.execute(Commands.java:1003)
    at sqlline.Commands.sql(Commands.java:967)
    at sqlline.SqlLine.dispatch(SqlLine.java:734)
    at sqlline.SqlLine.begin(SqlLine.java:541)
    at sqlline.SqlLine.start(SqlLine.java:267)
    at sqlline.SqlLine.main(SqlLine.java:206)
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Timestamp cannot be negative. minStamp:9223372036854775807, maxStamp:-9223372036854775808
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:206)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1374)
    ... 16 more
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative. minStamp:9223372036854775807, maxStamp:-9223372036854775808
    at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:157)
    at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:145)
    at org.apache.hadoop.hbase.client.Get.setTimestamp(Get.java:238)
    at org.apache.omid.transaction.HBaseTransactionManager$CommitTimestampLocatorImpl.readCommitTimestampFromShadowCell(HBaseTransactionManager.java:299)
    at org.apache.omid.transaction.SnapshotFilterImpl.readCommitTimestampFromShadowCell(SnapshotFilterImpl.java:143)
    at org.apache.omid.transaction.SnapshotFilterImpl.locateCellCommitTimestamp(SnapshotFilterImpl.java:188)
    at org.apache.omid.transaction.SnapshotFilterImpl.tryToLocateCellCommitTimestamp(SnapshotFilterImpl.java:250)
    at org.apache.omid.transaction.SnapshotFilterImpl.getCommitTimestamp(SnapshotFilterImpl.java:303)
    at org.apache.omid.transaction.SnapshotFilterImpl.getTSIfInSnapshot(SnapshotFilterImpl.java:388)
    at org.apache.omid.transaction.SnapshotFilterImpl.filterCellsForSnapshot(SnapshotFilterImpl.java:449)
    at org.apache.omid.transaction.SnapshotFilterImpl$TransactionalClientScanner.next(SnapshotFilterImpl.java:633)
    at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:158)
    at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:172)
    at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:55)
    at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:67)
    at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:81)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:138)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
{noformat}
{noformat}
0: jdbc:phoenix:> select * from test;
Error: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family _v does not exist in region TEST,,1701319095144.446b404f285c378344e4ae553836f3a0. in table 'TEST', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', coprocessor$5 => '|org.apache.phoenix.index.PhoenixTransactionalIndexer|805306366|', coprocessor$6 => '|org.apache.phoenix.coprocessor.OmidTransactionalProcessor|805306356|', coprocessor$7 => '|org.apache.phoenix.coprocessor.OmidGCProcessor|805306356|', coprocessor$8 => '|org.apache.phoenix.coprocessor.PhoenixTTLRegionObserver|805306364|', METADATA => {'hbase.store.file-tracker.impl' => 'DEFAULT'}}}, {NAME => '0', INDEX_BLOCK_ENCODING => 'NONE', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536 B (64KB)'}
    at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:7818)
    at org.apache.hadoop.hbase.regionserver.HRegion.prepareGet(HRegion.java:7347)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2633)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2577)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45023)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
    at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) (state=08000,code=101)
{noformat}

was (Author: rajeshbabu):
[~stoty] All kinds of select queries are failing without server-side filtering, with both the in-memory commit storage module and the HBase commit storage module. So it would be better to stick to server-side filtering.
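For context on the "Timestamp cannot be negative" trace above: minStamp is Long.MAX_VALUE and maxStamp has wrapped around to Long.MIN_VALUE, which suggests the shadow-cell lookup ends up calling Get.setTimestamp(Long.MAX_VALUE). In the HBase version shown in the trace, Get.setTimestamp(ts) builds a TimeRange covering [ts, ts + 1), so the exclusive upper bound overflows and the TimeRange constructor rejects it. A minimal standalone sketch (not the actual Omid code path) that reproduces the same exception:
{code:java}
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.util.Bytes;

public class TimestampOverflowRepro {
    public static void main(String[] args) {
        Get get = new Get(Bytes.toBytes("someRow"));
        // Get.setTimestamp(ts) constructs TimeRange(ts, ts + 1). With ts == Long.MAX_VALUE
        // the upper bound overflows to Long.MIN_VALUE, so the constructor throws:
        //   IllegalArgumentException: Timestamp cannot be negative.
        //   minStamp:9223372036854775807, maxStamp:-9223372036854775808
        get.setTimestamp(Long.MAX_VALUE);
    }
}
{code}
So the client-side filtering path presumably hands a sentinel/uninitialized commit timestamp (Long.MAX_VALUE) down to readCommitTimestampFromShadowCell, which is where it blows up.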
> Transactional visibility is broken
> ----------------------------------
>
>                 Key: OMID-240
>                 URL: https://issues.apache.org/jira/browse/OMID-240
>             Project: Phoenix Omid
>          Issue Type: Bug
>    Affects Versions: 1.1.0
>            Reporter: Lars Hofhansl
>            Assignee: Rajeshbabu Chintaguntla
>            Priority: Critical
>         Attachments: hbase-omid-client-config.yml, omid-server-configuration.yml
>
>
> Client I:
> {code:java}
> > create table test(x float primary key, y float) DISABLE_WAL=true, TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +----------+
> | COUNT(1) |
> +----------+
> | 0        |
> +----------+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +----------+
> | COUNT(1) |
> +----------+
> | 259884   |
> +----------+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +----------+
> | COUNT(1) |
> +----------+
> | 260145   |
> +----------+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +----------+
> | COUNT(1) |
> +----------+
> | 260148   |
> +----------+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +----------+
> | COUNT(1) |
> +----------+
> | 260148   |
> +----------+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!
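To make the expectation in that last line concrete, a rough JDBC sketch of the check (the connection URL is a placeholder and 260148 is just the row count from this particular run): a concurrent reader must observe either none of the committed rows or all of them, never a partial count.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AtomicVisibilityCheck {
    public static void main(String[] args) throws Exception {
        final long committedRows = 260148L; // total rows committed by client I in this run
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Poll the table around the time client I commits; snapshot isolation
            // requires the commit to become visible atomically.
            for (int i = 0; i < 10; i++) {
                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM TEST")) {
                    rs.next();
                    long count = rs.getLong(1);
                    if (count != 0 && count != committedRows) {
                        throw new AssertionError("Partial commit visible: " + count + " rows");
                    }
                }
            }
        }
    }
}
{code}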