[jira] [Commented] (OMID-277) Omid 1.1.2 fails with Phoenix 5.2

2024-03-01 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822682#comment-17822682
 ] 

Lars Hofhansl commented on OMID-277:


Sorry. I did not realize that Phoenix "packages" Omid code. Built Phoenix against 
a local 1.1.2-SNAPSHOT, and now this works as expected.

So before we release Phoenix 5.2.0 we should release an Omid version with the 
fix.
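
For context, the NoClassDefFoundError in the quoted trace shows HBaseCommitTableConfig failing to initialize because it references Guava's Charsets class, which is not present under Phoenix's shaded Guava (org.apache.phoenix.shaded.com.google.common.base.Charsets). A minimal sketch of the kind of change that avoids the missing class, assuming the Omid fix simply drops the Guava reference in favor of the JDK's StandardCharsets (the class, field, and qualifier names below are made up for illustration; the actual patch may differ):

{code:java}
import java.nio.charset.StandardCharsets;

public class CommitTableEncodingExample {

    // Before (breaks when the shaded Guava does not ship Charsets):
    // private static final byte[] COMMIT_TABLE_QUALIFIER =
    //         "ct".getBytes(com.google.common.base.Charsets.UTF_8);

    // After: no Guava dependency, only the JDK.
    private static final byte[] COMMIT_TABLE_QUALIFIER =
            "ct".getBytes(StandardCharsets.UTF_8);

    public static void main(String[] args) {
        // Sanity check: the qualifier encodes to the expected two bytes.
        System.out.println(COMMIT_TABLE_QUALIFIER.length);
    }
}
{code}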

> Omid 1.1.2 fails with Phoenix 5.2
> -
>
> Key: OMID-277
> URL: https://issues.apache.org/jira/browse/OMID-277
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.1, 1.1.2
>Reporter: Lars Hofhansl
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.2
>
>
> Try to create a transactional table with Phoenix 5.2 and Omid 1.1.2, and 
> you'll find this in the RS log:
> {code:java}
>  2024-02-28T20:26:13,055 ERROR [RS_OPEN_REGION-regionserver/think:16020-2] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor threw 
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> at 
> org.apache.omid.transaction.OmidSnapshotFilter.start(OmidSnapshotFilter.java:85)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor.start(OmidTransactionalProcessor.java:44)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.hadoop.hbase.coprocessor.BaseEnvironment.startup(BaseEnvironment.java:69)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.checkAndLoadInstance(CoprocessorHost.java:285)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:249)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:388)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:278)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:859) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:734) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
>  ~[?:?]
> at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) 
> ~[?:?]
> at java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6971) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7184)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7161) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7120) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7076) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:149)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
>  ~[?:?]
> at java.lang.Thread.run(Thread.java:1583) ~[?:?]
> Caused by: java.lang.ExceptionInInitializerError: Exception 
> java.lang.NoClassDefFoundError: 
> org/apache/phoenix/shaded/com/google/common/base/Charsets [in thread 
> "RS_OPEN_REGION-regionserver/think:16020-2"]
> at 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig.(HBaseCommitTableConfig.java:36)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at org.apache.omid.transaction.OmidCompactor.start(OmidCompactor.java:92) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidGCProcessor.start(OmidGCProcessor.java:43) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> ... 21 more{code}
>  
> As before I have no time to track this down as I do not work on Phoenix/HBase 
> anymore, but at least I can file an issue. :)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (OMID-277) Omid 1.1.2 fails with Phoenix 5.2

2024-02-29 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822319#comment-17822319
 ] 

Lars Hofhansl edited comment on OMID-277 at 3/1/24 2:08 AM:


Hmm... For me the problem remains. I cleaned and recompiled Omid, wiped the 
/hbase directory in HDFS and ZK, and restarted HBase. Same problem.

[~stoty] 


was (Author: lhofhansl):
Hmm... For me the problem remains. I cleaned and recompiled Omid, wiped the 
/hbase directory in HDFS and ZK, and restarted HBase. Same problem.

> Omid 1.1.2 fails with Phoenix 5.2
> -
>
> Key: OMID-277
> URL: https://issues.apache.org/jira/browse/OMID-277
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.1, 1.1.2
>Reporter: Lars Hofhansl
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.2
>
>
> Try to create a transactional table with Phoenix 5.2 and Omid 1.1.2, and 
> you'll find this in the RS log:
> {code:java}
>  2024-02-28T20:26:13,055 ERROR [RS_OPEN_REGION-regionserver/think:16020-2] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor threw 
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> at 
> org.apache.omid.transaction.OmidSnapshotFilter.start(OmidSnapshotFilter.java:85)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor.start(OmidTransactionalProcessor.java:44)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.hadoop.hbase.coprocessor.BaseEnvironment.startup(BaseEnvironment.java:69)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.checkAndLoadInstance(CoprocessorHost.java:285)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:249)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:388)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:278)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:859) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:734) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
>  ~[?:?]
> at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) 
> ~[?:?]
> at java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6971) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7184)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7161) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7120) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7076) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:149)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
>  ~[?:?]
> at java.lang.Thread.run(Thread.java:1583) ~[?:?]
> Caused by: java.lang.ExceptionInInitializerError: Exception 
> java.lang.NoClassDefFoundError: 
> org/apache/phoenix/shaded/com/google/common/base/Charsets [in thread 
> "RS_OPEN_REGION-regionserver/think:16020-2"]
> at 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig.(HBaseCommitTableConfig.java:36)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at org.apache.omid.transaction.OmidCompactor.start(OmidCompactor.java:92) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidGCProcessor.start(OmidGCProcessor.java:43) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> ... 21 more{code}
>  
> As before I have no time to track this down as I do not work on Phoenix/HBase 
> anymore, but at least I can file an issue. :)

[jira] [Commented] (OMID-277) Omid 1.1.2 fails with Phoenix 5.2

2024-02-29 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822319#comment-17822319
 ] 

Lars Hofhansl commented on OMID-277:


Hmm... For me the problem remains. I cleaned and recompiled Omid, wiped the 
/hbase directory in HDFS and ZK, and restarted HBase. Same problem.

> Omid 1.1.2 fails with Phoenix 5.2
> -
>
> Key: OMID-277
> URL: https://issues.apache.org/jira/browse/OMID-277
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.1, 1.1.2
>Reporter: Lars Hofhansl
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.2
>
>
> Try to create a transactional table with Phoenix 5.2 and Omid 1.1.2, and 
> you'll find this in the RS log:
> {code:java}
>  2024-02-28T20:26:13,055 ERROR [RS_OPEN_REGION-regionserver/think:16020-2] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor threw 
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> at 
> org.apache.omid.transaction.OmidSnapshotFilter.start(OmidSnapshotFilter.java:85)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor.start(OmidTransactionalProcessor.java:44)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.hadoop.hbase.coprocessor.BaseEnvironment.startup(BaseEnvironment.java:69)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.checkAndLoadInstance(CoprocessorHost.java:285)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:249)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:388)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:278)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:859) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:734) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
>  ~[?:?]
> at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) 
> ~[?:?]
> at java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6971) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7184)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7161) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7120) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7076) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:149)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
>  ~[?:?]
> at java.lang.Thread.run(Thread.java:1583) ~[?:?]
> Caused by: java.lang.ExceptionInInitializerError: Exception 
> java.lang.NoClassDefFoundError: 
> org/apache/phoenix/shaded/com/google/common/base/Charsets [in thread 
> "RS_OPEN_REGION-regionserver/think:16020-2"]
> at 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig.(HBaseCommitTableConfig.java:36)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at org.apache.omid.transaction.OmidCompactor.start(OmidCompactor.java:92) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidGCProcessor.start(OmidGCProcessor.java:43) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> ... 21 more{code}
>  
> As before I have no time to track this down as I do not work on Phoenix/HBase 
> anymore, but at least I can file an issue. :)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OMID-277) Omid 1.1.2 fails with Phoenix 5.2

2024-02-28 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated OMID-277:
---
Description: 
Try to create a transactional table with Phoenix 5.2 and Omid 1.1.2, and you'll 
find this in the RS log:
{code:java}
 2024-02-28T20:26:13,055 ERROR [RS_OPEN_REGION-regionserver/think:16020-2] 
coprocessor.CoprocessorHost: The coprocessor 
org.apache.phoenix.coprocessor.OmidTransactionalProcessor threw 
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.omid.committable.hbase.HBaseCommitTableConfig
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.omid.committable.hbase.HBaseCommitTableConfig
at 
org.apache.omid.transaction.OmidSnapshotFilter.start(OmidSnapshotFilter.java:85)
 ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at 
org.apache.phoenix.coprocessor.OmidTransactionalProcessor.start(OmidTransactionalProcessor.java:44)
 ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at 
org.apache.hadoop.hbase.coprocessor.BaseEnvironment.startup(BaseEnvironment.java:69)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.checkAndLoadInstance(CoprocessorHost.java:285)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:249)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:388)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:278)
 ~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:859) 
~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:734) 
~[hbase-server-2.5.7.jar:2.5.7]
at 
jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
 ~[?:?]
at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) 
~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6971) 
~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7184)
 ~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7161) 
~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7120) 
~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7076) 
~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:149)
 ~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) 
~[hbase-server-2.5.7.jar:2.5.7]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) 
~[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) 
~[?:?]
at java.lang.Thread.run(Thread.java:1583) ~[?:?]
Caused by: java.lang.ExceptionInInitializerError: Exception 
java.lang.NoClassDefFoundError: 
org/apache/phoenix/shaded/com/google/common/base/Charsets [in thread 
"RS_OPEN_REGION-regionserver/think:16020-2"]
at 
org.apache.omid.committable.hbase.HBaseCommitTableConfig.(HBaseCommitTableConfig.java:36)
 ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at org.apache.omid.transaction.OmidCompactor.start(OmidCompactor.java:92) 
~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at 
org.apache.phoenix.coprocessor.OmidGCProcessor.start(OmidGCProcessor.java:43) 
~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
... 21 more{code}
 

As before I have no time to track this down as I do not work on Phoenix/HBase 
anymore, but at least I can file an issue. :)

  was:
{code:java}
 2024-02-28T20:26:13,055 ERROR [RS_OPEN_REGION-regionserver/think:16020-2] 
coprocessor.CoprocessorHost: The coprocessor 
org.apache.phoenix.coprocessor.OmidTransactionalProcessor threw 
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.omid.committable.hbase.HBaseCommitTableConfig
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.omid.committable.hbase.HBaseCommitTableConfig
at 
org.apache.omid.transaction.OmidSnapshotFilter.start(OmidSnapshotFilter.java:85)
 ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at 
org.apache.phoenix.coprocessor.OmidTransactionalProcessor.start(OmidTransactionalProcessor.java:44)
 ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at 
org.apache.hadoop.hbase.coprocessor.BaseEnvironment.startup(BaseEnvironment.java:69)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 

[jira] [Created] (OMID-277) Omid 1.1.2 fails with Phoenix 5.2

2024-02-28 Thread Lars Hofhansl (Jira)
Lars Hofhansl created OMID-277:
--

 Summary: Omid 1.1.2 fails with Phoenix 5.2
 Key: OMID-277
 URL: https://issues.apache.org/jira/browse/OMID-277
 Project: Phoenix Omid
  Issue Type: Bug
Affects Versions: 1.1.2
Reporter: Lars Hofhansl


{code:java}
 2024-02-28T20:26:13,055 ERROR [RS_OPEN_REGION-regionserver/think:16020-2] 
coprocessor.CoprocessorHost: The coprocessor 
org.apache.phoenix.coprocessor.OmidTransactionalProcessor threw 
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.omid.committable.hbase.HBaseCommitTableConfig
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.omid.committable.hbase.HBaseCommitTableConfig
at 
org.apache.omid.transaction.OmidSnapshotFilter.start(OmidSnapshotFilter.java:85)
 ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at 
org.apache.phoenix.coprocessor.OmidTransactionalProcessor.start(OmidTransactionalProcessor.java:44)
 ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at 
org.apache.hadoop.hbase.coprocessor.BaseEnvironment.startup(BaseEnvironment.java:69)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.checkAndLoadInstance(CoprocessorHost.java:285)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:249)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:388)
 ~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:278)
 ~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:859) 
~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:734) 
~[hbase-server-2.5.7.jar:2.5.7]
at 
jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
 ~[?:?]
at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) 
~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6971) 
~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7184)
 ~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7161) 
~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7120) 
~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7076) 
~[hbase-server-2.5.7.jar:2.5.7]
at 
org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:149)
 ~[hbase-server-2.5.7.jar:2.5.7]
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) 
~[hbase-server-2.5.7.jar:2.5.7]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) 
~[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) 
~[?:?]
at java.lang.Thread.run(Thread.java:1583) ~[?:?]
Caused by: java.lang.ExceptionInInitializerError: Exception 
java.lang.NoClassDefFoundError: 
org/apache/phoenix/shaded/com/google/common/base/Charsets [in thread 
"RS_OPEN_REGION-regionserver/think:16020-2"]
at 
org.apache.omid.committable.hbase.HBaseCommitTableConfig.(HBaseCommitTableConfig.java:36)
 ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at org.apache.omid.transaction.OmidCompactor.start(OmidCompactor.java:92) 
~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
at 
org.apache.phoenix.coprocessor.OmidGCProcessor.start(OmidGCProcessor.java:43) 
~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
... 21 more{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-12-17 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798032#comment-17798032
 ] 

Lars Hofhansl commented on OMID-240:


Awesome. Thank you!

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 1.1.1
>
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!
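
To make that all-or-nothing expectation concrete, here is a minimal JDBC sketch of the kind of check a second client could run right after the writer commits. The JDBC URL, the number of polling iterations, and the expected final count passed in by the caller are assumptions for illustration, not part of the original report; the table is the test table from the repro above:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VisibilityCheck {
    public static void main(String[] args) throws Exception {
        long expected = Long.parseLong(args[0]); // e.g. 260148, taken from the writer session
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Poll right after the writer committed. Every observed count must be
            // "nothing yet" (0) or "everything" (expected); any other value means
            // transactional visibility is broken.
            for (int i = 0; i < 10; i++) {
                try (ResultSet rs = stmt.executeQuery("select count(*) from test")) {
                    rs.next();
                    long count = rs.getLong(1);
                    if (count != 0 && count != expected) {
                        throw new AssertionError("Partial visibility observed: " + count);
                    }
                }
            }
        }
    }
}
{code}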



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (OMID-240) Transactional visibility is broken

2023-04-10 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17710359#comment-17710359
 ] 

Lars Hofhansl edited comment on OMID-240 at 4/11/23 12:59 AM:
--

Happens whether PostCommitMode is SYNC or ASYNC, and ConflictDetectionLevel is 
ROW or CELL.

So it looks like this is generally broken!


was (Author: lhofhansl):
Happens whether PostCommitMode is SYNC or ASYNC

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OMID-240) Transactional visibility is broken

2023-04-10 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated OMID-240:
---
Priority: Critical  (was: Major)

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Priority: Critical
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-04-10 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17710359#comment-17710359
 ] 

Lars Hofhansl commented on OMID-240:


Happens whether PostCommitMode is SYNC or ASYNC

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OMID-240) Transactional visibility is broken

2023-04-10 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated OMID-240:
---
Summary: Transactional visibility is broken  (was: Transactional visibility 
is broken with PosCommitMode = ASYNC)

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (OMID-240) Transactional visibility is broken with PosCommitMode = ASYNC

2023-04-10 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17710358#comment-17710358
 ] 

Lars Hofhansl edited comment on OMID-240 at 4/11/23 12:39 AM:
--

I don't work on Phoenix/HBase/OMID anymore, and won't have a fix. But at least 
I can file this issue.

Find my OMID config files attached.


was (Author: lhofhansl):
I don't work on Phoenix/HBase/OMID anymore, and won't have a fix. But at least 
I can file this issue.

> Transactional visibility is broken with PosCommitMode = ASYNC
> -
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OMID-240) Transactional visibility is broken with PosCommitMode = ASYNC

2023-04-10 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated OMID-240:
---
Description: 
Client I:
{code:java}
 > create table test(x float primary key, y float) DISABLE_WAL=true, 
 > TRANSACTIONAL=true;

No rows affected (1.872 seconds)

> !autocommit off
Autocommit status: false

> upsert into test values(rand(), rand());
1 row affected (0.018 seconds)

> upsert into test select rand(), rand() from test;
-- 18-20x

> !commit{code}
 

Client II:
{code:java}
-- repeat quickly after the commit on client I


> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 0        |
+--+
1 row selected (1.408 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 259884   |
+--+
1 row selected (2.959 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260145   |
+--+
1 row selected (4.274 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260148   |
+--+
1 row selected (5.563 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260148   |
+--+
1 row selected (5.573 seconds){code}
The second client should either show 0 or 260148. But no other value!

  was:
Client I:
{code:java}
 > create table test(x float primary key, y float) DISABLE_WAL=true, 
 > TRANSACTIONAL=true;

No rows affected (1.872 seconds)

> !autocommit off
Autocommit status: false

> upsert into test values(rand(), rand());
1 row affected (0.018 seconds)

> upsert into test select rand(), rand() from test;
-- 18x

> !commit{code}
 

Client II:
{code:java}
-- repeat quickly after the commit on client I


> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 0        |
+--+
1 row selected (1.408 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 259884   |
+--+
1 row selected (2.959 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260145   |
+--+
1 row selected (4.274 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260148   |
+--+
1 row selected (5.563 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260148   |
+--+
1 row selected (5.573 seconds){code}
The second client should either show 0 or 260148. But no other value!


> Transactional visibility is broken with PosCommitMode = ASYNC
> -
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OMID-240) Transactional visibility is broken with PosCommitMode = ASYNC

2023-04-10 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated OMID-240:
---
Attachment: hbase-omid-client-config.yml
omid-server-configuration.yml

> Transactional visibility is broken with PosCommitMode = ASYNC
> -
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (OMID-240) Transactional visibility is broken with PosCommitMode = ASYNC

2023-04-10 Thread Lars Hofhansl (Jira)
Lars Hofhansl created OMID-240:
--

 Summary: Transactional visibility is broken with PosCommitMode = 
ASYNC
 Key: OMID-240
 URL: https://issues.apache.org/jira/browse/OMID-240
 Project: Phoenix Omid
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Lars Hofhansl


Client I:
{code:java}
 > create table test(x float primary key, y float) DISABLE_WAL=true, 
 > TRANSACTIONAL=true;

No rows affected (1.872 seconds)

> !autocommit off
Autocommit status: false

> upsert into test values(rand(), rand());
1 row affected (0.018 seconds)

> upsert into test select rand(), rand() from test;
-- 18x

> !commit{code}
 

Client II:
{code:java}
-- repeat quickly after the commit on client I


> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 0        |
+--+
1 row selected (1.408 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 259884   |
+--+
1 row selected (2.959 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260145   |
+--+
1 row selected (4.274 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260148   |
+--+
1 row selected (5.563 seconds)
> select count(*) from test;
+--+
| COUNT(1) |
+--+
| 260148   |
+--+
1 row selected (5.573 seconds){code}
The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-240) Transactional visibility is broken with PosCommitMode = ASYNC

2023-04-10 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17710358#comment-17710358
 ] 

Lars Hofhansl commented on OMID-240:


I don't work on Phoenix/HBase/OMID anymore, and won't have a fix. But at least 
I can file this issue.

> Transactional visibility is broken with PosCommitMode = ASYNC
> -
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Priority: Major
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-04-02 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6671:
---
Attachment: 6671-v2-5.1.txt

> Avoid ShortCirtuation Coprocessor Connection with HBase 2.x
> ---
>
> Key: PHOENIX-6671
> URL: https://issues.apache.org/jira/browse/PHOENIX-6671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
> Attachments: 6671-5.1.txt, 6671-v2-5.1.txt
>
>
> See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.
> HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
> be able to fix it there, but with all the work the RPC handlers perform now 
> (closing scanners, resolving the current user, etc.), I doubt we'll get that 
> 100% right. HBase 3 has removed this functionality.
> Even with HBase 2, which does not have the async protobuf code, I could 
> hardly see any performance improvement from circumventing the RPC stack when 
> the target of a Get or Scan is local. Even in the most ideal conditions 
> where everything is local, there was no improvement outside of noise.
> I suggest we do not use ShortCircuited Connections in Phoenix 5+.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-03-18 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6671:
---
Description: 
See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.

HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
be able to fix it there, but with all the work the RPC handlers perform now 
(closing scanners, resolving the current user, etc.), I doubt we'll get that 
100% right. HBase 3 has removed this functionality.

Even with HBase 2, which does not have the async protobuf code, I could hardly 
see any performance improvement from circumventing the RPC stack when the 
target of a Get or Scan is local. Even in the most ideal conditions where 
everything is local, there was no improvement outside of noise.

I suggest we do not use ShortCircuited Connections in Phoenix 5+.

  was:
See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.

HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
be able to fix it there, but with all the work the RPC handlers perform now 
(closing scanners, resolving the current user, etc.), I doubt we'll get that 
100% right. HBase 3 has removed this functionality.

Even with HBase, which does not have the async protobuf code, I could hardly 
see any performance improvement from circumventing the RPC stack when the 
target of a Get or Scan is local. Even in the most ideal conditions where 
everything is local, there was no improvement outside of noise.

I suggest we do not use ShortCircuited Connections in Phoenix 5+.


> Avoid ShortCirtuation Coprocessor Connection with HBase 2.x
> ---
>
> Key: PHOENIX-6671
> URL: https://issues.apache.org/jira/browse/PHOENIX-6671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6671-5.1.txt
>
>
> See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.
> HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
> be able to fix it there, but with all the work the RPC handlers perform now 
> (closing scanners, resolving the current user, etc.), I doubt we'll get that 
> 100% right. HBase 3 has removed this functionality.
> Even with HBase 2, which does not have the async protobuf code, I could 
> hardly see any performance improvement from circumventing the RPC stack when 
> the target of a Get or Scan is local. Even in the most ideal conditions 
> where everything is local, there was no improvement outside of noise.
> I suggest we do not use ShortCircuited Connections in Phoenix 5+.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-03-18 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6671:
---
Attachment: 6671-5.1.txt

> Avoid ShortCirtuation Coprocessor Connection with HBase 2.x
> ---
>
> Key: PHOENIX-6671
> URL: https://issues.apache.org/jira/browse/PHOENIX-6671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6671-5.1.txt
>
>
> See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.
> HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
> be able to fix it there, but with all the work the RPC handlers perform now 
> (closing scanners, resolving the current user, etc.), I doubt we'll get that 
> 100% right. HBase 3 has removed this functionality.
> Even with HBase, which does not have the async protobuf code, I could hardly 
> see any performance improvement from circumventing the RPC stack when the 
> target of a Get or Scan is local. Even in the most ideal conditions where 
> everything is local, there was no improvement outside of noise.
> I suggest we do not use ShortCircuited Connections in Phoenix 5+.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-03-18 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6671:
---
Description: 
See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.

HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
be able to fix it there, but with all the work the RPC handlers perform now 
(closing scanners, resolving the current user, etc.), I doubt we'll get that 
100% right. HBase 3 has removed this functionality.

Even with HBase, which does not have the async protobuf code, I could hardly 
see any performance improvement from circumventing the RPC stack when the 
target of a Get or Scan is local. Even in the most ideal conditions where 
everything is local, there was no improvement outside of noise.

I suggest we do not use ShortCircuited Connections in Phoenix 5+.

  was:
See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.

HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
be able to fix it there, but with all the work the RPC handlers perform now 
(closing scanners, resolving the current user, etc.), I doubt we'll get that 
100% right. HBase 3 has removed this functionality.

Even with HBase, which does not have the async protobuf code, I could hardly 
see any performance improvement from circumventing the RPC stack when the 
target of a Get or Scan is local. Even in the most ideal conditions where 
everything is local, there was no improvement outside of noise.

I suggest we do not use ShortCircuited Connections.


> Avoid ShortCirtuation Coprocessor Connection with HBase 2.x
> ---
>
> Key: PHOENIX-6671
> URL: https://issues.apache.org/jira/browse/PHOENIX-6671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
>
> See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.
> HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
> be able to fix it there, but with all the work the RPC handlers perform now 
> (closing scanners, resolving the current user, etc.), I doubt we'll get that 
> 100% right. HBase 3 has removed this functionality.
> Even with HBase, which does not have the async protobuf code, I could hardly 
> see any performance improvement from circumventing the RPC stack when the 
> target of a Get or Scan is local. Even in the most ideal conditions where 
> everything is local, there was no improvement outside of noise.
> I suggest we do not use ShortCircuited Connections in Phoenix 5+.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-03-18 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6671:
--

 Summary: Avoid ShortCirtuation Coprocessor Connection with HBase 
2.x
 Key: PHOENIX-6671
 URL: https://issues.apache.org/jira/browse/PHOENIX-6671
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl


See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.

HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
be able to fix it there, but with all the work the RPC handlers perform now 
(closing scanners, resolving the current user, etc.), I doubt we'll get that 
100% right. HBase 3 has removed this functionality.

Even with HBase, which does not have the async protobuf code, I could hardly 
see any performance improvement from circumventing the RPC stack when the 
target of a Get or Scan is local. Even in the most ideal conditions where 
everything is local, there was no improvement outside of noise.

I suggest we do not use ShortCircuited Connections.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (PHOENIX-6501) Use batching when joining data table rows with uncovered global index rows

2022-03-11 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-6501:
--

Assignee: Lars Hofhansl  (was: Kadir OZDEMIR)

> Use batching when joining data table rows with uncovered global index rows
> --
>
> Key: PHOENIX-6501
> URL: https://issues.apache.org/jira/browse/PHOENIX-6501
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.2
>Reporter: Kadir Ozdemir
>Assignee: Lars Hofhansl
>Priority: Major
> Attachments: PHOENIX-6501.master.001.patch
>
>
> PHOENIX-6458 extends the existing uncovered local index support for global 
> indexes. The current solution uses HBase get operations to join data table 
> rows with uncovered index rows on the server side. Doing a separate RPC call 
> for every data table row can be expensive. Instead, we can buffer lots of 
> data row keys in memory, use a skip scan filter, and even multiple threads to 
> issue a separate scan for each data table region in parallel. This will 
> reduce the cost of the join and also improve performance.
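
A rough sketch of the batching idea described above, written against the plain HBase client API with MultiRowRangeFilter standing in for Phoenix's skip scan. The table name, the single-threaded shape, and the origin of the row keys are illustrative assumptions, not Phoenix's actual server-side implementation:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedIndexJoinSketch {

    // Join one batch of data-table row keys (derived from index rows) against the
    // data table with a single multi-range scan instead of one Get per row.
    static List<Result> joinBatch(Connection conn, List<byte[]> dataRowKeys) throws IOException {
        List<RowRange> ranges = new ArrayList<>();
        for (byte[] key : dataRowKeys) {
            ranges.add(new RowRange(key, true, key, true)); // single-row range, inclusive
        }
        Scan scan = new Scan().setFilter(new MultiRowRangeFilter(ranges));
        List<Result> joined = new ArrayList<>();
        try (Table dataTable = conn.getTable(TableName.valueOf("DATA_TABLE"));
             ResultScanner scanner = dataTable.getScanner(scan)) {
            for (Result r : scanner) {
                joined.add(r);
            }
        }
        return joined;
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            // In the real feature the keys come from uncovered global index rows;
            // here we only demonstrate the batched lookup shape.
            List<byte[]> keys = new ArrayList<>();
            keys.add(Bytes.toBytes("row-0001"));
            keys.add(Bytes.toBytes("row-0042"));
            System.out.println(joinBatch(conn, keys).size());
        }
    }
}
{code}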



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Reopened] (PHOENIX-6458) Using global indexes for queries with uncovered columns

2022-03-07 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reopened PHOENIX-6458:


> Using global indexes for queries with uncovered columns
> ---
>
> Key: PHOENIX-6458
> URL: https://issues.apache.org/jira/browse/PHOENIX-6458
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.0
>Reporter: Kadir Ozdemir
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 5.1.3
>
> Attachments: PHOENIX-6458.master.001.patch, 
> PHOENIX-6458.master.002.patch, PHOENIX-6458.master.addendum.patch
>
>
> The Phoenix query optimizer does not use a global index for a query that 
> references columns not covered by that index unless the query carries the 
> corresponding index hint. With the index hint, the optimizer rewrites the 
> query so that the index is used within a subquery. With this subquery, the 
> row keys of the index rows that satisfy the subquery are retrieved by the 
> Phoenix client and then pushed into the Phoenix server caches of the data 
> table regions. Finally, on the server side, data table rows are scanned and 
> joined with the index rows using HashJoin. Depending on the selectivity of 
> the original query, this join operation may still scan a large number of 
> data table rows. 
> Eliminating these data table scans would be a significant improvement. To do 
> that, instead of rewriting the query, the Phoenix optimizer simply treats the 
> global index as a covered index for the given query. With this, the Phoenix 
> query optimizer chooses the index table for the query, especially when the 
> index row key prefix length is greater than the data row key prefix length 
> for the query. On the server side, the index table is scanned using the index 
> row key ranges implied by the query, and the index row keys are then mapped 
> to the data table row keys (note that an index row key includes all the data 
> row key columns). Finally, the corresponding data table rows are scanned 
> using server-to-server RPCs. PHOENIX-6458 (this Jira) retrieves the data 
> table rows one by one using the HBase get operation; PHOENIX-6501 replaces 
> this get operation with a scan operation to reduce the number of 
> server-to-server RPC calls.
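For contrast, a minimal sketch of the per-row step described above (not Phoenix source): one server-to-server Get per uncovered index row. The deriveDataRowKey helper and the merge step are hypothetical stand-ins for the Phoenix-specific row key encoding and column handling.
{code:java}
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Table;

public class PerRowIndexJoinSketch {

    // Hypothetical stub: an index row key contains all data row key columns, so
    // the data row key can be derived from it. The real encoding is Phoenix-specific.
    static byte[] deriveDataRowKey(byte[] indexRowKey) {
        return Arrays.copyOf(indexRowKey, indexRowKey.length);
    }

    // One Get RPC per index row -- the cost that PHOENIX-6501 amortizes with scans.
    static void joinPerRow(ResultScanner indexScanner, Table dataTable) throws IOException {
        for (Result indexRow : indexScanner) {
            byte[] dataRowKey = deriveDataRowKey(indexRow.getRow());
            Result dataRow = dataTable.get(new Get(dataRowKey)); // separate RPC every time
            if (dataRow.isEmpty()) {
                continue; // index row with no matching data row
            }
            // ... merge the uncovered columns from dataRow into the projected result
        }
    }
}
{code}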



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (PHOENIX-6458) Using global indexes for queries with uncovered columns

2022-02-24 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-6458:
--

Assignee: Lars Hofhansl  (was: Kadir OZDEMIR)

> Using global indexes for queries with uncovered columns
> ---
>
> Key: PHOENIX-6458
> URL: https://issues.apache.org/jira/browse/PHOENIX-6458
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.0
>Reporter: Kadir Ozdemir
>Assignee: Lars Hofhansl
>Priority: Major
> Attachments: PHOENIX-6458.master.001.patch, 
> PHOENIX-6458.master.002.patch
>
>
> The Phoenix query optimizer does not use a global index for a query that 
> references columns not covered by that index unless the query carries the 
> corresponding index hint. With the index hint, the optimizer rewrites the 
> query so that the index is used within a subquery. With this subquery, the 
> row keys of the index rows that satisfy the subquery are retrieved by the 
> Phoenix client and then pushed into the Phoenix server caches of the data 
> table regions. Finally, on the server side, data table rows are scanned and 
> joined with the index rows using HashJoin. Depending on the selectivity of 
> the original query, this join operation may still scan a large number of 
> data table rows. 
> Eliminating these data table scans would be a significant improvement. To do 
> that, instead of rewriting the query, the Phoenix optimizer simply treats the 
> global index as a covered index for the given query. With this, the Phoenix 
> query optimizer chooses the index table for the query, especially when the 
> index row key prefix length is greater than the data row key prefix length 
> for the query. On the server side, the index table is scanned using the index 
> row key ranges implied by the query, and the index row keys are then mapped 
> to the data table row keys (note that an index row key includes all the data 
> row key columns). Finally, the corresponding data table rows are scanned 
> using server-to-server RPCs. PHOENIX-6458 (this Jira) retrieves the data 
> table rows one by one using the HBase get operation; PHOENIX-6501 replaces 
> this get operation with a scan operation to reduce the number of 
> server-to-server RPC calls.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6647) A local index should not be chosen for a full scan if that scan is not covered by the index.

2022-02-09 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6647:
---
Attachment: 6647-v2-5.1.txt

> A local index should not be chosen for a full scan if that scan is not 
> covered by the index.
> 
>
> Key: PHOENIX-6647
> URL: https://issues.apache.org/jira/browse/PHOENIX-6647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6647-5.1.txt, 6647-v2-5.1.txt
>
>
> {code}
> > explain select * from lineitem;
> PLAN | EST_BYTES_READ | EST
> CLIENT 103-CHUNK 17711182 ROWS 1059064693 BYTES PARALLEL 2-WAY ROUND ROBIN RANGE SCAN OVER LINEITEM [1] | 1059064693 | 177
> SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT, 0.TAX, 0.RETURNFLAG, 0.LINESTATUS, 0.COMMITDATE, 0.RECEIPTDATE, 0.SHIPINSTRUCT, 0.SHIPMODE, 0.COMMENT] | 1059064693 | 177
> SERVER FILTER BY FIRST KEY ONLY | 1059064693 | 177
> 3 rows selected (0.056 seconds)
> {code}
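A quick way to compare against the data-table plan is to repeat the EXPLAIN with Phoenix's NO_INDEX hint, which keeps the optimizer off the index. A minimal JDBC sketch follows; the connection URL is a placeholder.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainNoIndexSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point it at the cluster that holds LINEITEM.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "EXPLAIN SELECT /*+ NO_INDEX */ * FROM LINEITEM")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // each row is one line of the plan
            }
        }
    }
}
{code}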



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6647) A local index should not be chosen for a full scan if that scan is not covered by the index.

2022-02-09 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6647:
---
Attachment: 6647-5.1.txt

> A local index should not be chosen for a full scan if that scan is not 
> covered by the index.
> 
>
> Key: PHOENIX-6647
> URL: https://issues.apache.org/jira/browse/PHOENIX-6647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6647-5.1.txt
>
>
> {code}
> > explain select * from lineitem;
> PLAN | EST_BYTES_READ | EST
> CLIENT 103-CHUNK 17711182 ROWS 1059064693 BYTES PARALLEL 2-WAY ROUND ROBIN RANGE SCAN OVER LINEITEM [1] | 1059064693 | 177
> SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT, 0.TAX, 0.RETURNFLAG, 0.LINESTATUS, 0.COMMITDATE, 0.RECEIPTDATE, 0.SHIPINSTRUCT, 0.SHIPMODE, 0.COMMENT] | 1059064693 | 177
> SERVER FILTER BY FIRST KEY ONLY | 1059064693 | 177
> 3 rows selected (0.056 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6647) A local index should not be chosen for a full scan if that scan is not covered by the index.

2022-02-09 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6647:
---
Description: 
{code}
> explain select * from lineitem;
PLAN | EST_BYTES_READ | EST
CLIENT 103-CHUNK 17711182 ROWS 1059064693 BYTES PARALLEL 2-WAY ROUND ROBIN RANGE SCAN OVER LINEITEM [1] | 1059064693 | 177
SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT, 0.TAX, 0.RETURNFLAG, 0.LINESTATUS, 0.COMMITDATE, 0.RECEIPTDATE, 0.SHIPINSTRUCT, 0.SHIPMODE, 0.COMMENT] | 1059064693 | 177
SERVER FILTER BY FIRST KEY ONLY | 1059064693 | 177
3 rows selected (0.056 seconds)
{code}


  was:
> explain select * from lineitem;
PLAN | EST_BYTES_READ | EST
CLIENT 103-CHUNK 17711182 ROWS 1059064693 BYTES PARALLEL 2-WAY ROUND ROBIN RANGE SCAN OVER LINEITEM [1] | 1059064693 | 177
SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT, 0.TAX, 0.RETURNFLAG, 0.LINESTATUS, 0.COMMITDATE, 0.RECEIPTDATE, 0.SHIPINSTRUCT, 0.SHIPMODE, 0.COMMENT] | 1059064693 | 177
SERVER FILTER BY FIRST KEY ONLY | 1059064693 | 177
3 rows selected (0.056 seconds)


> A local index should not be chosen for a full scan if that scan is not 
> covered by the index.
> 
>
> Key: PHOENIX-6647
> URL: https://issues.apache.org/jira/browse/PHOENIX-6647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> > explain select * from lineitem;
> PLAN | EST_BYTES_READ | EST
> CLIENT 103-CHUNK 17711182 ROWS 1059064693 BYTES PARALLEL 2-WAY ROUND ROBIN RANGE SCAN OVER LINEITEM [1] | 1059064693 | 177
> SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT, 0.TAX, 0.RETURNFLAG, 0.LINESTATUS, 0.COMMITDATE, 0.RECEIPTDATE, 0.SHIPINSTRUCT, 0.SHIPMODE, 0.COMMENT] | 1059064693 | 177
> SERVER FILTER BY FIRST KEY ONLY

[jira] [Created] (PHOENIX-6647) A local index should not be chosen for a full scan if that scan is not covered by the index.

2022-02-09 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6647:
--

 Summary: A local index should not be chosen for a full scan if 
that scan is not covered by the index.
 Key: PHOENIX-6647
 URL: https://issues.apache.org/jira/browse/PHOENIX-6647
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.2
Reporter: Lars Hofhansl


> explain select * from lineitem;
PLAN | EST_BYTES_READ | EST
CLIENT 103-CHUNK 17711182 ROWS 1059064693 BYTES PARALLEL 2-WAY ROUND ROBIN RANGE SCAN OVER LINEITEM [1] | 1059064693 | 177
SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT, 0.TAX, 0.RETURNFLAG, 0.LINESTATUS, 0.COMMITDATE, 0.RECEIPTDATE, 0.SHIPINSTRUCT, 0.SHIPMODE, 0.COMMENT] | 1059064693 | 177
SERVER FILTER BY FIRST KEY ONLY | 1059064693 | 177
3 rows selected (0.056 seconds)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (OMID-217) DISCUSS: Change some Omid defaults.

2021-12-23 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated OMID-217:
---
Description: 
Playing with Omid I would like to propose three settings and one code change:
# (server) waitStrategy: LOW_CPU (without this, the TSO server keeps one core 
100% busy)
# (client) postCommitMode: ASYNC 
(org.apache.omid.tso.client.OmidClientConfiguration$PostCommitMode; the client 
returns as soon as it is safe for correctness, and optimizations such as 
updating the shadow columns happen asynchronously)
# (client) conflictDetectionLevel: ROW 
(org.apache.omid.tso.client.OmidClientConfiguration$ConflictDetectionLevel; 
this is lighter than CELL)
# TSOChannelHandler.java:106: change the max packet size to 100 MB so that, if 
needed, the TSO can handle large transactions (up to 20M rows). Or at least 
make this a config option.

Let's discuss these...
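To make the client-side part concrete, a minimal sketch against OmidClientConfiguration; the setter names are assumed from the property names above and are not verified against the actual API, and the server-side waitStrategy lives in the TSO server configuration, so it is not shown.
{code:java}
import org.apache.omid.tso.client.OmidClientConfiguration;

public class OmidClientDefaultsSketch {

    static OmidClientConfiguration proposedClientDefaults() {
        OmidClientConfiguration conf = new OmidClientConfiguration();
        // Setter names assumed from the property names above, not verified.
        conf.setPostCommitMode(OmidClientConfiguration.PostCommitMode.ASYNC);
        conf.setConflictDetectionLevel(OmidClientConfiguration.ConflictDetectionLevel.ROW);
        return conf;
    }
}
{code}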

  was:
Playing with Omid I would like to propose two settings and one code change:
# (server) waitStrategy: LOW_CPU (without this, the TSO server will keep one 
core 100% busy)
# (client) postCommitMode: 
!!org.apache.omid.tso.client.OmidClientConfiguration$PostCommitMode ASYNC (the 
client returns when it is safe to do so for correctness, optimizations - 
updating shadow columns - happen asynchronously)
# (client) conflictDetectionLevel: 
!!org.apache.omid.tso.client.OmidClientConfiguration$ConflictDetectionLevel ROW 
(this is lighter than CELL)
# TSOChannelHandler.java:106 change the max packet size to 100MB, so that - if 
needed - the TSO can handle large transactions (up to 20m rows)

Let's discuss these...


> DISCUSS: Change some Omid defaults.
> ---
>
> Key: OMID-217
> URL: https://issues.apache.org/jira/browse/OMID-217
> Project: Phoenix Omid
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Priority: Major
>
> Playing with Omid I would like to propose three settings and one code change:
> # (server) waitStrategy: LOW_CPU (without this, the TSO server keeps one 
> core 100% busy)
> # (client) postCommitMode: ASYNC 
> (org.apache.omid.tso.client.OmidClientConfiguration$PostCommitMode; the 
> client returns as soon as it is safe for correctness, and optimizations such 
> as updating the shadow columns happen asynchronously)
> # (client) conflictDetectionLevel: ROW 
> (org.apache.omid.tso.client.OmidClientConfiguration$ConflictDetectionLevel; 
> this is lighter than CELL)
> # TSOChannelHandler.java:106: change the max packet size to 100 MB so that, 
> if needed, the TSO can handle large transactions (up to 20M rows). Or at 
> least make this a config option.
> Let's discuss these...



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (OMID-217) DISCUSS: Change some Omid defaults.

2021-12-23 Thread Lars Hofhansl (Jira)
Lars Hofhansl created OMID-217:
--

 Summary: DISCUSS: Change some Omid defaults.
 Key: OMID-217
 URL: https://issues.apache.org/jira/browse/OMID-217
 Project: Phoenix Omid
  Issue Type: Wish
Reporter: Lars Hofhansl


Playing with Omid I would like to propose three settings and one code change:
# (server) waitStrategy: LOW_CPU (without this, the TSO server keeps one core 
100% busy)
# (client) postCommitMode: ASYNC 
(org.apache.omid.tso.client.OmidClientConfiguration$PostCommitMode; the client 
returns as soon as it is safe for correctness, and optimizations such as 
updating the shadow columns happen asynchronously)
# (client) conflictDetectionLevel: ROW 
(org.apache.omid.tso.client.OmidClientConfiguration$ConflictDetectionLevel; 
this is lighter than CELL)
# TSOChannelHandler.java:106: change the max packet size to 100 MB so that, if 
needed, the TSO can handle large transactions (up to 20M rows)

Let's discuss these...



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (TEPHRA-320) Compiling Tephra with Hadoop 3.3.1

2021-12-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/TEPHRA-320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated TEPHRA-320:
-
Description: 
Turns out this was missing Hadoop's thirdparty library.
{code}
diff --git a/tephra-distribution/pom.xml b/tephra-distribution/pom.xml
index 6a9494e2..1f192c3d 100644
--- a/tephra-distribution/pom.xml
+++ b/tephra-distribution/pom.xml
@@ -97,6 +97,11 @@
       <artifactId>tephra-hbase-compat-2.3</artifactId>
       <version>${project.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop.thirdparty</groupId>
+      <artifactId>hadoop-shaded-guava</artifactId>
+      <version>1.1.1</version>
+    </dependency>
{code}

  was:
Turns out this was missing Hadoop's thirdparty library.

diff --git a/tephra-distribution/pom.xml b/tephra-distribution/pom.xml
index 6a9494e2..1f192c3d 100644
--- a/tephra-distribution/pom.xml
+++ b/tephra-distribution/pom.xml
@@ -97,6 +97,11 @@
       <artifactId>tephra-hbase-compat-2.3</artifactId>
       <version>${project.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop.thirdparty</groupId>
+      <artifactId>hadoop-shaded-guava</artifactId>
+      <version>1.1.1</version>
+    </dependency>





> Compiling Tephra with Hadoop 3.3.1
> --
>
> Key: TEPHRA-320
> URL: https://issues.apache.org/jira/browse/TEPHRA-320
> Project: Phoenix Tephra
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Poorna Chandra
>Priority: Major
>
> Turns out this was missing Hadoop's thirdparty library.
> {code}
> diff --git a/tephra-distribution/pom.xml b/tephra-distribution/pom.xml
> index 6a9494e2..1f192c3d 100644
> --- a/tephra-distribution/pom.xml
> +++ b/tephra-distribution/pom.xml
> @@ -97,6 +97,11 @@
>        <artifactId>tephra-hbase-compat-2.3</artifactId>
>        <version>${project.version}</version>
>      </dependency>
> +    <dependency>
> +      <groupId>org.apache.hadoop.thirdparty</groupId>
> +      <artifactId>hadoop-shaded-guava</artifactId>
> +      <version>1.1.1</version>
> +    </dependency>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (TEPHRA-319) HBase 2.4 is missing from Tephra-Distribution

2021-12-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/TEPHRA-319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated TEPHRA-319:
-
Description: 
Just needs this:
{code}
diff --git a/tephra-distribution/pom.xml b/tephra-distribution/pom.xml
index 6a9494e2..1f192c3d 100644
--- a/tephra-distribution/pom.xml
+++ b/tephra-distribution/pom.xml
@@ -97,6 +97,11 @@
       <artifactId>tephra-hbase-compat-2.3</artifactId>
       <version>${project.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.tephra</groupId>
+      <artifactId>tephra-hbase-compat-2.4</artifactId>
+      <version>${project.version}</version>
+    </dependency>
{code}

  was:
Just needs this:

diff --git a/tephra-distribution/pom.xml b/tephra-distribution/pom.xml
index 6a9494e2..1f192c3d 100644
--- a/tephra-distribution/pom.xml
+++ b/tephra-distribution/pom.xml
@@ -97,6 +97,11 @@
       <artifactId>tephra-hbase-compat-2.3</artifactId>
       <version>${project.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.tephra</groupId>
+      <artifactId>tephra-hbase-compat-2.4</artifactId>
+      <version>${project.version}</version>
+    </dependency>





> HBase 2.4 is missing from Tephra-Distribution
> -
>
> Key: TEPHRA-319
> URL: https://issues.apache.org/jira/browse/TEPHRA-319
> Project: Phoenix Tephra
>  Issue Type: Bug
>Affects Versions: 0.16.1
>Reporter: Lars Hofhansl
>Assignee: Poorna Chandra
>Priority: Major
>
> Just needs this:
> {code}
> diff --git a/tephra-distribution/pom.xml b/tephra-distribution/pom.xml
> index 6a9494e2..1f192c3d 100644
> --- a/tephra-distribution/pom.xml
> +++ b/tephra-distribution/pom.xml
> @@ -97,6 +97,11 @@
>        <artifactId>tephra-hbase-compat-2.3</artifactId>
>        <version>${project.version}</version>
>      </dependency>
> +    <dependency>
> +      <groupId>org.apache.tephra</groupId>
> +      <artifactId>tephra-hbase-compat-2.4</artifactId>
> +      <version>${project.version}</version>
> +    </dependency>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (TEPHRA-320) Compiling Tephra with Hadoop 3.3.1

2021-12-21 Thread Lars Hofhansl (Jira)
Lars Hofhansl created TEPHRA-320:


 Summary: Compiling Tephra with Hadoop 3.3.1
 Key: TEPHRA-320
 URL: https://issues.apache.org/jira/browse/TEPHRA-320
 Project: Phoenix Tephra
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Poorna Chandra


Turns out this was missing Hadoop's thirdparty library.

diff --git a/tephra-distribution/pom.xml b/tephra-distribution/pom.xml
index 6a9494e2..1f192c3d 100644
--- a/tephra-distribution/pom.xml
+++ b/tephra-distribution/pom.xml
@@ -97,6 +97,11 @@
       <artifactId>tephra-hbase-compat-2.3</artifactId>
       <version>${project.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop.thirdparty</groupId>
+      <artifactId>hadoop-shaded-guava</artifactId>
+      <version>1.1.1</version>
+    </dependency>






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (TEPHRA-319) HBase 2.4 is missing from Tephra-Distribution

2021-12-21 Thread Lars Hofhansl (Jira)
Lars Hofhansl created TEPHRA-319:


 Summary: HBase 2.4 is missing from Tephra-Distribution
 Key: TEPHRA-319
 URL: https://issues.apache.org/jira/browse/TEPHRA-319
 Project: Phoenix Tephra
  Issue Type: Bug
Affects Versions: 0.16.1
Reporter: Lars Hofhansl
Assignee: Poorna Chandra


Just needs this:

diff --git a/tephra-distribution/pom.xml b/tephra-distribution/pom.xml
index 6a9494e2..1f192c3d 100644
--- a/tephra-distribution/pom.xml
+++ b/tephra-distribution/pom.xml
@@ -97,6 +97,11 @@
       <artifactId>tephra-hbase-compat-2.3</artifactId>
       <version>${project.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.tephra</groupId>
+      <artifactId>tephra-hbase-compat-2.4</artifactId>
+      <version>${project.version}</version>
+    </dependency>






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6615) The Tephra transaction processor cannot be loaded anymore.

2021-12-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6615:
---
Description: 
See
 # TransactionFactory
 # TephraTransactionProvider

Can you spot the problem? :)  (Hint: The constructor is private.)
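A stripped-down illustration of the failure mode (not the actual Phoenix/Tephra classes): when a provider is instantiated reflectively by class name, a private no-arg constructor makes the lookup fail.
{code:java}
public class PrivateCtorLoadSketch {

    // Stand-in for a provider whose only constructor is private.
    static class Provider {
        private Provider() {}
        static Provider getInstance() { return new Provider(); }
    }

    public static void main(String[] args) {
        try {
            // getConstructor() only returns public constructors, so this throws
            // NoSuchMethodException for the class above.
            Provider p = Provider.class.getConstructor().newInstance();
            System.out.println("loaded " + p);
        } catch (ReflectiveOperationException e) {
            System.out.println("cannot load provider: " + e);
        }
    }
}
{code}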

Broken since PHOENIX-6064. [~stoty] .

Can I just say... Unless I am missing something... How could we not have 
noticed that one of the transaction processors has not been working since 
August (in 5.x at least)? Is really nobody using the transaction engines?

 

  was:
See
 # TransactionFactory
 # TephraTransactionProvider

Can you spot the problem? :)  (Hint: The constructor is private.)

Broken since PHOENIX-6064. [~stoty] .

Can I just say... Unless I am missing something... How could we not have 
noticed that one of the transaction processors does not work since August? Is 
really nobody using the transaction engines?

 


> The Tephra transaction processor cannot be loaded anymore.
> --
>
> Key: PHOENIX-6615
> URL: https://issues.apache.org/jira/browse/PHOENIX-6615
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6615.txt
>
>
> See
>  # TransactionFactory
>  # TephraTransactionProvider
> Can you spot the problem? :)  (Hint: The constructor is private.)
> Broken since PHOENIX-6064. [~stoty] .
> Can I just say... Unless I am missing something... How could we not have 
> noticed that one of the transaction processors has not been working since 
> August (in 5.x at least)? Is really nobody using the transaction engines?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6615) The Tephra transaction processor cannot be loaded anymore.

2021-12-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6615:
---
Attachment: 6615.txt

> The Tephra transaction processor cannot be loaded anymore.
> --
>
> Key: PHOENIX-6615
> URL: https://issues.apache.org/jira/browse/PHOENIX-6615
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6615.txt
>
>
> See
>  # TransactionFactory
>  # TephraTransactionProvider
> Can you spot the problem? :)  (Hint: The constructor is private.)
> Broken since PHOENIX-6064. [~stoty] .
> Can I just say... Unless I am missing something... How could we not have 
> noticed that one of the transaction processors does not work since August? Is 
> really nobody using the transaction engines?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6615) The Tephra transaction processor cannot be loaded anymore.

2021-12-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6615:
---
Description: 
See
 # TransactionFactory
 # TephraTransactionProvider

Can you spot the problem? :)  (Hint: The constructor is private.)

Broken since PHOENIX-6064. [~stoty] .

Can I just say... Unless I am missing something... How could we not have 
noticed that one of the transaction processors does not work since August? Is 
really nobody using the transaction engines?

 

  was:
See
 # TransactionFactory
 # TephraTransactionProvider

Can you spot the problem? :)  (Hint: The constructor is private.)

Broken since PHOENIX-6064. [~stoty] .

Can I just say... Unless I am missing something... How could we not have 
noticed that one of the transaction processors does not work since August? I 
really nobody using the transaction engines?

 


> The Tephra transaction processor cannot be loaded anymore.
> --
>
> Key: PHOENIX-6615
> URL: https://issues.apache.org/jira/browse/PHOENIX-6615
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
>
> See
>  # TransactionFactory
>  # TephraTransactionProvider
> Can you spot the problem? :)  (Hint: The constructor is private.)
> Broken since PHOENIX-6064. [~stoty] .
> Can I just say... Unless I am missing something... How could we not have 
> noticed that one of the transaction processors does not work since August? Is 
> really nobody using the transaction engines?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6615) The Tephra transaction processor cannot be loaded anymore.

2021-12-20 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6615:
--

 Summary: The Tephra transaction processor cannot be loaded anymore.
 Key: PHOENIX-6615
 URL: https://issues.apache.org/jira/browse/PHOENIX-6615
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.2
Reporter: Lars Hofhansl


See
 # TransactionFactory
 # TephraTransactionProvider

Can you spot the problem? :)  (Hint: The constructor is private.)

Broken since PHOENIX-6064. [~stoty] .

Can I just say... Unless I am missing something... How could we not have 
noticed that one of the transaction processors does not work since August? Is 
really nobody using the transaction engines?

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (PHOENIX-6604) Allow using indexes for wildcard topN queries on salted tables

2021-12-16 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-6604:
--

Assignee: Lars Hofhansl

> Allow using indexes for wildcard topN queries on salted tables
> --
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
> Attachments:  PHOENIX-6604.5.1.3.v1.patch, 6604-1.5.1.3, 6604.5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;
> CREATE LOCAL INDEX l_shipdate ON lineitem(shipdate);{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +---+
> |                                          PLAN                               
>       |
> +---+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
> OVER LI |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                   
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT LIMIT 1                                                              
>       |
> +---+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +---+
> |                                                                             
>       |
> +---+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1] 
>       |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
> 0.DISCOUNT,  |
> |     SERVER FILTER BY FIRST KEY ONLY                                         
>       |
> |     SERVER 1 ROW LIMIT                                                      
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT 1 ROW LIMIT                                                          
>       |
> +---+
> 6 rows selected (2.736 seconds){code}
>  
> The same happens with a covered global index.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6604) Allow using indexes for wildcard topN queries on salted tables

2021-12-16 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Fix Version/s: 5.2.0

> Allow using indexes for wildcard topN queries on salted tables
> --
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
> Attachments:  PHOENIX-6604.5.1.3.v1.patch, 6604-1.5.1.3, 6604.5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;
> CREATE LOCAL INDEX l_shipdate ON lineitem(shipdate);{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +---+
> |                                          PLAN                               
>       |
> +---+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
> OVER LI |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                   
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT LIMIT 1                                                              
>       |
> +---+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +---+
> |                                                                             
>       |
> +---+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1] 
>       |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
> 0.DISCOUNT,  |
> |     SERVER FILTER BY FIRST KEY ONLY                                         
>       |
> |     SERVER 1 ROW LIMIT                                                      
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT 1 ROW LIMIT                                                          
>       |
> +---+
> 6 rows selected (2.736 seconds){code}
>  
> The same happens with a covered global index.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6604) Allow using indexes for wildcard topN queries on salted tables

2021-12-04 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Summary: Allow using indexes for wildcard topN queries on salted tables  
(was: Allow using indexes for wildcard topN query on salted tables)

> Allow using indexes for wildcard topN queries on salted tables
> --
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.3
>
> Attachments: 6604-1.5.1.3, 6604.5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;
> CREATE LOCAL INDEX l_shipdate ON lineitem(shipdate);{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +---+
> |                                          PLAN                               
>       |
> +---+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
> OVER LI |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                   
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT LIMIT 1                                                              
>       |
> +---+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +---+
> |                                                                             
>       |
> +---+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1] 
>       |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
> 0.DISCOUNT,  |
> |     SERVER FILTER BY FIRST KEY ONLY                                         
>       |
> |     SERVER 1 ROW LIMIT                                                      
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT 1 ROW LIMIT                                                          
>       |
> +---+
> 6 rows selected (2.736 seconds){code}
>  
> The same happens with a covered global index.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6604) Allow using indexes for wildcard topN query on salted tables

2021-12-04 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Summary: Allow using indexes for wildcard topN query on salted tables  
(was: Index not used for wildcard topN query on salted table)

> Allow using indexes for wildcard topN query on salted tables
> 
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.3
>
> Attachments: 6604-1.5.1.3, 6604.5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;
> CREATE LOCAL INDEX l_shipdate ON lineitem(shipdate);{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +---+
> |                                          PLAN                               
>       |
> +---+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
> OVER LI |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                   
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT LIMIT 1                                                              
>       |
> +---+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +---+
> |                                                                             
>       |
> +---+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1] 
>       |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
> 0.DISCOUNT,  |
> |     SERVER FILTER BY FIRST KEY ONLY                                         
>       |
> |     SERVER 1 ROW LIMIT                                                      
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT 1 ROW LIMIT                                                          
>       |
> +---+
> 6 rows selected (2.736 seconds){code}
>  
> The same happens with a covered global index.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6608) DISCUSS: Rethink MapReduce split generation

2021-12-04 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6608:
--

 Summary: DISCUSS: Rethink MapReduce split generation
 Key: PHOENIX-6608
 URL: https://issues.apache.org/jira/browse/PHOENIX-6608
 Project: Phoenix
  Issue Type: Improvement
Reporter: Lars Hofhansl


I just ran into an issue with Trino, which uses Phoenix' M/R integration to 
generate splits for its worker nodes.

See: [https://github.com/trinodb/trino/issues/10143]

And a fix: [https://github.com/trinodb/trino/pull/10153]

In short, the issue is that with large data sizes and guideposts enabled (the 
default), Phoenix's RoundRobinResultIterator starts scanning as soon as tasks 
are submitted to the queue. For large datasets (per client) this fills the 
heap with pre-fetched HBase Result objects.

The MapReduce (and Spark) integrations presumably have the same issue.

My proposed solution: instead of letting Phoenix do intra-split parallelism, 
we create more splits (the fix above groups 20 scans into one split; 20 turned 
out to be a good number).
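A minimal sketch of that grouping (generic types as placeholders for Phoenix's per-region scans and input splits): chunk the ordered scan list into fixed-size groups and emit one split per group, so parallelism comes from the number of splits rather than from intra-split prefetching.
{code:java}
import java.util.ArrayList;
import java.util.List;

public class ScanGroupingSketch {

    static final int SCANS_PER_SPLIT = 20; // the group size that worked well in the Trino fix

    // Group an ordered list of scans into splits of at most SCANS_PER_SPLIT each.
    static <S> List<List<S>> groupIntoSplits(List<S> scans) {
        List<List<S>> splits = new ArrayList<>();
        for (int i = 0; i < scans.size(); i += SCANS_PER_SPLIT) {
            int end = Math.min(i + SCANS_PER_SPLIT, scans.size());
            splits.add(new ArrayList<>(scans.subList(i, end)));
        }
        return splits;
    }
}
{code}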



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6604) Index not used for wildcard topN query on salted table

2021-11-27 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Description: 
Just randomly came across this, playing with TPCH data.
{code:java}
CREATE TABLE lineitem (
 orderkey bigint not null,
 partkey bigint,
 suppkey bigint,
 linenumber integer not null,
 quantity double,
 extendedprice double,
 discount double,
 tax double,
 returnflag varchar(1),
 linestatus varchar(1),
 shipdate date,
 commitdate date,
 receiptdate date,
 shipinstruct varchar(25),
 shipmode varchar(10),
 comment varchar(44)
 constraint pk primary key(orderkey, linenumber)) 
IMMUTABLE_ROWS=true,SALT_BUCKETS=4;

CREATE LOCAL INDEX l_shipdate ON lineitem(shipdate);{code}
Now:
{code:java}
 > explain select * from lineitem order by shipdate limit 1;
+---+
|                                          PLAN                                 
    |
+---+
| CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
OVER LI |
|     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                     
    |
| CLIENT MERGE SORT                                                             
    |
| CLIENT LIMIT 1                                                                
    |
+---+
4 rows selected (6.525 seconds)

-- SAME COLUMNS!
> explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by shipdate 
> limit 1;
+---+
|                                                                               
    |
+---+
| CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1]   
    |
|     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
0.DISCOUNT,  |
|     SERVER FILTER BY FIRST KEY ONLY                                           
    |
|     SERVER 1 ROW LIMIT                                                        
    |
| CLIENT MERGE SORT                                                             
    |
| CLIENT 1 ROW LIMIT                                                            
    |
+---+
6 rows selected (2.736 seconds){code}
 

The same happens with a covered global index.

  was:
Just randomly came across this, playing with TPCH data.
{code:java}
CREATE TABLE lineitem (
 orderkey bigint not null,
 partkey bigint,
 suppkey bigint,
 linenumber integer not null,
 quantity double,
 extendedprice double,
 discount double,
 tax double,
 returnflag varchar(1),
 linestatus varchar(1),
 shipdate date,
 commitdate date,
 receiptdate date,
 shipinstruct varchar(25),
 shipmode varchar(10),
 comment varchar(44)
 constraint pk primary key(orderkey, linenumber)) 
IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
Now:
{code:java}
 > explain select * from lineitem order by shipdate limit 1;
+---+
|                                          PLAN                                 
    |
+---+
| CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
OVER LI |
|     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                     
    |
| CLIENT MERGE SORT                                                             
    |
| CLIENT LIMIT 1                                                                
    |
+---+
4 rows selected (6.525 seconds)

-- SAME COLUMNS!
> explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by shipdate 
> limit 1;
+---+
|                                                                               
    |
+---+
| CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1]   
    |
|     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
0.DISCOUNT,  |
|     SERVER FILTER BY FIRST KEY ONLY                                           
    |
|     SERVER 1 ROW LIMIT                                                        
   

[jira] [Updated] (PHOENIX-6604) Index not used for wildcard topN query on salted table

2021-11-25 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Attachment: 6604-1.5.1.3

> Index not used for wildcard topN query on salted table
> --
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.3
>
> Attachments: 6604-1.5.1.3, 6604.5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +---+
> |                                          PLAN                               
>       |
> +---+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
> OVER LI |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                   
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT LIMIT 1                                                              
>       |
> +---+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +---+
> |                                                                             
>       |
> +---+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1] 
>       |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
> 0.DISCOUNT,  |
> |     SERVER FILTER BY FIRST KEY ONLY                                         
>       |
> |     SERVER 1 ROW LIMIT                                                      
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT 1 ROW LIMIT                                                          
>       |
> +---+
> 6 rows selected (2.736 seconds){code}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6604) Index not used for wildcard topN query on salted table

2021-11-24 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Attachment: 6604.5.1.3

> Index not used for wildcard topN query on salted table
> --
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.3
>
> Attachments: 6604.5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +---+
> |                                          PLAN                               
>       |
> +---+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
> OVER LI |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                   
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT LIMIT 1                                                              
>       |
> +---+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +---+
> |                                                                             
>       |
> +---+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1] 
>       |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
> 0.DISCOUNT,  |
> |     SERVER FILTER BY FIRST KEY ONLY                                         
>       |
> |     SERVER 1 ROW LIMIT                                                      
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT 1 ROW LIMIT                                                          
>       |
> +---+
> 6 rows selected (2.736 seconds){code}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6604) Index not used for wildcard topN query on salted table

2021-11-24 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Summary: Index not used for wildcard topN query on salted table  (was: 
Local index not used for wildcard topN query on salted table)

> Index not used for wildcard topN query on salted table
> --
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +---+
> |                                          PLAN                               
>       |
> +---+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
> OVER LI |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                   
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT LIMIT 1                                                              
>       |
> +---+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +---+
> |                                                                             
>       |
> +---+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1] 
>       |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
> 0.DISCOUNT,  |
> |     SERVER FILTER BY FIRST KEY ONLY                                         
>       |
> |     SERVER 1 ROW LIMIT                                                      
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT 1 ROW LIMIT                                                          
>       |
> +---+
> 6 rows selected (2.736 seconds){code}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6604) Local index not used for wildcard topN query on salted table

2021-11-24 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Description: 
Just randomly came across this, playing with TPCH data.
{code:java}
CREATE TABLE lineitem (
 orderkey bigint not null,
 partkey bigint,
 suppkey bigint,
 linenumber integer not null,
 quantity double,
 extendedprice double,
 discount double,
 tax double,
 returnflag varchar(1),
 linestatus varchar(1),
 shipdate date,
 commitdate date,
 receiptdate date,
 shipinstruct varchar(25),
 shipmode varchar(10),
 comment varchar(44)
 constraint pk primary key(orderkey, linenumber)) 
IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
Now:
{code:java}
 > explain select * from lineitem order by shipdate limit 1;
+---+
|                                          PLAN                                 
    |
+---+
| CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
OVER LI |
|     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                     
    |
| CLIENT MERGE SORT                                                             
    |
| CLIENT LIMIT 1                                                                
    |
+---+
4 rows selected (6.525 seconds)

-- SAME COLUMNS!
> explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by shipdate 
> limit 1;
+---+
|                                                                               
    |
+---+
| CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1]   
    |
|     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
0.DISCOUNT,  |
|     SERVER FILTER BY FIRST KEY ONLY                                           
    |
|     SERVER 1 ROW LIMIT                                                        
    |
| CLIENT MERGE SORT                                                             
    |
| CLIENT 1 ROW LIMIT                                                            
    |
+---+
6 rows selected (2.736 seconds){code}
 

  was:
Just randomly came across this, playing with TPCH data.
{code:java}
CREATE TABLE lineitem (
 orderkey bigint not null,
 partkey bigint,
 suppkey bigint,
 linenumber integer not null,
 quantity double,
 extendedprice double,
 discount double,
 tax double,
 returnflag varchar(1),
 linestatus varchar(1),
 shipdate date,
 commitdate date,
 receiptdate date,
 shipinstruct varchar(25),
 shipmode varchar(10),
 comment varchar(44)
 constraint pk primary key(orderkey, linenumber)) 
DATA_BLOCK_ENCODING='ROW_INDEX_V1', COMPRESSION='ZSTD', DISABLE_WAL=true, 
IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
Now:
{code:java}


 > explain select * from lineitem order by shipdate limit 1;
+---+
|                                          PLAN                                 
    |
+---+
| CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
OVER LI |
|     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                     
    |
| CLIENT MERGE SORT                                                             
    |
| CLIENT LIMIT 1                                                                
    |
+---+
4 rows selected (6.525 seconds)

-- SAME COLUMNS!
> explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by shipdate 
> limit 1;
+---+
|                                                                               
    |
+---+
| CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1]   
    |
|     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
0.DISCOUNT,  |
|     SERVER FILTER BY FIRST KEY ONLY                                           
    |
|     SERVER 1 ROW LIMIT                                                        
    |
| CLIENT MERGE SORT  

[jira] [Updated] (PHOENIX-6604) Local index not used for wildcard topN query on salted table

2021-11-24 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6604:
---
Affects Version/s: 5.1.2

> Local index not used for wildcard topN query on salted table
> 
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> DATA_BLOCK_ENCODING='ROW_INDEX_V1', COMPRESSION='ZSTD', DISABLE_WAL=true, 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +---+
> |                                          PLAN                               
>       |
> +---+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN 
> OVER LI |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                   
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT LIMIT 1                                                              
>       |
> +---+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
> > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, 
> > EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, 
> > RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by 
> > shipdate limit 1;
> +---+
> |                                                                             
>       |
> +---+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1] 
>       |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 
> 0.DISCOUNT,  |
> |     SERVER FILTER BY FIRST KEY ONLY                                         
>       |
> |     SERVER 1 ROW LIMIT                                                      
>       |
> | CLIENT MERGE SORT                                                           
>       |
> | CLIENT 1 ROW LIMIT                                                          
>       |
> +---+
> 6 rows selected (2.736 seconds){code}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6604) Local index not used for wildcard topN query on salted table

2021-11-24 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6604:
--

 Summary: Local index not used for wildcard topN query on salted 
table
 Key: PHOENIX-6604
 URL: https://issues.apache.org/jira/browse/PHOENIX-6604
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 5.1.3


Just randomly came across this, playing with TPCH data.
{code:java}
CREATE TABLE lineitem (
 orderkey bigint not null,
 partkey bigint,
 suppkey bigint,
 linenumber integer not null,
 quantity double,
 extendedprice double,
 discount double,
 tax double,
 returnflag varchar(1),
 linestatus varchar(1),
 shipdate date,
 commitdate date,
 receiptdate date,
 shipinstruct varchar(25),
 shipmode varchar(10),
 comment varchar(44)
 constraint pk primary key(orderkey, linenumber)) 
DATA_BLOCK_ENCODING='ROW_INDEX_V1', COMPRESSION='ZSTD', DISABLE_WAL=true, 
IMMUTABLE_ROWS=true,SALT_BUCKETS=4;{code}
Now:
{code:java}


 > explain select * from lineitem order by shipdate limit 1;
+------------------------------------------------------------------------------------+
|                                         PLAN                                        |
+------------------------------------------------------------------------------------+
| CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN OVER LI   |
|     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                            |
| CLIENT MERGE SORT                                                                    |
| CLIENT LIMIT 1                                                                       |
+------------------------------------------------------------------------------------+
4 rows selected (6.525 seconds)

-- SAME COLUMNS!
> explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, EXTENDEDPRICE,
  DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, RECEIPTDATE,
  SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by shipdate limit 1;
+------------------------------------------------------------------------------------+
|                                         PLAN                                        |
+------------------------------------------------------------------------------------+
| CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1]         |
|     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT,    |
|     SERVER FILTER BY FIRST KEY ONLY                                                  |
|     SERVER 1 ROW LIMIT                                                               |
| CLIENT MERGE SORT                                                                    |
| CLIENT 1 ROW LIMIT                                                                   |
+------------------------------------------------------------------------------------+
6 rows selected (2.736 seconds){code}
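
The local index implied by the title is not shown above. A minimal sketch of the missing 
piece and the two query shapes (the index name and column choice are assumptions):
{code:java}
-- assumption: a local index on the ORDER BY column
CREATE LOCAL INDEX lineitem_shipdate_idx ON lineitem(shipdate);

-- wildcard projection: the local index is not used (full scan, first plan above)
EXPLAIN SELECT * FROM lineitem ORDER BY shipdate LIMIT 1;

-- the same columns listed explicitly: the local index is used (range scan, second plan above)
EXPLAIN SELECT orderkey, partkey, suppkey, linenumber, quantity, extendedprice, discount,
    tax, returnflag, linestatus, shipdate, commitdate, receiptdate, shipinstruct,
    shipmode, comment
FROM lineitem ORDER BY shipdate LIMIT 1;
{code}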
 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-5072) Cursor Query Loops Eternally with Local Index, Returns Fine Without It

2021-06-14 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5072:
---
Affects Version/s: 5.1.2

> Cursor Query Loops Eternally with Local Index, Returns Fine Without It
> --
>
> Key: PHOENIX-5072
> URL: https://issues.apache.org/jira/browse/PHOENIX-5072
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 5.2.0, 5.1.2
>Reporter: Jack Steenkamp
>Priority: Major
> Attachments: PhoenixEternalCursorTest.java
>
>
>  
> I have come across a case where a particular cursor query carries on looping 
> forever when a local index is present. If, however, I execute the same query 
> without a local index on the table, it works as expected.
> You can reproduce this by executing the attached standalone test case. You 
> only need to modify the JDBC_URL constant (by default it tries to connect to 
> localhost) and then compare the outputs with CREATE_INDEX = true versus 
> CREATE_INDEX = false.
> Here is an example of the output: 
> *1) Connect to an environment and create a simple table:*
> {code:java}
> Connecting To : jdbc:phoenix:localhost:63214{code}
> {code:java}
> CREATE TABLE IF NOT EXISTS SOME_NUMBERS
> (
>    ID                             VARCHAR    NOT NULL,
>    NAME                           VARCHAR    ,
>    ANOTHER_VALUE                  VARCHAR    ,
>    TRANSACTION_TIME               TIMESTAMP  ,
>    CONSTRAINT pk PRIMARY KEY(ID)
> ) IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN,
> UPDATE_CACHE_FREQUENCY=90,
> COLUMN_ENCODED_BYTES=NONE,
> IMMUTABLE_ROWS=true{code}
> *2) Optionally create a local index:*
>  
> If you want to reproduce the failure, create an index:
> {code:java}
> CREATE LOCAL INDEX index_01 ON SOME_NUMBERS(NAME, TRANSACTION_TIME DESC) 
> INCLUDE(ANOTHER_VALUE){code}
> Otherwise, skip this.
> *3) Insert a number of objects and verify their count*
> {code:java}
> System.out.println("\nInserting Some Items");
> DecimalFormat dmf = new DecimalFormat("");
> final String prefix = "ReferenceData.Country/";
> for (int i = 0; i < 5; i++)
> {
>   for (int j = 0; j < 2; j++)
>   {
> PreparedStatement prstmt = conn.prepareStatement("UPSERT INTO 
> SOME_NUMBERS VALUES(?,?,?,?)");
> prstmt.setString(1,UUID.randomUUID().toString());
> prstmt.setString(2,prefix + dmf.format(i));
> prstmt.setString(3,UUID.randomUUID().toString());
> prstmt.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
> prstmt.execute();
> conn.commit();
> prstmt.close();
>   }
> }{code}
> *4) Verify the count afterwards with:*
> {code:java}
> SELECT COUNT(1) AS TOTAL_ITEMS FROM SOME_NUMBERS {code}
> *5) Run a Cursor Query*
> Run a cursor using the standard sequence of commands as appropriate:
> {code:java}
> DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM SOME_NUMBERS where 
> NAME like 'ReferenceData.Country/%' ORDER BY TRANSACTION_TIME DESC{code}
> {code:java}
> OPEN MyCursor{code}
> {code:java}
> FETCH NEXT 10 ROWS FROM MyCursor{code}
>  * Without an index it will return the correct number of rows
> {code:java}
> Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
> SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
> TRANSACTION_TIME DESC
> CLOSING THE CURSOR
> Result : 0
> ITEMS returned by count : 10 | Items Returned by Cursor : 10
> ALL GOOD - No Exception{code}
>  * With an index it will return far more rows than the count (it appears 
> to be erroneously looping forever - hence the test case terminates it).
> {code:java}
> Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
> SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
> TRANSACTION_TIME DESC
> ITEMS returned by count : 10 | Items Returned by Cursor : 40
> Aborting the Cursor, as it is more than the count!
> Exception in thread "main" java.lang.RuntimeException: The cursor returned a 
> different number of rows from the count !! {code}
>  
>  
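
For reference, the cursor lifecycle the steps above walk through amounts to the following 
sequence (a sketch assembled from the description; the CLOSE statement is implied by the 
"CLOSING THE CURSOR" line in the output):
{code:java}
DECLARE MyCursor CURSOR FOR
    SELECT NAME, ANOTHER_VALUE FROM SOME_NUMBERS
    WHERE NAME LIKE 'ReferenceData.Country/%'
    ORDER BY TRANSACTION_TIME DESC;

OPEN MyCursor;

-- repeated until no rows come back; with the local index present the loop never terminates
FETCH NEXT 10 ROWS FROM MyCursor;

CLOSE MyCursor;
{code}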



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5639) Exception during DROP TABLE CASCADE with VIEW and INDEX in phoenix_sandbox

2021-05-26 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5639:
---
Fix Version/s: (was: 4.16.2)
   (was: 5.2.0)
   (was: 4.17.0)

> Exception during DROP TABLE CASCADE with VIEW and INDEX in phoenix_sandbox
> --
>
> Key: PHOENIX-5639
> URL: https://issues.apache.org/jira/browse/PHOENIX-5639
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Trivial
>
> {code:java}
> > CREATE TABLE TEST_5 (ID INTEGER NOT NULL PRIMARY KEY, HOST VARCHAR(10), 
> > FLAG BOOLEAN);
> > CREATE VIEW TEST_5_VIEW (col1 INTEGER, col2 INTEGER, col3 INTEGER, col4 
> > INTEGER, col5 INTEGER) AS SELECT * FROM TEST_5 WHERE ID>10;
> > CREATE INDEX TEST_5_INDEX ON TEST_5_VIEW(COL4);
> > DROP TABLE test_5 CASCADE;{code}
> Table, view, and index are dropped, but in the sandbox's log I see:
> {code:java}
> 19/12/18 07:17:41 WARN iterate.BaseResultIterators: Unable to find parent 
> table "TEST_5_VIEW" of table "TEST_5_INDEX" to determine 
> USE_STATS_FOR_PARALLELIZATION
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=TEST_5_VIEW
> at 
> org.apache.phoenix.schema.PMetaDataImpl.getTableRef(PMetaDataImpl.java:73)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.getTable(PhoenixConnection.java:584)
> at 
> org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getStatsForParallelizationProp(PhoenixConfigurationUtil.java:712)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:513)
> at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
> at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:69)
> at 
> org.apache.phoenix.execute.AggregatePlan.newIterator(AggregatePlan.java:287)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:365)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:207)
> at 
> org.apache.phoenix.compile.PostDDLCompiler$PostDDLMutationPlan.execute(PostDDLCompiler.java:273)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:4426)
> at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.updateData(DelegateConnectionQueryServices.java:166)
> at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:3238)
> at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:3055)
> at org.apache.phoenix.util.ViewUtil.dropChildViews(ViewUtil.java:218)
> at 
> org.apache.phoenix.coprocessor.tasks.DropChildViewsTask.run(DropChildViewsTask.java:63)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.phoenix.coprocessor.TaskRegionObserver$SelfHealingTask.run(TaskRegionObserver.java:203)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Note that this only happens in the sandbox, so it's not important to fix.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6437) Delete marker for parent-child rows does not get replicated via SystemCatalogWALEntryFilter

2021-05-26 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6437:
---
Fix Version/s: 5.1.2

> Delete marker for parent-child rows does not get replicated via 
> SystemCatalogWALEntryFilter
> ---
>
> Key: PHOENIX-6437
> URL: https://issues.apache.org/jira/browse/PHOENIX-6437
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Ankit Jain
>Assignee: Ankit Jain
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 5.1.2, 4.16.2
>
>
> As part of PHOENIX-3639, SystemCatalogWALEntryFilter was introduced to 
> replicate tenant-owned rows from system.catalog and ignore the non-tenant 
> rows. During recent testing it was realized that delete markers for 
> parent-child rows do not get replicated. As part of this Jira we want to 
> fix that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6435) Fix ViewTTLIT test flapper

2021-05-26 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6435:
---
Fix Version/s: 5.1.2

> Fix ViewTTLIT test flapper
> --
>
> Key: PHOENIX-6435
> URL: https://issues.apache.org/jira/browse/PHOENIX-6435
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Blocker
> Fix For: 4.16.1, 4.17.0, 5.2.0, 5.1.2
>
>
> [ERROR] Errors:
> [ERROR]   
> PermissionNSDisabledWithCustomAccessControllerIT>BasePermissionsIT.testAutomaticGrantWithIndexAndView:1278->BasePermissionsIT.verifyAllowed:769->BasePermissionsIT.verifyAllowed:776
>  ? UndeclaredThrowable
> [ERROR]   
> PermissionNSEnabledWithCustomAccessControllerIT>BasePermissionsIT.testAutomaticGrantWithIndexAndView:1279->BasePermissionsIT.verifyAllowed:769->BasePermissionsIT.verifyAllowed:776
>  ? UndeclaredThrowable
> [ERROR]   ImmutableIndexIT.testGlobalImmutableIndexDelete:407 ? StackOverflow
>  
> mvn verify failed with the above errors. We have to address and fix these test 
> flappers before releasing the next 4.16.1 RC.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6447) Add support for SYSTEM.CHILD_LINK table in systemcatalogwalentryfilter

2021-05-26 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6447:
---
Fix Version/s: 5.1.2

> Add support for SYSTEM.CHILD_LINK table in systemcatalogwalentryfilter
> --
>
> Key: PHOENIX-6447
> URL: https://issues.apache.org/jira/browse/PHOENIX-6447
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Sandeep Pal
>Assignee: Sandeep Pal
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 5.1.2, 4.16.2
>
>
> In order to replicate system tables, we have a special filter for the system 
> catalog table that replicates only tenant-owned data, so as NOT to mess up the 
> system catalog at the sink cluster. In 4.16 a new table (SYSTEM.CHILD_LINK) is 
> being added, which will not be replicated completely by the existing filter. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6085) Remove duplicate calls to getSysMutexPhysicalTableNameBytes() during the upgrade path

2021-05-14 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6085:
---
Fix Version/s: 5.1.2

> Remove duplicate calls to getSysMutexPhysicalTableNameBytes() during the 
> upgrade path
> -
>
> Key: PHOENIX-6085
> URL: https://issues.apache.org/jira/browse/PHOENIX-6085
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Richárd Antal
>Priority: Minor
>  Labels: phoenix-hardening, quality-improvement
> Fix For: 4.17.0, 5.2.0, 5.1.2
>
> Attachments: PHOENIX-6085.4.x.v1.patch, PHOENIX-6085.master.v1.patch
>
>
> We already make this call inside 
> [CQSI.acquireUpgradeMutex()|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L4220]
>  and then call writeMutexCell() which calls this again 
> [here|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L4244].
>  
> We should move this to inside writeMutexCell() itself and throw 
> UpgradeInProgressException there if required, to avoid unnecessary and 
> expensive HBase admin API calls.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5346) SaltedIndexIT is flapping

2021-05-14 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5346:
---
Fix Version/s: (was: 5.2.0)
   (was: 4.17.0)

> SaltedIndexIT is flapping
> -
>
> Key: PHOENIX-5346
> URL: https://issues.apache.org/jira/browse/PHOENIX-5346
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Lars Hofhansl
>Priority: Critical
>  Labels: disabled-test
>
> {code}
> [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 14.042 s <<< FAILURE! - in org.apache.phoenix.end2end.index.SaltedIndexIT
> [ERROR] 
> testMutableTableIndexMaintanenceSaltedSalted(org.apache.phoenix.end2end.index.SaltedIndexIT)
>   Time elapsed: 4.661 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[y]> but was:<[x]>
>   at 
> org.apache.phoenix.end2end.index.SaltedIndexIT.testMutableTableIndexMaintanence(SaltedIndexIT.java:129)
>   at 
> org.apache.phoenix.end2end.index.SaltedIndexIT.testMutableTableIndexMaintanenceSaltedSalted(SaltedIndexIT.java:74)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6437) Delete marker for parent-child rows does not get replicated via SystemCatalogWALEntryFilter

2021-05-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6437:
---
Fix Version/s: (was: 5.1.2)

> Delete marker for parent-child rows does not get replicated via 
> SystemCatalogWALEntryFilter
> ---
>
> Key: PHOENIX-6437
> URL: https://issues.apache.org/jira/browse/PHOENIX-6437
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Ankit Jain
>Assignee: Ankit Jain
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> As part of PHOENIX-3639, SystemCatalogWALEntryFilter was introduced to 
> replicate tenant-owned rows from system.catalog and ignore the non-tenant 
> rows. During recent testing it was realized that delete markers for 
> parent-child rows do not get replicated. As part of this Jira we want to 
> fix that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6444) Extend Cell Tags to Delete object for Indexer coproc

2021-05-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6444:
---
Fix Version/s: (was: 5.1.2)

> Extend Cell Tags to Delete object for Indexer coproc
> 
>
> Key: PHOENIX-6444
> URL: https://issues.apache.org/jira/browse/PHOENIX-6444
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
>
> In PHOENIX-6213 we added support for adding a source-of-operation cell tag to 
> Delete markers. But we added the logic to create a TagRewriteCell and add it to 
> the Delete marker only in the IndexRegionObserver coproc. I missed adding the 
> same logic to the Indexer coproc. Thank you [~tkhurana] for finding this bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6447) Add support for SYSTEM.CHILD_LINK table in systemcatalogwalentryfilter

2021-05-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6447:
---
Fix Version/s: (was: 5.1.2)

> Add support for SYSTEM.CHILD_LINK table in systemcatalogwalentryfilter
> --
>
> Key: PHOENIX-6447
> URL: https://issues.apache.org/jira/browse/PHOENIX-6447
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Sandeep Pal
>Assignee: Sandeep Pal
>Priority: Major
> Fix For: 4.16.1, 4.17.0, 5.2.0
>
>
> In order to replicate system tables, we have a special filter for the system 
> catalog table that replicates only tenant-owned data, so as NOT to mess up the 
> system catalog at the sink cluster. In 4.16 a new table (SYSTEM.CHILD_LINK) is 
> being added, which will not be replicated completely by the existing filter. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6437) Delete marker for parent-child rows does not get replicated via SystemCatalogWALEntryFilter

2021-05-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6437:
---
Fix Version/s: 5.1.2

> Delete marker for parent-child rows does not get replicated via 
> SystemCatalogWALEntryFilter
> ---
>
> Key: PHOENIX-6437
> URL: https://issues.apache.org/jira/browse/PHOENIX-6437
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Ankit Jain
>Assignee: Ankit Jain
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 5.1.2, 4.16.2
>
>
> As part of PHOENIX-3639, SystemCatalogWALEntryFilter was introduced to 
> replicate tenant-owned rows from system.catalog and ignore the non-tenant 
> rows. During recent testing it was realized that delete markers for 
> parent-child rows do not get replicated. As part of this Jira we want to 
> fix that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6447) Add support for SYSTEM.CHILD_LINK table in systemcatalogwalentryfilter

2021-05-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6447:
---
Fix Version/s: 5.1.2

> Add support for SYSTEM.CHILD_LINK table in systemcatalogwalentryfilter
> --
>
> Key: PHOENIX-6447
> URL: https://issues.apache.org/jira/browse/PHOENIX-6447
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Sandeep Pal
>Assignee: Sandeep Pal
>Priority: Major
> Fix For: 4.16.1, 4.17.0, 5.2.0, 5.1.2
>
>
> In order to replicate system tables, we have a special filter for the system 
> catalog table that replicates only tenant-owned data, so as NOT to mess up the 
> system catalog at the sink cluster. In 4.16 a new table (SYSTEM.CHILD_LINK) is 
> being added, which will not be replicated completely by the existing filter. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6444) Extend Cell Tags to Delete object for Indexer coproc

2021-05-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6444:
---
Fix Version/s: 5.1.2

> Extend Cell Tags to Delete object for Indexer coproc
> 
>
> Key: PHOENIX-6444
> URL: https://issues.apache.org/jira/browse/PHOENIX-6444
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 5.1.2
>
>
> In PHOENIX-6213 we added support for adding a source-of-operation cell tag to 
> Delete markers. But we added the logic to create a TagRewriteCell and add it to 
> the Delete marker only in the IndexRegionObserver coproc. I missed adding the 
> same logic to the Indexer coproc. Thank you [~tkhurana] for finding this bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6451) Update joni and jcodings versions

2021-05-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6451:
---
Fix Version/s: 5.1.2

> Update joni and jcodings versions
> -
>
> Key: PHOENIX-6451
> URL: https://issues.apache.org/jira/browse/PHOENIX-6451
> Project: Phoenix
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 5.1.2
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6457) Optionally store schema version string in SYSTEM.CATALOG

2021-05-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-6457.

Resolution: Fixed

> Optionally store schema version string in SYSTEM.CATALOG
> 
>
> Key: PHOENIX-6457
> URL: https://issues.apache.org/jira/browse/PHOENIX-6457
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
>
> In many environments, schema changes to Phoenix tables are applied in batches 
> associated with a version of an application. (For example, v1.0 of an app may 
> start with one set of CREATE statements, v1.1 then adds some ALTER 
> statements, etc.) 
> It can be useful to be able to look up the latest app version in which a 
> table or view was changed; this could potentially be added as a feature of 
> the Schema Tool. 
> This change would add an optional property to CREATE and ALTER statements, 
> SCHEMA_VERSION, which would take a user-supplied string. 
> This is also a pre-req for PHOENIX-6227, because we would want to pass the 
> schema version string, if any, to an external schema repository in 
> environments where we're integrating with one. 
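
A sketch of how the property would be used in DDL (table name, version strings, and the 
exact ALTER form are illustrative assumptions):
{code:java}
CREATE TABLE orders (id BIGINT NOT NULL PRIMARY KEY, total DECIMAL)
    SCHEMA_VERSION = '1.0';

-- later, as part of the v1.1 application release
ALTER TABLE orders ADD discount DECIMAL;
ALTER TABLE orders SET SCHEMA_VERSION = '1.1';
{code}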



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (PHOENIX-6457) Optionally store schema version string in SYSTEM.CATALOG

2021-05-12 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reopened PHOENIX-6457:


What about branch-5.1?

> Optionally store schema version string in SYSTEM.CATALOG
> 
>
> Key: PHOENIX-6457
> URL: https://issues.apache.org/jira/browse/PHOENIX-6457
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
>
> In many environments, schema changes to Phoenix tables are applied in batches 
> associated with a version of an application. (For example, v1.0 of an app may 
> start with one set of CREATE statements, v1.1 then adds some ALTER 
> statements, etc.) 
> It can be useful to be able to look up the latest app version in which a 
> table or view was changed; this could potentially be added as a feature of 
> the Schema Tool. 
> This change would add an optional property to CREATE and ALTER statements, 
> SCHEMA_VERSION, which would take a user-supplied string. 
> This is also a pre-req for PHOENIX-6227, because we would want to pass the 
> schema version string, if any, to an external schema repository in 
> environments where we're integrating with one. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6434) Secondary Indexes on PHOENIX_ROW_TIMESTAMP()

2021-04-22 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6434:
---
Fix Version/s: 5.1.2
   5.2.0
   4.16.1

> Secondary Indexes on PHOENIX_ROW_TIMESTAMP()
> 
>
> Key: PHOENIX-6434
> URL: https://issues.apache.org/jira/browse/PHOENIX-6434
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Kadir Ozdemir
>Priority: Major
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
> Attachments: PHOENIX-6434.4.x.001.patch, PHOENIX-6434.4.x.002.patch, 
> PHOENIX-6434.4.x.003.patch, PHOENIX-6434.4.x.004.patch, 
> PHOENIX-6434.master.001.patch, PHOENIX-6434.master.002.patch
>
>
> PHOENIX-5629 introduced the function PHOENIX_ROW_TIMESTAMP(), which returns the 
> last-modified time of a row. PHOENIX_ROW_TIMESTAMP() can be used as a 
> projection column and referenced in a WHERE clause. It is desirable to have 
> indexes on row timestamps; this will result in fast time-range queries. 
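
A sketch of the kind of index and time-range query this enables (table, index name, and 
timestamp literal are illustrative):
{code:java}
CREATE TABLE events (id VARCHAR NOT NULL PRIMARY KEY, payload VARCHAR);

-- assumption: a secondary index keyed on the row timestamp
CREATE INDEX events_row_ts_idx ON events (PHOENIX_ROW_TIMESTAMP()) INCLUDE (payload);

-- a time-range query that can be served from the index
SELECT id, payload FROM events
WHERE PHOENIX_ROW_TIMESTAMP() > TIMESTAMP '2021-04-01 00:00:00';
{code}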



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6445) Wrong query plans with functions

2021-04-14 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6445:
--

 Summary: Wrong query plans with functions
 Key: PHOENIX-6445
 URL: https://issues.apache.org/jira/browse/PHOENIX-6445
 Project: Phoenix
  Issue Type: Wish
Reporter: Lars Hofhansl


Phoenix seems to sometimes create incorrect query plans when functions are used.

I'll post these in the comments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6436) OrderedResultIterator overestimates memory requirements.

2021-04-05 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-6436.

Resolution: Fixed

Merged into master, 5.1, and 4.x branches.

> OrderedResultIterator overestimates memory requirements.
> 
>
> Key: PHOENIX-6436
> URL: https://issues.apache.org/jira/browse/PHOENIX-6436
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
>
> Just came across this.
> The size estimation is: {{(limit + offset) * estimatedEntrySize}}
> with just the passed limit and offset, and this estimate is applied for each 
> single scan.
> This is way too pessimistic when a large limit is passed as just a safety 
> measure.
> Say you pass 10,000,000. That is the overall limit, but Phoenix will 
> apply it to every scan (at least one per involved region) and reserve that 
> much memory from the pool.
> Not sure what a better estimate would be. Ideally we'd divide by the number 
> of involved regions with some fudge factor, or use a size estimate of the 
> region. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6436) OrderedResultIterator overestimates memory requirements.

2021-04-05 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6436:
---
Fix Version/s: 5.1.2
   5.2.0
   4.16.1

> OrderedResultIterator overestimates memory requirements.
> 
>
> Key: PHOENIX-6436
> URL: https://issues.apache.org/jira/browse/PHOENIX-6436
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
>
> Just came across this.
> The size estimation is: {{(limit + offset) * estimatedEntrySize}}
> with just the passed limit and offset, and this estimate is applied for each 
> single scan.
> This is way too pessimistic when a large limit is passed as just a safety 
> measure.
> Say you pass 10,000,000. That is the overall limit, but Phoenix will 
> apply it to every scan (at least one per involved region) and reserve that 
> much memory from the pool.
> Not sure what a better estimate would be. Ideally we'd divide by the number 
> of involved regions with some fudge factor, or use a size estimate of the 
> region. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6436) OrderedResultIterator overestimates memory requirements.

2021-04-05 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-6436:
--

Assignee: Lars Hofhansl

> OrderedResultIterator overestimates memory requirements.
> 
>
> Key: PHOENIX-6436
> URL: https://issues.apache.org/jira/browse/PHOENIX-6436
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
>
> Just came across this.
> The size estimation is: {{(limit + offset) * estimatedEntrySize}}
> with just the passed limit and offset, and this estimate is applied for each 
> single scan.
> This is way too pessimistic when a large limit is passed as just a safety 
> measure.
> Say you pass 10,000,000. That is the overall limit, but Phoenix will 
> apply it to every scan (at least one per involved region) and reserve that 
> much memory from the pool.
> Not sure what a better estimate would be. Ideally we'd divide by the number 
> of involved regions with some fudge factor, or use a size estimate of the 
> region. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6436) OrderedResultIterator overestimates memory requirements.

2021-04-04 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6436:
---
Summary: OrderedResultIterator overestimates memory requirements.  (was: 
OrderedResultIterator does bad size estimation)

> OrderedResultIterator overestimates memory requirements.
> 
>
> Key: PHOENIX-6436
> URL: https://issues.apache.org/jira/browse/PHOENIX-6436
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Priority: Major
>
> Just came across this.
> The size estimation is: {{(limit + offset) * estimatedEntrySize}}
> with just the passed limit and offset, and this estimate is applied for each 
> single scan.
> This is way too pessimistic when a large limit is passed as just a safety 
> measure.
> Say you pass 10,000,000. That is the overall limit, but Phoenix will 
> apply it to every scan (at least one per involved region) and reserve that 
> much memory from the pool.
> Not sure what a better estimate would be. Ideally we'd divide by the number 
> of involved regions with some fudge factor, or use a size estimate of the 
> region. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6436) OrderedResultIterator does bad size estimation

2021-04-02 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6436:
--

 Summary: OrderedResultIterator does bad size estimation
 Key: PHOENIX-6436
 URL: https://issues.apache.org/jira/browse/PHOENIX-6436
 Project: Phoenix
  Issue Type: Wish
Reporter: Lars Hofhansl


Just came across this.

The size estimation is: {{(limit + offset) * estimatedEntrySize}}
with just the passed limit and offset, and this estimate is applied for each 
single scan.

This is way too pessimistic when a large limit is passed as just a safety 
measure.
Say you pass 10,000,000. That is the overall limit, but Phoenix will apply 
it to every scan (at least one per involved region) and reserve that much memory 
from the pool.

Not sure what a better estimate would be. Ideally we'd divide by the number of 
involved regions with some fudge factor, or use a size estimate of the region. 
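
For illustration (all numbers assumed): with LIMIT 10,000,000, an estimatedEntrySize of 
100 bytes, and a table spread over 50 regions, each of the 50 scans would reserve roughly 
10,000,000 * 100 bytes, i.e. about 1 GB, so about 50 GB would be charged against the 
memory pool even though at most 10,000,000 rows can ever be returned overall.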




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6433) DISCUSS: Disallow creating new tables with duplicate column qualifiers by default.

2021-03-29 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6433:
--

 Summary: DISCUSS: Disallow creating new tables with duplicate 
column qualifiers by default.
 Key: PHOENIX-6433
 URL: https://issues.apache.org/jira/browse/PHOENIX-6433
 Project: Phoenix
  Issue Type: Wish
Reporter: Lars Hofhansl


Phoenix allows specifying columns to "reside" in specific column families. As 
long as the columns are unique you can simply refer to them via the column 
name. In that case the column families are just about the physical placement of 
the columns. No special SQL constructs are needed... This is similar to 
indexes: they are for optimization, but queries are unchanged.

However...

Currently Phoenix also allows creating tables with duplicate column qualifiers 
such as:
{{CREATE TABLE t (pk1 ..., x.v1, y.v1, ...)}} or
{{CREATE TABLE t (pk1 ..., v1, x.v1, ...)}}

In the first case you must qualify any reference to {{v1}} with the column 
family or an {{AmbiguousColumnException}} is thrown. In the second case {{v1}} 
refers to {{0.v1}} (the copy in the default column family).
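
A minimal sketch of the two cases (column types and table names are illustrative):
{code:java}
CREATE TABLE t1 (pk1 INTEGER NOT NULL PRIMARY KEY, x.v1 VARCHAR, y.v1 VARCHAR);
SELECT v1 FROM t1;         -- AmbiguousColumnException: the family must be spelled out
SELECT x.v1, y.v1 FROM t1; -- works

CREATE TABLE t2 (pk1 INTEGER NOT NULL PRIMARY KEY, v1 VARCHAR, x.v1 VARCHAR);
SELECT v1 FROM t2;         -- resolves to 0.v1, the copy in the default column family
{code}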

In both cases the physical optimization of the column storage now leaks into 
the SQL queries - unnecessarily, IMHO at least.

For tables not created in Phoenix, or with dynamic columns, this is not 
avoidable. 

I do think we should disallow creating new tables with duplicated (static) 
column names, to reduce confusion and surprises.

Related: PHOENIX-6343




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6424) SELECT cf1.* FAILS with a WHERE clause including cf2.

2021-03-23 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-6424.

Fix Version/s: 5.1.2
   5.2.0
   4.16.1
   Resolution: Fixed

> SELECT cf1.* FAILS with a WHERE clause including cf2.
> -
>
> Key: PHOENIX-6424
> URL: https://issues.apache.org/jira/browse/PHOENIX-6424
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
>
> {code}
> > create table test3(pk1 integer not null primary key, x.v1 varchar, y.v2 
> > integer);
> No rows affected (1.189 seconds)
> > upsert into test3 values(1,'test',2);
> 1 row affected (0.02 seconds)
> > select * from test3;
> +-+--++
> | PK1 |  V1  | V2 |
> +-+--++
> | 1   | test | 2  |
> +-+--++
> 1 row selected (0.026 seconds)
> -- so far so good
> > select y.* from test3 where x.v1 <> 'blah';
> +--+
> |  V2  |
> +--+
> | null |
> +--+
> 1 row selected (0.037 seconds)
> > select x.* from test3 where y.v2 = 2;
> ++
> | V1 |
> ++
> ||
> ++
> 1 row selected (0.036 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6425) SELECT count(cf.*) or count(cf.cq) fail with a parse exception

2021-03-22 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6425:
---
Description: 
It's not SQL standard, so we can certainly decide not to support it.

I do feel that if SELECT CF.* FROM ... works, then SELECT COUNT(CF.*) FROM ... 
should work too.
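
A sketch of the statements in question (table and column names are illustrative; per the 
report both forms currently fail to parse):
{code:java}
CREATE TABLE t (pk1 INTEGER NOT NULL PRIMARY KEY, cf.v1 VARCHAR, cf.v2 VARCHAR);

SELECT COUNT(cf.*) FROM t;   -- parse exception
SELECT COUNT(cf.v1) FROM t;  -- parse exception, per the report
{code}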

> SELECT count(cf.*) or count(cf.cq) fail with a parse exception
> --
>
> Key: PHOENIX-6425
> URL: https://issues.apache.org/jira/browse/PHOENIX-6425
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
>
> It's not SQL standard, so we can certainly decide not to support it.
> I do feel that if SELECT CF.* FROM ... works, then SELECT COUNT(CF.*) FROM 
> ... should work too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6425) SELECT count(cf.*) or count(cf.cq) fail with a parse exception

2021-03-22 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6425:
--

 Summary: SELECT count(cf.*) or count(cf.cq) fail with a parse 
exception
 Key: PHOENIX-6425
 URL: https://issues.apache.org/jira/browse/PHOENIX-6425
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6424) SELECT cf1.* FAILS with a WHERE clause including cf2.

2021-03-22 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-6424:
--

Assignee: Lars Hofhansl

> SELECT cf1.* FAILS with a WHERE clause including cf2.
> -
>
> Key: PHOENIX-6424
> URL: https://issues.apache.org/jira/browse/PHOENIX-6424
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
>
> {code}
> > create table test3(pk1 integer not null primary key, x.v1 varchar, y.v2 
> > integer);
> No rows affected (1.189 seconds)
> > upsert into test3 values(1,'test',2);
> 1 row affected (0.02 seconds)
> > select * from test3;
> +-+--++
> | PK1 |  V1  | V2 |
> +-+--++
> | 1   | test | 2  |
> +-+--++
> 1 row selected (0.026 seconds)
> -- so far so good
> > select y.* from test3 where x.v1 <> 'blah';
> +--+
> |  V2  |
> +--+
> | null |
> +--+
> 1 row selected (0.037 seconds)
> > select x.* from test3 where y.v2 = 2;
> ++
> | V1 |
> ++
> ||
> ++
> 1 row selected (0.036 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6421) Selecting an indexed array value from an uncovered column with local index returns NULL

2021-03-22 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-6421.

Resolution: Fixed

> Selecting an indexed array value from an uncovered column with local index 
> returns NULL
> ---
>
> Key: PHOENIX-6421
> URL: https://issues.apache.org/jira/browse/PHOENIX-6421
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
> Attachments: 6421-5.1.txt
>
>
> Another one of these:
> {code:java}
> > create table test1(pk1 integer not null primary key, v1 float, v2 
> > float[10]);
> No rows affected (0.673 seconds)
> > create local index l2 on test1(v1);   
> No rows affected (10.889 seconds)
> > upsert into test1 values(rand() * 1000, rand(), ARRAY[rand(),rand(), 
> > rand()]);
> 1 row affected (0.045 seconds)
> > select /*+ NO_INDEX */ v2[1] from test1 where v1 < 1;
> +---+
> | ARRAY_ELEM(V2, 1) |
> +---+
> | 0.49338496    |
> +---+
> 1 row selected (0.037 seconds)
> > select v2[1] from test1 where v1 < 1;
> +-+
> | ARRAY_ELEM("V2", 1) |
> +-+
> | null    |
> +-+
> 1 row selected (0.062 seconds)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6424) SELECT cf1.* FAILS with a WHERE clause including cf2.

2021-03-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6424:
---
Comment: was deleted

(was: The {{select y.v2 from test3 where x.v1 <> 'blah'}} case turns the y.v2 
reference into a {{ProjectedColumnExpression}} whereas the {{select y.* from 
test3 where x.v1 <> 'blah'}} turns it into a {{KeyValueColumnExpression}}.)

> SELECT cf1.* FAILS with a WHERE clause including cf2.
> -
>
> Key: PHOENIX-6424
> URL: https://issues.apache.org/jira/browse/PHOENIX-6424
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> > create table test3(pk1 integer not null primary key, x.v1 varchar, y.v2 
> > integer);
> No rows affected (1.189 seconds)
> > upsert into test3 values(1,'test',2);
> 1 row affected (0.02 seconds)
> > select * from test3;
> +-+--++
> | PK1 |  V1  | V2 |
> +-+--++
> | 1   | test | 2  |
> +-+--++
> 1 row selected (0.026 seconds)
> -- so far so good
> > select y.* from test3 where x.v1 <> 'blah';
> +--+
> |  V2  |
> +--+
> | null |
> +--+
> 1 row selected (0.037 seconds)
> > select x.* from test3 where y.v2 = 2;
> ++
> | V1 |
> ++
> ||
> ++
> 1 row selected (0.036 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6424) SELECT cf1.* FAILS with a WHERE clause including cf2.

2021-03-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6424:
---
Comment: was deleted

(was: The {{select y.* from test3}} also creates a 
{{ProjectedColumnExpression}} that seems to be the difference.)

> SELECT cf1.* FAILS with a WHERE clause including cf2.
> -
>
> Key: PHOENIX-6424
> URL: https://issues.apache.org/jira/browse/PHOENIX-6424
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> > create table test3(pk1 integer not null primary key, x.v1 varchar, y.v2 
> > integer);
> No rows affected (1.189 seconds)
> > upsert into test3 values(1,'test',2);
> 1 row affected (0.02 seconds)
> > select * from test3;
> +-+--++
> | PK1 |  V1  | V2 |
> +-+--++
> | 1   | test | 2  |
> +-+--++
> 1 row selected (0.026 seconds)
> -- so far so good
> > select y.* from test3 where x.v1 <> 'blah';
> +--+
> |  V2  |
> +--+
> | null |
> +--+
> 1 row selected (0.037 seconds)
> > select x.* from test3 where y.v2 = 2;
> ++
> | V1 |
> ++
> ||
> ++
> 1 row selected (0.036 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6423) Wildcard queries fail with mixed default and explicit column families.

2021-03-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-6423.

Fix Version/s: 5.1.2
   5.2.0
   4.16.1
 Assignee: Lars Hofhansl
   Resolution: Fixed

Thanks for the review [~vjasani]

> Wildcard queries fail with mixed default and explicit column families.
> --
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
> Attachments: 6423-5.1.txt
>
>
> This one is obscure.
> Edit: not so obscure, see below. Without a local index it won't throw an 
> exception, but select * will return an incorrect set of columns. *So this is 
> a general problem independent of local indexes.*
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> > varchar);
> No rows affected (1.179 seconds)
> > create local index l4 on test3(v1); 
> No rows affected (11.253 seconds)
> > select * from test3 where v1 < 1;
> Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
> column: V1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
> at 
> org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6424) SELECT cf1.* FAILS with a WHERE clause including cf2.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6424:
---
Description: 
{code}
> create table test3(pk1 integer not null primary key, x.v1 varchar, y.v2 
> integer);
No rows affected (1.189 seconds)

> upsert into test3 values(1,'test',2);
1 row affected (0.02 seconds)

> select * from test3;
+-+--++
| PK1 |  V1  | V2 |
+-+--++
| 1   | test | 2  |
+-+--++
1 row selected (0.026 seconds)

-- so far so good

> select y.* from test3 where x.v1 <> 'blah';
+--+
|  V2  |
+--+
| null |
+--+
1 row selected (0.037 seconds)

> select x.* from test3 where y.v2 = 2;
++
| V1 |
++
||
++
1 row selected (0.036 seconds)
{code}

  was:
{code}
> create table test3(pk1 integer not null primary key, x.v1 varchar, y.v1 
> integer);
No rows affected (1.189 seconds)

> upsert into test3 values(1,'test',2);
1 row affected (0.02 seconds)

> select * from test3;
+-+--++
| PK1 |  V1  | V1 |
+-+--++
| 1   | test | 2  |
+-+--++
1 row selected (0.026 seconds)

-- so far so good

> select y.* from test3 where x.v1 <> 'blah';
+--+
|  V1  |
+--+
| null |
+--+
1 row selected (0.037 seconds)

> select x.* from test3 where y.v1 = 2;
++
| V1 |
++
||
++
1 row selected (0.036 seconds)
{code}


> SELECT cf1.* FAILS with a WHERE clause including cf2.
> -
>
> Key: PHOENIX-6424
> URL: https://issues.apache.org/jira/browse/PHOENIX-6424
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> > create table test3(pk1 integer not null primary key, x.v1 varchar, y.v2 
> > integer);
> No rows affected (1.189 seconds)
> > upsert into test3 values(1,'test',2);
> 1 row affected (0.02 seconds)
> > select * from test3;
> +-+--++
> | PK1 |  V1  | V2 |
> +-+--++
> | 1   | test | 2  |
> +-+--++
> 1 row selected (0.026 seconds)
> -- so far so good
> > select y.* from test3 where x.v1 <> 'blah';
> +--+
> |  V2  |
> +--+
> | null |
> +--+
> 1 row selected (0.037 seconds)
> > select x.* from test3 where y.v2 = 2;
> ++
> | V1 |
> ++
> ||
> ++
> 1 row selected (0.036 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6424) SELECT cf1.* FAILS with a WHERE clause including cf2.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6424:
---
Description: 
{code}
> create table test3(pk1 integer not null primary key, x.v1 varchar, y.v1 
> integer);
No rows affected (1.189 seconds)

> upsert into test3 values(1,'test',2);
1 row affected (0.02 seconds)

> select * from test3;
+-+--++
| PK1 |  V1  | V1 |
+-+--++
| 1   | test | 2  |
+-+--++
1 row selected (0.026 seconds)

-- so far so good

> select y.* from test3 where x.v1 <> 'blah';
+--+
|  V1  |
+--+
| null |
+--+
1 row selected (0.037 seconds)

> select x.* from test3 where y.v1 = 2;
++
| V1 |
++
||
++
1 row selected (0.036 seconds)
{code}

  was:
{code}
> create table test3(pk1 integer not null primary key, x.v1varchar, y.v1 
> integer);
No rows affected (1.189 seconds)

> upsert into test3 values(1,'test',2);
1 row affected (0.02 seconds)

> select * from test3;
+-+--++
| PK1 |  V1  | V1 |
+-+--++
| 1   | test | 2  |
+-+--++
1 row selected (0.026 seconds)

-- so far so good

> select y.* from test3 where x.v1 <> 'blah';
+--+
|  V1  |
+--+
| null |
+--+
1 row selected (0.037 seconds)

> select x.* from test3 where y.v1 = 2;
++
| V1 |
++
||
++
1 row selected (0.036 seconds)
{code}


> SELECT cf1.* FAILS with a WHERE clause including cf2.
> -
>
> Key: PHOENIX-6424
> URL: https://issues.apache.org/jira/browse/PHOENIX-6424
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> > create table test3(pk1 integer not null primary key, x.v1 varchar, y.v1 
> > integer);
> No rows affected (1.189 seconds)
> > upsert into test3 values(1,'test',2);
> 1 row affected (0.02 seconds)
> > select * from test3;
> +-+--++
> | PK1 |  V1  | V1 |
> +-+--++
> | 1   | test | 2  |
> +-+--++
> 1 row selected (0.026 seconds)
> -- so far so good
> > select y.* from test3 where x.v1 <> 'blah';
> +--+
> |  V1  |
> +--+
> | null |
> +--+
> 1 row selected (0.037 seconds)
> > select x.* from test3 where y.v1 = 2;
> ++
> | V1 |
> ++
> ||
> ++
> 1 row selected (0.036 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6424) SELECT cf1.* FAILS with a WHERE clause including cf2.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6424:
---
Description: 
{code}
> create table test3(pk1 integer not null primary key, x.v1varchar, y.v1 
> integer);
No rows affected (1.189 seconds)

> upsert into test3 values(1,'test',2);
1 row affected (0.02 seconds)

> select * from test3;
+-+--++
| PK1 |  V1  | V1 |
+-+--++
| 1   | test | 2  |
+-+--++
1 row selected (0.026 seconds)

-- so far so good

> select y.* from test3 where x.v1 <> 'blah';
+--+
|  V1  |
+--+
| null |
+--+
1 row selected (0.037 seconds)

> select x.* from test3 where y.v1 = 2;
++
| V1 |
++
||
++
1 row selected (0.036 seconds)
{code}

> SELECT cf1.* FAILS with a WHERE clause including cf2.
> -
>
> Key: PHOENIX-6424
> URL: https://issues.apache.org/jira/browse/PHOENIX-6424
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> > create table test3(pk1 integer not null primary key, x.v1varchar, y.v1 
> > integer);
> No rows affected (1.189 seconds)
> > upsert into test3 values(1,'test',2);
> 1 row affected (0.02 seconds)
> > select * from test3;
> +-+--++
> | PK1 |  V1  | V1 |
> +-+--++
> | 1   | test | 2  |
> +-+--++
> 1 row selected (0.026 seconds)
> -- so far so good
> > select y.* from test3 where x.v1 <> 'blah';
> +--+
> |  V1  |
> +--+
> | null |
> +--+
> 1 row selected (0.037 seconds)
> > select x.* from test3 where y.v1 = 2;
> ++
> | V1 |
> ++
> ||
> ++
> 1 row selected (0.036 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6423) Wildcard queries fail with mixed default and explicit column families.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Attachment: 6423-5.1.txt

> Wildcard queries fail with mixed default and explicit column families.
> --
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Priority: Critical
> Attachments: 6423-5.1.txt
>
>
> This one is obscure.
> Edit: not so obscure, see below. Without a local index it won't throw an 
> exception, but select * will return an incorrect set of columns. *So this is 
> a general problem independent of local indexes.*
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> > varchar);
> No rows affected (1.179 seconds)
> > create local index l4 on test3(v1); 
> No rows affected (11.253 seconds)
> > select * from test3 where v1 < 1;
> Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
> column: V1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
> at 
> org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6423) Wildcard queries fail with mixed default and explicit column families.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Affects Version/s: 5.1.1

> Wildcard queries fail with mixed default and explicit column families.
> --
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.1
>Reporter: Lars Hofhansl
>Priority: Critical
>
> This one is obscure.
> Edit: not so obscure, see below. Without a local index it won't throw an 
> exception, but select * will return an incorrect set of columns. *So this is 
> a general problem independent of local indexes.*
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> > varchar);
> No rows affected (1.179 seconds)
> > create local index l4 on test3(v1); 
> No rows affected (11.253 seconds)
> > select * from test3 where v1 < 1;
> Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
> column: V1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
> at 
> org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6424) SELECT cf1.* FAILS with a WHERE clause including cf2.

2021-03-20 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6424:
--

 Summary: SELECT cf1.* FAILS with a WHERE clause including cf2.
 Key: PHOENIX-6424
 URL: https://issues.apache.org/jira/browse/PHOENIX-6424
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.1
Reporter: Lars Hofhansl
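
The issue was filed without a description. Based on the summary alone, and borrowing the test3 schema from PHOENIX-6423 above, the failing shape is presumably a family-qualified wildcard projection combined with a filter on a column from a different family. Everything in the sketch below (table, predicate, connection URL) is an assumption for illustration and is not taken from the report.

{code:java}
// Hypothetical illustration of the summary only: select one explicit
// column family (y.*) while filtering on a column from another family.
// The table test3 is borrowed from PHOENIX-6423; nothing here comes
// from the (empty) issue description.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FamilyWildcardSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT y.* FROM test3 WHERE v1 < 1")) {
            while (rs.next()) {
                System.out.println(rs.getObject(1));
            }
        }
    }
}
{code}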






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6423) Wildcard queries fail with mixed default and explicit column families.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Summary: Wildcard queries fail with mixed default and explicit column 
families.  (was: Mixing default and explicit column families causes exceptions 
and wrong results.)

> Wildcard queries fail with mixed default and explicit column families.
> --
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Critical
>
> This one is obscure.
> Edit: not so obscure, see below. Without a local index it won't throw an 
> exception, but select * will return an incorrect set of columns. *So this is 
> a general problem independent of local indexes.*
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> > varchar);
> No rows affected (1.179 seconds)
> > create local index l4 on test3(v1); 
> No rows affected (11.253 seconds)
> > select * from test3 where v1 < 1;
> Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
> column: V1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
> at 
> org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6423) Mixing default and explicit column families causes exceptions and wrong results.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Description: 
This one is obscure.

Edit: not so obscure, see below. Without a local index it won't throw an 
exception, but select * will return an incorrect set of columns. *So this is a 
general problem independent of local indexes.*

{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 varchar);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> select * from test3 where v1 < 1;
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
column: V1 (state=22005,code=203)
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 

  was:
This one is obscure.

Edit: not so obscure, see below. Without a local index it won't throw an 
exception, but select * will return an incorrect set of columns.

{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 varchar);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> select * from test3 where v1 < 1;
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
column: V1 (state=22005,code=203)
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 


> Mixing default and explicit column families causes exceptions and wrong 
> results.
> 
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>   

[jira] [Updated] (PHOENIX-6423) Mixing default and explicit column families causes exceptions and wrong results.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Priority: Critical  (was: Major)

> Mixing default and explicit column families causes exceptions and wrong 
> results.
> 
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Critical
>
> This one is obscure.
> Edit: not so obscure, see below. Without a local index it won't throw an 
> exception, but select * will return an incorrect set of columns.
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> > varchar);
> No rows affected (1.179 seconds)
> > create local index l4 on test3(v1); 
> No rows affected (11.253 seconds)
> > select * from test3 where v1 < 1;
> Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
> column: V1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
> at 
> org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6423) Mixing default and explicit column families causes exceptions and wrong results.

2021-03-20 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Description: 
This one is obscure.

Edit: not so obscure, see below. Without a local index it won't throw an 
exception, but select * will return an incorrect set of columns.

{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 varchar);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> select * from test3 where v1 < 1;
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
column: V1 (state=22005,code=203)
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 

  was:
This one is obscure:
{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 varchar);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> select * from test3 where v1 < 1;
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
column: V1 (state=22005,code=203)
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 


> Mixing default and explicit column families causes exceptions and wrong 
> results.
> 
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
>
> This one is obscure.
> Edit: not so obscure, see below. Without a local index it won't throw an 
> exception, but 

[jira] [Updated] (PHOENIX-6423) Mixing default and explicit column families causes exceptions and wrong results.

2021-03-19 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Summary: Mixing default and explicit column families causes exceptions and 
wrong results.  (was: Mixed column families with uncovered local index columns 
causes an exception)

> Mixing default and explicit column families causes exceptions and wrong 
> results.
> 
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Minor
>
> This one is obscure:
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> > varchar);
> No rows affected (1.179 seconds)
> > create local index l4 on test3(v1); 
> No rows affected (11.253 seconds)
> > select * from test3 where v1 < 1;
> Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
> column: V1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
> at 
> org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6423) Mixing default and explicit column families causes exceptions and wrong results.

2021-03-19 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Priority: Major  (was: Minor)

> Mixing default and explicit column families causes exceptions and wrong 
> results.
> 
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
>
> This one is obscure:
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> > varchar);
> No rows affected (1.179 seconds)
> > create local index l4 on test3(v1); 
> No rows affected (11.253 seconds)
> > select * from test3 where v1 < 1;
> Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
> column: V1 (state=22005,code=203)
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
> at 
> org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6423) Mixed column families with uncovered local index columns causes an exception

2021-03-19 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Description: 
This one is obscure:
{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 varchar);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> select * from test3 where v1 < 1;
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
column: V1 (state=22005,code=203)
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 

  was:
This one is obscure:
{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 varchar);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> select * from test3 where v1 < 1;
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
column: V1 (state=22005,code=203)
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 


> Mixed column families with uncovered local index columns causes an exception
> 
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Minor
>
> This one is obscure:
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> > varchar);
> No rows affected (1.179 seconds)
> > create local index l4 on test3(v1); 
> No rows affected (11.253 seconds)
> > select * from test3 where 

[jira] [Updated] (PHOENIX-6423) Mixed column families with uncovered local index columns causes an exception

2021-03-19 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6423:
---
Description: 
This one is obscure:
{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 varchar);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> select * from test3 where v1 < 1;
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: VARCHAR at 
column: V1 (state=22005,code=203)
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: VARCHAR at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 

  was:
This one is obscure:
{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 
> float[10]);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> upsert into test3 values (1,2,ARRAY[5,6]);
1 row affected (0.023 seconds)

> select * from test3 where y.v1[2] < 6;   
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: FLOAT ARRAY 
at column: V1 (state=22005,code=203)
  
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: FLOAT ARRAY at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 


> Mixed column families with uncovered local index columns causes an exception
> 
>
> Key: PHOENIX-6423
> URL: https://issues.apache.org/jira/browse/PHOENIX-6423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Minor
>
> This one is obscure:
> {code}
> > create table test3(pk1 integer not null primary key, v1 float, y.v1 
> 

[jira] [Created] (PHOENIX-6423) Mixed column families with uncovered local index columns causes an exception

2021-03-19 Thread Lars Hofhansl (Jira)
Lars Hofhansl created PHOENIX-6423:
--

 Summary: Mixed column families with uncovered local index columns 
causes an exception
 Key: PHOENIX-6423
 URL: https://issues.apache.org/jira/browse/PHOENIX-6423
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl


This one is obscure:
{code}
> create table test3(pk1 integer not null primary key, v1 float, y.v1 
> float[10]);
No rows affected (1.179 seconds)

> create local index l4 on test3(v1); 
No rows affected (11.253 seconds)

> upsert into test3 values (1,2,ARRAY[5,6]);
1 row affected (0.023 seconds)

> select * from test3 where y.v1[2] < 6;   
Error: ERROR 203 (22005): Type mismatch. expected: FLOAT but was: FLOAT ARRAY 
at column: V1 (state=22005,code=203)
  
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: FLOAT but was: FLOAT ARRAY at column: V1
at 
org.apache.phoenix.compile.ProjectionCompiler.coerceIfNecessary(ProjectionCompiler.java:339)
at 
org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:258)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:393)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:757)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:676)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:253)
at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:178)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:347)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlansForSingleFlatQuery(QueryOptimizer.java:239)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:138)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:116)
at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:102)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:313)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:295)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:287)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1930)
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6421) Selecting an indexed array value from an uncovered column with local index returns NULL

2021-03-19 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6421:
---
Fix Version/s: 5.1.2
   5.2.0
   4.16.1

> Selecting an indexed array value from an uncovered column with local index 
> returns NULL
> ---
>
> Key: PHOENIX-6421
> URL: https://issues.apache.org/jira/browse/PHOENIX-6421
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
> Attachments: 6421-5.1.txt
>
>
> Another one of these:
> {code:java}
> > create table test1(pk1 integer not null primary key, v1 float, v2 
> > float[10]);
> No rows affected (0.673 seconds)
> > create local index l2 on test1(v1);   
> No rows affected (10.889 seconds)
> > upsert into test1 values(rand() * 1000, rand(), ARRAY[rand(),rand(), 
> > rand()]);
> 1 row affected (0.045 seconds)
> > select /*+ NO_INDEX */ v2[1] from test1 where v1 < 1;
> +---+
> | ARRAY_ELEM(V2, 1) |
> +---+
> | 0.49338496    |
> +---+
> 1 row selected (0.037 seconds)
> > select v2[1] from test1 where v1 < 1;
> +-+
> | ARRAY_ELEM("V2", 1) |
> +-+
> | null    |
> +-+
> 1 row selected (0.062 seconds)
> {code}
>  
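
The same comparison can be run through plain JDBC; this is a minimal sketch, assuming a Phoenix cluster at jdbc:phoenix:localhost (URL and class name are illustrative, the SQL mirrors the sqlline session above, and the explicit commit is needed because Phoenix connections default to autoCommit=false).

{code:java}
// Minimal JDBC sketch of the comparison above. Assumes a Phoenix cluster
// at "jdbc:phoenix:localhost"; the SQL mirrors the sqlline session.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UncoveredArrayLocalIndexRepro {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE test1 (pk1 INTEGER NOT NULL PRIMARY KEY, v1 FLOAT, v2 FLOAT[10])");
            stmt.execute("CREATE LOCAL INDEX l2 ON test1 (v1)");
            stmt.execute("UPSERT INTO test1 VALUES (RAND() * 1000, RAND(), ARRAY[RAND(), RAND(), RAND()])");
            conn.commit(); // Phoenix connections are not auto-commit by default
            // Full scan (index bypassed): the array element comes back.
            printFirst(stmt, "SELECT /*+ NO_INDEX */ v2[1] FROM test1 WHERE v1 < 1");
            // Local index path: the uncovered array column comes back as NULL (the bug).
            printFirst(stmt, "SELECT v2[1] FROM test1 WHERE v1 < 1");
        }
    }

    private static void printFirst(Statement stmt, String sql) throws SQLException {
        try (ResultSet rs = stmt.executeQuery(sql)) {
            System.out.println(sql + " -> " + (rs.next() ? rs.getObject(1) : "<no rows>"));
        }
    }
}
{code}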



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6421) Selecting an indexed array value from an uncovered column with local index returns NULL

2021-03-19 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6421:
---
Attachment: 6421-5.1.txt

> Selecting an indexed array value from an uncovered column with local index 
> returns NULL
> ---
>
> Key: PHOENIX-6421
> URL: https://issues.apache.org/jira/browse/PHOENIX-6421
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
> Attachments: 6421-5.1.txt
>
>
> Another one of these:
> {code:java}
> > create table test1(pk1 integer not null primary key, v1 float, v2 
> > float[10]);
> No rows affected (0.673 seconds)
> > create local index l2 on test1(v1);   
> No rows affected (10.889 seconds)
> > upsert into test1 values(rand() * 1000, rand(), ARRAY[rand(),rand(), 
> > rand()]);
> 1 row affected (0.045 seconds)
> > select /*+ NO_INDEX */ v2[1] from test1 where v1 < 1;
> +---+
> | ARRAY_ELEM(V2, 1) |
> +---+
> | 0.49338496    |
> +---+
> 1 row selected (0.037 seconds)
> > select v2[1] from test1 where v1 < 1;
> +-+
> | ARRAY_ELEM("V2", 1) |
> +-+
> | null    |
> +-+
> 1 row selected (0.062 seconds)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6421) Selecting an indexed array value from an uncovered column with local index returns NULL

2021-03-19 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-6421:
--

Assignee: Lars Hofhansl

> Selecting an indexed array value from an uncovered column with local index 
> returns NULL
> ---
>
> Key: PHOENIX-6421
> URL: https://issues.apache.org/jira/browse/PHOENIX-6421
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.16.1, 5.2.0, 5.1.2
>
> Attachments: 6421-5.1.txt
>
>
> Another one of these:
> {code:java}
> > create table test1(pk1 integer not null primary key, v1 float, v2 
> > float[10]);
> No rows affected (0.673 seconds)
> > create local index l2 on test1(v1);   
> No rows affected (10.889 seconds)
> > upsert into test1 values(rand() * 1000, rand(), ARRAY[rand(),rand(), 
> > rand()]);
> 1 row affected (0.045 seconds)
> > select /*+ NO_INDEX */ v2[1] from test1 where v1 < 1;
> +---+
> | ARRAY_ELEM(V2, 1) |
> +---+
> | 0.49338496    |
> +---+
> 1 row selected (0.037 seconds)
> > select v2[1] from test1 where v1 < 1;
> +-+
> | ARRAY_ELEM("V2", 1) |
> +-+
> | null    |
> +-+
> 1 row selected (0.062 seconds)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6421) Selecting an indexed array value from an uncovered column with local index returns NULL

2021-03-18 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6421:
---
Description: 
Another one of these:
{code:java}
> create table test1(pk1 integer not null primary key, v1 float, v2 float[10]);
No rows affected (0.673 seconds)

> create local index l2 on test1(v1);   
No rows affected (10.889 seconds)

> upsert into test1 values(rand() * 1000, rand(), ARRAY[rand(),rand(), 
> rand()]);
1 row affected (0.045 seconds)

> select /*+ NO_INDEX */ v2[1] from test1 where v1 < 1;
+---+
| ARRAY_ELEM(V2, 1) |
+---+
| 0.49338496    |
+---+
1 row selected (0.037 seconds)

> select v2[1] from test1 where v1 < 1;
+-+
| ARRAY_ELEM("V2", 1) |
+-+
| null    |
+-+
1 row selected (0.062 seconds)
{code}
 

> Selecting an indexed array value from an uncovered column with local index 
> returns NULL
> ---
>
> Key: PHOENIX-6421
> URL: https://issues.apache.org/jira/browse/PHOENIX-6421
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
>
> Another one of these:
> {code:java}
> > create table test1(pk1 integer not null primary key, v1 float, v2 
> > float[10]);
> No rows affected (0.673 seconds)
> > create local index l2 on test1(v1);   
> No rows affected (10.889 seconds)
> > upsert into test1 values(rand() * 1000, rand(), ARRAY[rand(),rand(), 
> > rand()]);
> 1 row affected (0.045 seconds)
> > select /*+ NO_INDEX */ v2[1] from test1 where v1 < 1;
> +---+
> | ARRAY_ELEM(V2, 1) |
> +---+
> | 0.49338496    |
> +---+
> 1 row selected (0.037 seconds)
> > select v2[1] from test1 where v1 < 1;
> +-+
> | ARRAY_ELEM("V2", 1) |
> +-+
> | null    |
> +-+
> 1 row selected (0.062 seconds)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   4   5   6   7   8   9   10   >