[jira] [Resolved] (PHOENIX-7291) Bump up omid to 1.1.2

2024-03-31 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-7291.
--
Resolution: Fixed

Pushed to master, 5.x branches. Thanks for review [~stoty].

> Bump up omid to 1.1.2
> -
>
> Key: PHOENIX-7291
> URL: https://issues.apache.org/jira/browse/PHOENIX-7291
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.3.0, 5.1.4
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7291) Bump up omid to 1.1.2

2024-03-31 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7291:
-
Fix Version/s: 5.3.0

> Bump up omid to 1.1.2
> -
>
> Key: PHOENIX-7291
> URL: https://issues.apache.org/jira/browse/PHOENIX-7291
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.3.0, 5.1.4
>
>






[jira] [Updated] (PHOENIX-7291) Bump up omid to 1.1.2

2024-03-26 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7291:
-
Fix Version/s: 5.2.0
   5.1.4

> Bump up omid to 1.1.2
> -
>
> Key: PHOENIX-7291
> URL: https://issues.apache.org/jira/browse/PHOENIX-7291
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>






[jira] [Created] (PHOENIX-7291) Bump up omid to 1.1.2

2024-03-26 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7291:


 Summary: Bump up omid to 1.1.2
 Key: PHOENIX-7291
 URL: https://issues.apache.org/jira/browse/PHOENIX-7291
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla








[jira] [Closed] (OMID-278) Change default waitStrategy to LOW_CPU

2024-03-26 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-278.


> Change default waitStrategy to LOW_CPU
> --
>
> Key: OMID-278
> URL: https://issues.apache.org/jira/browse/OMID-278
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.2
>
>
> The default config for the TSO server causes it to burn 400% CPU.
> This is a shock for casual users, and not very environmentally friendly 
> either.
> I think that anyone who needs the kind of performance benefit that 
> HIGH_THROUGHPUT brings is able to configure that explicitly.
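For context, the switch would look something like the following server-config override. This is a sketch: the key name `waitStrategy` and the file name are assumptions based on Omid's YAML server configuration, so verify both against your Omid version's `default-omid-server-configuration.yml`.

```yaml
# Hypothetical omid-server-configuration.yml fragment (key/file names assumed):
# switch the TSO wait strategy from busy-spinning to a blocking one.
waitStrategy: "LOW_CPU"   # previous default: "HIGH_THROUGHPUT"
```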





[jira] [Closed] (OMID-280) Use Hbase 2.5 for building OMID

2024-03-26 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-280.


> Use Hbase 2.5 for building OMID
> ---
>
> Key: OMID-280
> URL: https://issues.apache.org/jira/browse/OMID-280
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
> Fix For: 1.1.2
>
>
> We currently build with HBase 2.4.17, and Hadoop 3.1.4.
> Using 2.5.7-hadoop3 and Hadoop 3.2.4 would get rid of a lot of CVEs in the 
> binary assembly (tso-server).
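In build terms the bump would look roughly like the following pom.xml properties. The property names here are illustrative assumptions, not necessarily the ones Omid's parent pom actually uses.

```xml
<!-- Hypothetical property names; check Omid's parent pom for the real ones. -->
<properties>
  <hbase.version>2.5.7-hadoop3</hbase.version>
  <hadoop.version>3.2.4</hadoop.version>
</properties>
```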





[jira] [Closed] (OMID-266) Remove and ban unrelocated Guava from Omid

2024-03-26 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-266.


> Remove and ban unrelocated Guava from Omid
> --
>
> Key: OMID-266
> URL: https://issues.apache.org/jira/browse/OMID-266
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.2
>
>
> Guava used directly should be the phoenix-thirdparty Guava.
> All the other dependencies are supposed to include/pull in their own 
> pre-shaded Guava.
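The "ban" half of the issue title is typically done with the Maven Enforcer Plugin's `bannedDependencies` rule. The wiring below is a sketch under that assumption, not a copy of Omid's actual pom.

```xml
<!-- Hypothetical sketch of banning unrelocated Guava via the Maven Enforcer
     Plugin; coordinates and plugin wiring must be checked against Omid's pom. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>ban-unrelocated-guava</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <excludes>
              <!-- direct use must go through phoenix-thirdparty instead -->
              <exclude>com.google.guava:guava</exclude>
            </excludes>
          </bannedDependencies>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```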





[jira] [Closed] (OMID-277) Omid 1.1.2 fails with Phoenix 5.2

2024-03-26 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-277.


> Omid 1.1.2 fails with Phoenix 5.2
> -
>
> Key: OMID-277
> URL: https://issues.apache.org/jira/browse/OMID-277
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.1, 1.1.2
>Reporter: Lars Hofhansl
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.2
>
>
> Try to create a transactional table with Phoenix 5.2 and Omid 1.1.2, and 
> you'll find this in the RS log:
> {code:java}
>  2024-02-28T20:26:13,055 ERROR [RS_OPEN_REGION-regionserver/think:16020-2] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor threw 
> java.lang.NoClassDefFoundE
> rror: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> at 
> org.apache.omid.transaction.OmidSnapshotFilter.start(OmidSnapshotFilter.java:85)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor.start(OmidTransactionalProcessor.java:44)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.hadoop.hbase.coprocessor.BaseEnvironment.startup(BaseEnvironment.java:69)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.checkAndLoadInstance(CoprocessorHost.java:285)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:249)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:388)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:278)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:859) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:734) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
>  ~[?:?]
> at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) 
> ~[?:?]
> at java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6971) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7184)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7161) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7120) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7076) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:149)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
>  ~[?:?]
> at java.lang.Thread.run(Thread.java:1583) ~[?:?]
> Caused by: java.lang.ExceptionInInitializerError: Exception 
> java.lang.NoClassDefFoundError: 
> org/apache/phoenix/shaded/com/google/common/base/Charsets [in thread 
> "RS_OPEN_REGION-regionserver/think:16020-2"]
> at 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig.<clinit>(HBaseCommitTableConfig.java:36)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at org.apache.omid.transaction.OmidCompactor.start(OmidCompactor.java:92) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidGCProcessor.start(OmidGCProcessor.java:43) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> ... 21 more{code}
>  
> As before I have no time to track this down as I do not work on Phoenix/HBase 
> anymore, but at least I can file an issue. :)
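The caused-by frame points at a missing shaded Guava `Charsets` class. As a hedged illustration of one possible fix pattern (not necessarily the patch Omid shipped), replacing Guava's `Charsets` with the JDK's `java.nio.charset.StandardCharsets` removes that runtime dependency entirely; the class and method names below are made up for the example.

```java
import java.nio.charset.StandardCharsets;

public class CharsetFix {
    // Before (fails at class init if shaded Guava is absent at runtime):
    //   byte[] qualifier = someString.getBytes(Charsets.UTF_8);
    // After: StandardCharsets ships with the JDK, so no external dependency.
    static byte[] qualifier(String name) {
        return name.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(qualifier("COMMIT_TABLE").length); // prints 12
    }
}
```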





[jira] [Updated] (OMID-190) Update website for 1.0.2

2024-03-26 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-190:
-
Fix Version/s: 1.1.3
   (was: 1.1.2)

> Update website for 1.0.2
> 
>
> Key: OMID-190
> URL: https://issues.apache.org/jira/browse/OMID-190
> Project: Phoenix Omid
>  Issue Type: Improvement
>Affects Versions: 1.0.2
>Reporter: Istvan Toth
>Priority: Major
> Fix For: 1.1.3
>
>
> The site repo URL has changed, and the download links point to the old repo 
> and release dirs.
>  





[jira] [Updated] (PHOENIX-7279) column not found exception when aliased column used in order by of union all query and first query in it also aliased

2024-03-15 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7279:
-
Priority: Critical  (was: Major)

> column not found exception when aliased column used in order by of union all 
> query and first query in it also aliased
> -
>
> Key: PHOENIX-7279
> URL: https://issues.apache.org/jira/browse/PHOENIX-7279
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 5.2.0, 5.3.0, 5.1.4
>
>






[jira] [Updated] (PHOENIX-7279) column not found exception when aliased column used in order by of union all query and first query in it also aliased

2024-03-15 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7279:
-
Fix Version/s: 5.2.0
   5.3.0
   5.1.4

> column not found exception when aliased column used in order by of union all 
> query and first query in it also aliased
> -
>
> Key: PHOENIX-7279
> URL: https://issues.apache.org/jira/browse/PHOENIX-7279
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.3.0, 5.1.4
>
>






[jira] [Created] (PHOENIX-7279) column not found exception when aliased column used in order by of union all query and first query in it also aliased

2024-03-15 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7279:


 Summary: column not found exception when aliased column used in 
order by of union all query and first query in it also aliased
 Key: PHOENIX-7279
 URL: https://issues.apache.org/jira/browse/PHOENIX-7279
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla








[jira] [Updated] (PHOENIX-7274) Possible column ambiguity error with union all queries when column used in any operators and result aliased to same column

2024-03-12 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7274:
-
Description: 
A column ambiguity error is possible when a column alias has the same name as 
the column itself and the column is used in an operator or cast.
The DDLs are from the joins examples here:
https://phoenix.apache.org/joins.html

{code:java}
0: jdbc:phoenix:> select * from (select cast(customerid as char(10)) as 
customerid from orders) as tt union all select * from (select cast(customerid 
as char(10)) as customerid from customers) as nn;
Error: ERROR 502 (42702): Column reference ambiguous or duplicate names. 
columnName=CUSTOMERID (state=42702,code=502)
org.apache.phoenix.schema.AmbiguousColumnException: ERROR 502 (42702): Column 
reference ambiguous or duplicate names. columnName=CUSTOMERID
at 
org.apache.phoenix.parse.ParseNodeRewriter.visit(ParseNodeRewriter.java:461)
at 
org.apache.phoenix.compile.SubselectRewriter.visit(SubselectRewriter.java:578)
at 
org.apache.phoenix.compile.SubselectRewriter.visit(SubselectRewriter.java:58)
at 
org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
at 
org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:60)
at 
org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:112)
at 
org.apache.phoenix.compile.SubselectRewriter.flatten(SubselectRewriter.java:570)
at 
org.apache.phoenix.compile.SubselectRewriter.flatten(SubselectRewriter.java:353)
at org.apache.phoenix.util.ParseNodeUtil.rewrite(ParseNodeUtil.java:175)
at 
org.apache.phoenix.compile.QueryCompiler.compileSubquery(QueryCompiler.java:644)
at 
org.apache.phoenix.compile.QueryCompiler.compileUnionAll(QueryCompiler.java:222)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:176)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:547)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:510)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:303)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:302)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:295)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2061)
at sqlline.Commands.executeSingleQuery(Commands.java:1054)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206){code}


  was:
There is possible column ambiguity when column alias name is same as column 
name and used in some operators or cast similarly getting NPE also

{code:java}
0: jdbc:phoenix:> select * from (select cast(customerid as char(10)) as 
customerid from orders) as tt union all select * from (select cast(customerid 
as char(10)) as customerid from customers) as nn;
Error: ERROR 502 (42702): Column reference ambiguous or duplicate names. 
columnName=CUSTOMERID (state=42702,code=502)
org.apache.phoenix.schema.AmbiguousColumnException: ERROR 502 (42702): Column 
reference ambiguous or duplicate names. columnName=CUSTOMERID
at 
org.apache.phoenix.parse.ParseNodeRewriter.visit(ParseNodeRewriter.java:461)
at 
org.apache.phoenix.compile.SubselectRewriter.visit(SubselectRewriter.java:578)
at 
org.apache.phoenix.compile.SubselectRewriter.visit(SubselectRewriter.java:58)
at 
org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
at 
org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:60)
at 
org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:112)
at 
org.apache.phoenix.compile.SubselectRewriter.flatten(SubselectRewriter.java:570)
at 
org.apache.phoenix.compile.SubselectRewriter.flatten(SubselectRewriter.java:353)
at org.apache.phoenix.util.ParseNodeUtil.rewrite(ParseNodeUtil.java:175)
at 
org.apache.phoenix.compile.QueryCompiler.compileSubquery(QueryCompiler.java:644)
at 
org.apache.phoenix.compile.QueryCompiler.compileUnionAll(QueryCompiler.java:222)
at 

[jira] [Created] (PHOENIX-7274) Possible column ambiguity error with union all queries when column used in any operators and result aliased to same column

2024-03-12 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7274:


 Summary: Possible column ambiguity error with union all queries 
when column used in any operators and result aliased to same column
 Key: PHOENIX-7274
 URL: https://issues.apache.org/jira/browse/PHOENIX-7274
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


A column ambiguity error is possible when a column alias has the same name as 
the column itself and the column is used in an operator or cast; an NPE can 
occur similarly.

{code:java}
0: jdbc:phoenix:> select * from (select cast(customerid as char(10)) as 
customerid from orders) as tt union all select * from (select cast(customerid 
as char(10)) as customerid from customers) as nn;
Error: ERROR 502 (42702): Column reference ambiguous or duplicate names. 
columnName=CUSTOMERID (state=42702,code=502)
org.apache.phoenix.schema.AmbiguousColumnException: ERROR 502 (42702): Column 
reference ambiguous or duplicate names. columnName=CUSTOMERID
at 
org.apache.phoenix.parse.ParseNodeRewriter.visit(ParseNodeRewriter.java:461)
at 
org.apache.phoenix.compile.SubselectRewriter.visit(SubselectRewriter.java:578)
at 
org.apache.phoenix.compile.SubselectRewriter.visit(SubselectRewriter.java:58)
at 
org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
at 
org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:60)
at 
org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:112)
at 
org.apache.phoenix.compile.SubselectRewriter.flatten(SubselectRewriter.java:570)
at 
org.apache.phoenix.compile.SubselectRewriter.flatten(SubselectRewriter.java:353)
at org.apache.phoenix.util.ParseNodeUtil.rewrite(ParseNodeUtil.java:175)
at 
org.apache.phoenix.compile.QueryCompiler.compileSubquery(QueryCompiler.java:644)
at 
org.apache.phoenix.compile.QueryCompiler.compileUnionAll(QueryCompiler.java:222)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:176)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:547)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:510)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:303)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:302)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:295)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2061)
at sqlline.Commands.executeSingleQuery(Commands.java:1054)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206){code}






[jira] [Moved] (PHOENIX-7273) Add operator for converting decimal to character string

2024-03-12 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla moved OMID-288 to PHOENIX-7273:
---

 Key: PHOENIX-7273  (was: OMID-288)
Workflow: no-reopen-closed, patch-avail  (was: jira)
 Project: Phoenix  (was: Phoenix Omid)

> Add operator for converting decimal to character string
> ---
>
> Key: PHOENIX-7273
> URL: https://issues.apache.org/jira/browse/PHOENIX-7273
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nikita Pande
>Priority: Major
>
> Add operator for converting decimal to character string





[jira] [Updated] (PHOENIX-7263) Row value constructor split keys not allowed on indexes

2024-03-07 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7263:
-
Fix Version/s: 5.3.0
   (was: 5.2.1)

> Row value constructor split keys not allowed on indexes
> ---
>
> Key: PHOENIX-7263
> URL: https://issues.apache.org/jira/browse/PHOENIX-7263
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.3.0, 5.1.4
>
>
> While creating an index, passing row value constructor split keys produces 
> the following error. The same statement passes with CREATE TABLE because 
> table creation properly builds the split keys using the expression compiler, 
> which is not the case with index creation.
> {noformat}
> java.lang.ClassCastException: 
> org.apache.phoenix.expression.RowValueConstructorExpression cannot be cast to 
> org.apache.phoenix.expression.LiteralExpression
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler.compile(CreateIndexCompiler.java:77)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1205)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1191)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:425)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:424)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:412)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2009)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> {noformat}
> In create table:
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> ImmutableBytesWritable ptr = context.getTempPtr();
> ExpressionCompiler expressionCompiler = new 
> ExpressionCompiler(context);
> for (int i = 0; i < splits.length; i++) {
> ParseNode node = splitNodes.get(i);
> if (node instanceof BindParseNode) {
> context.getBindManager().addParamMetaData((BindParseNode) 
> node, VARBINARY_DATUM);
> }
> if (node.isStateless()) {
> Expression expression = node.accept(expressionCompiler);
> if (expression.evaluate(null, ptr)) {
> splits[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
> continue;
> }
> }
> throw new 
> SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
> .setMessage("Node: " + node).build().buildException();
> }
> {code}
> Whereas index creation expects only literals.
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> for (int i = 0; i < splits.length; i++) {
> ParseNode node = splitNodes.get(i);
> if (!node.isStateless()) {
> throw new 
> SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
> .setMessage("Node: " + node).build().buildException();
> }
> LiteralExpression expression = 
> (LiteralExpression)node.accept(expressionCompiler);
> splits[i] = expression.getBytes();
> }
> {code}





[jira] [Updated] (PHOENIX-7263) Row value constructor split keys not allowed on indexes

2024-03-07 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7263:
-
Fix Version/s: 5.2.0
   5.2.1
   5.1.4

> Row value constructor split keys not allowed on indexes
> ---
>
> Key: PHOENIX-7263
> URL: https://issues.apache.org/jira/browse/PHOENIX-7263
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.2.1, 5.1.4
>
>
> While creating an index, passing row value constructor split keys produces 
> the following error. The same statement passes with CREATE TABLE because 
> table creation properly builds the split keys using the expression compiler, 
> which is not the case with index creation.
> {noformat}
> java.lang.ClassCastException: 
> org.apache.phoenix.expression.RowValueConstructorExpression cannot be cast to 
> org.apache.phoenix.expression.LiteralExpression
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler.compile(CreateIndexCompiler.java:77)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1205)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1191)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:435)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:425)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:424)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:412)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2009)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> {noformat}
> In create table:
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> ImmutableBytesWritable ptr = context.getTempPtr();
> ExpressionCompiler expressionCompiler = new 
> ExpressionCompiler(context);
> for (int i = 0; i < splits.length; i++) {
> ParseNode node = splitNodes.get(i);
> if (node instanceof BindParseNode) {
> context.getBindManager().addParamMetaData((BindParseNode) 
> node, VARBINARY_DATUM);
> }
> if (node.isStateless()) {
> Expression expression = node.accept(expressionCompiler);
> if (expression.evaluate(null, ptr)) {
> splits[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
> continue;
> }
> }
> throw new 
> SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
> .setMessage("Node: " + node).build().buildException();
> }
> {code}
> Whereas index creation expects only literals.
> {code:java}
> final byte[][] splits = new byte[splitNodes.size()][];
> for (int i = 0; i < splits.length; i++) {
> ParseNode node = splitNodes.get(i);
> if (!node.isStateless()) {
> throw new 
> SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
> .setMessage("Node: " + node).build().buildException();
> }
> LiteralExpression expression = 
> (LiteralExpression)node.accept(expressionCompiler);
> splits[i] = expression.getBytes();
> }
> {code}





[jira] [Created] (PHOENIX-7263) Row value constructor split keys not allowed on indexes

2024-03-07 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7263:


 Summary: Row value constructor split keys not allowed on indexes
 Key: PHOENIX-7263
 URL: https://issues.apache.org/jira/browse/PHOENIX-7263
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


While creating an index, passing row value constructor split keys produces the 
following error. The same statement passes with CREATE TABLE because table 
creation properly builds the split keys using the expression compiler, which 
is not the case with index creation.
{noformat}
java.lang.ClassCastException: 
org.apache.phoenix.expression.RowValueConstructorExpression cannot be cast to 
org.apache.phoenix.expression.LiteralExpression
at 
org.apache.phoenix.compile.CreateIndexCompiler.compile(CreateIndexCompiler.java:77)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1205)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateIndexStatement.compilePlan(PhoenixStatement.java:1191)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:435)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:425)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:424)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:412)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2009)
at sqlline.Commands.executeSingleQuery(Commands.java:1054)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
{noformat}

In create table:

{code:java}
final byte[][] splits = new byte[splitNodes.size()][];
ImmutableBytesWritable ptr = context.getTempPtr();
ExpressionCompiler expressionCompiler = new ExpressionCompiler(context);
for (int i = 0; i < splits.length; i++) {
ParseNode node = splitNodes.get(i);
if (node instanceof BindParseNode) {
context.getBindManager().addParamMetaData((BindParseNode) node, 
VARBINARY_DATUM);
}
if (node.isStateless()) {
Expression expression = node.accept(expressionCompiler);
if (expression.evaluate(null, ptr)) {
splits[i] = ByteUtil.copyKeyBytesIfNecessary(ptr);
continue;
}
}
throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
.setMessage("Node: " + node).build().buildException();
}
{code}

Whereas index creation expects only literals.

{code:java}
final byte[][] splits = new byte[splitNodes.size()][];
for (int i = 0; i < splits.length; i++) {
ParseNode node = splitNodes.get(i);
if (!node.isStateless()) {
throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.SPLIT_POINT_NOT_CONSTANT)
.setMessage("Node: " + node).build().buildException();
}
LiteralExpression expression = 
(LiteralExpression)node.accept(expressionCompiler);
splits[i] = expression.getBytes();
}
{code}
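To see why the two code paths above diverge, here is a hypothetical, self-contained analogue (plain Java; these are not Phoenix's real classes, and the names are made up for illustration). Casting every stateless node to a literal fails for a composite expression such as a row value constructor, while evaluating the expression works for both shapes:

```java
// Hypothetical, simplified analogue of the two Phoenix code paths above.
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

interface Expr {
    byte[] evaluate();
}

class LiteralExpr implements Expr {
    private final byte[] bytes;
    LiteralExpr(String v) { this.bytes = v.getBytes(StandardCharsets.UTF_8); }
    public byte[] evaluate() { return bytes; }
}

class RowValueCtorExpr implements Expr {
    private final List<Expr> children;
    RowValueCtorExpr(List<Expr> children) { this.children = children; }
    public byte[] evaluate() {
        // Concatenate child evaluations, loosely mimicking split-key building.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Expr c : children) out.writeBytes(c.evaluate());
        return out.toByteArray();
    }
}

public class SplitKeyDemo {
    // CreateTableCompiler-style path: evaluate any stateless expression.
    static byte[] splitKeyByEvaluate(Expr e) { return e.evaluate(); }

    // CreateIndexCompiler-style path: assumes every node is a literal,
    // so a row value constructor triggers the ClassCastException above.
    static byte[] splitKeyByCast(Expr e) { return ((LiteralExpr) e).evaluate(); }

    public static void main(String[] args) {
        Expr rvc = new RowValueCtorExpr(
                List.of(new LiteralExpr("a"), new LiteralExpr("b")));
        System.out.println(splitKeyByEvaluate(rvc).length); // prints 2
        try {
            splitKeyByCast(rvc);
        } catch (ClassCastException expected) {
            System.out.println("ClassCastException, as in the stack trace");
        }
    }
}
```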







[jira] [Created] (OMID-287) Improve omid startup script to have all the options like pid file generation, log handling etc

2024-03-06 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created OMID-287:


 Summary: Improve omid startup script to have all the options like 
pid file generation, log handling etc
 Key: OMID-287
 URL: https://issues.apache.org/jira/browse/OMID-287
 Project: Phoenix Omid
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Nihal Jain


Currently the Omid startup script has no way to check liveness via a pid file, no log 
handling such as log rolling during startup, and no way to pass a custom log 
directory. It would be better to adopt the hbase-env.sh script to pick up 
environment variables such as HBASE_PID_DIR, HBASE_LOG_DIR etc., the same way 
phoenix-queryserver does.

FYI [~nihaljain.cs] [~stoty] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7257) Writes to index tables on immutable table can be parallel for better performance

2024-03-05 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7257:


 Summary: Writes to index tables on immutable table can be parallel 
for better performance
 Key: PHOENIX-7257
 URL: https://issues.apache.org/jira/browse/PHOENIX-7257
 Project: Phoenix
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


A three-phase commit was introduced to ensure strong consistency between the data 
table and its index tables when the data table is immutable. 
Writing data to the indexes currently happens serially; it could be made parallel for 
better performance. Here is the code snippet in MutationState:
{noformat}
private void sendMutations(Iterator<Entry<TableInfo, List<Mutation>>> mutationsIterator,
        Span span, ImmutableBytesWritable indexMetaDataPtr, boolean isVerifiedPhase)
        throws SQLException {
    while (mutationsIterator.hasNext()) {
        Entry<TableInfo, List<Mutation>> pair = mutationsIterator.next();
        TableInfo tableInfo = pair.getKey();
        byte[] htableName = tableInfo.getHTableName().getBytes();
        List<Mutation> mutationList = pair.getValue();
        List<List<Mutation>> mutationBatchList =
                getMutationBatchList(batchSize, batchSizeBytes, mutationList);

        Table hTable = connection.getQueryServices().getTable(htableName);
        try {
            if (table.isTransactional()) {
                // Track tables to which we've sent uncommitted data
                if (tableInfo.isDataTable()) {
                    uncommittedPhysicalNames.add(table.getPhysicalName().getString());
                    phoenixTransactionContext.markDMLFence(table);
                }
                // Only pass true for last argument if the index is being written to on its
                // own (i.e. initial index population), not if it's being written to for
                // normal maintenance due to writes to the data table. This case is different
                // because the initial index population does not need to be done
                // transactionally since the index is only made active after all writes have
                // occurred successfully.
                hTable = phoenixTransactionContext.getTransactionalTableWriter(connection,
                        table, hTable,
                        tableInfo.isDataTable() && table.getType() == PTableType.INDEX);
            }
            ...

            hTable.batch(mutationBatch, null);
        }
        ...
    }
}
{noformat}
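The serial loop above (one hTable.batch call per batch, one table at a time) could be parallelized by submitting batches to a thread pool and then joining the futures. A minimal, hypothetical sketch of the idea, using String as a stand-in for Mutation and a counter as a stand-in for the HBase write, not Phoenix's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the proposal: submit each mutation batch to a thread
// pool instead of writing batches serially, then join all futures.
public class ParallelIndexWriteSketch {

    // Stand-in for hTable.batch(mutationBatch, null); returns mutations written.
    static int writeBatch(List<String> mutationBatch) {
        return mutationBatch.size();
    }

    static int writeAllParallel(List<List<String>> mutationBatchList) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (List<String> batch : mutationBatchList) {
                Callable<Integer> task = () -> writeBatch(batch);
                futures.add(pool.submit(task));
            }
            int written = 0;
            // Joining every future preserves the serial loop's error semantics:
            // any failed batch surfaces here as an ExecutionException.
            for (Future<Integer> f : futures) {
                written += f.get();
            }
            return written;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<List<String>> batches = List.of(List.of("m1", "m2"), List.of("m3"));
        System.out.println(writeAllParallel(batches)); // prints 3
    }
}
```

This only applies safely to immutable tables, where index rows never need to be ordered against concurrent updates of the same data row.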



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7254) Allow transactions on tables with column encoding

2024-03-05 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7254:


 Summary: Allow transactions on tables with column encoding 
 Key: PHOENIX-7254
 URL: https://issues.apache.org/jira/browse/PHOENIX-7254
 Project: Phoenix
  Issue Type: Improvement
  Components: omid
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently a table cannot be made transactional when column encoding is enabled. 
It would be better to remove such constraints to increase Omid adoption.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (OMID-285) Enable build pipelines for Omid

2024-03-04 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created OMID-285:


 Summary: Enable build pipelines for Omid
 Key: OMID-285
 URL: https://issues.apache.org/jira/browse/OMID-285
 Project: Phoenix Omid
  Issue Type: Task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently there are no build pipelines for Omid. 
It would be better to have both precommit and nightly builds.

FYI [~stoty] [~nihaljain.cs] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-284) Use protobuf 3 in Omid

2024-03-04 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823452#comment-17823452
 ] 

Rajeshbabu Chintaguntla commented on OMID-284:
--

Agree with you, [~stoty]. Better to bump it up.

> Use protobuf 3 in Omid
> --
>
> Key: OMID-284
> URL: https://issues.apache.org/jira/browse/OMID-284
> Project: Phoenix Omid
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Istvan Toth
>Priority: Critical
>
> Omid uses Protobuf 2.5.0.
> It only uses protobuf for communicating with the TSO server, it does not 
> implement an HBase endpoint, so I see no reason not to use the latest version.
> This could be done in 1.2, provided that the switch does not cause 
> compatibility issues (I expect none)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-277) Omid 1.1.2 fails with Phoenix 5.2

2024-03-04 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823260#comment-17823260
 ] 

Rajeshbabu Chintaguntla commented on OMID-277:
--

[~stoty] Yes, Istvan. Testing on local clusters; once it's fine I will start the RC.

> Omid 1.1.2 fails with Phoenix 5.2
> -
>
> Key: OMID-277
> URL: https://issues.apache.org/jira/browse/OMID-277
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.1, 1.1.2
>Reporter: Lars Hofhansl
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.2
>
>
> Try to create a transactional table with Phoenix 5.2 and Omid 1.1.2, and 
> you'll find this in the RS log:
> {code:java}
>  2024-02-28T20:26:13,055 ERROR [RS_OPEN_REGION-regionserver/think:16020-2] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor threw 
> java.lang.NoClassDefFoundE
> rror: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig
> at 
> org.apache.omid.transaction.OmidSnapshotFilter.start(OmidSnapshotFilter.java:85)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidTransactionalProcessor.start(OmidTransactionalProcessor.java:44)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.hadoop.hbase.coprocessor.BaseEnvironment.startup(BaseEnvironment.java:69)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.checkAndLoadInstance(CoprocessorHost.java:285)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:249)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:388)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:278)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:859) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:734) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
>  ~[?:?]
> at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) 
> ~[?:?]
> at java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6971) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7184)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7161) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7120) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7076) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:149)
>  ~[hbase-server-2.5.7.jar:2.5.7]
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) 
> ~[hbase-server-2.5.7.jar:2.5.7]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
>  ~[?:?]
> at java.lang.Thread.run(Thread.java:1583) ~[?:?]
> Caused by: java.lang.ExceptionInInitializerError: Exception 
> java.lang.NoClassDefFoundError: 
> org/apache/phoenix/shaded/com/google/common/base/Charsets [in thread 
> "RS_OPEN_REGION-regionserver/think:16020-2"]
> at 
> org.apache.omid.committable.hbase.HBaseCommitTableConfig.<clinit>(HBaseCommitTableConfig.java:36)
>  ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at org.apache.omid.transaction.OmidCompactor.start(OmidCompactor.java:92) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> at 
> org.apache.phoenix.coprocessor.OmidGCProcessor.start(OmidGCProcessor.java:43) 
> ~[phoenix-server-hbase-2.5-5.2.0.jar:5.2.0]
> ... 21 more{code}
>  
> As before I have no time to track this down as I do not work on Phoenix/HBase 
> anymore, but at least I can file an issue. :)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-272) Support JDK17

2024-02-28 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821958#comment-17821958
 ] 

Rajeshbabu Chintaguntla commented on OMID-272:
--

The PR is still open; it can be closed, right [~stoty]?

> Support JDK17
> -
>
> Key: OMID-272
> URL: https://issues.apache.org/jira/browse/OMID-272
> Project: Phoenix Omid
>  Issue Type: Improvement
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.1
>
>
> Tests fail hard with JDK 17.
> Hopefully we get this running simply by copying the modules options from 
> HBase to the surefire command line and the startup scripts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7226) Fix flaky tests in 5.1 branch builds

2024-02-20 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7226:


 Summary: Fix flaky tests in 5.1 branch builds
 Key: PHOENIX-7226
 URL: https://issues.apache.org/jira/browse/PHOENIX-7226
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.1.4






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7225) Phoenix 5.1.4 release

2024-02-20 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7225:


 Summary: Phoenix 5.1.4 release
 Key: PHOENIX-7225
 URL: https://issues.apache.org/jira/browse/PHOENIX-7225
 Project: Phoenix
  Issue Type: Task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


This task tracks the 5.1.4 release work. As part of it, the following items will 
be worked on:

1) Make 5.1 build pipelines green by fixing the flaky tests

2) Create RC

3) Publish release artifacts.

4) Update website

5) Update the versions to 5.1.5-SNAPSHOT.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7224) Fix failing test case IndexMetadataIT#testAsyncRebuildAll in builds

2024-02-20 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7224:
-
Description: 
Observed this test case failure in the 5.1 branch pipeline:
{code:java}
org.junit.ComparisonFailure: expected:<[COMPLE]TED> but was:<[STAR]TED> at 
org.junit.Assert.assertEquals(Assert.java:117) at 
org.junit.Assert.assertEquals(Assert.java:146) at 
org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:698)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.apache.phoenix.SystemExitRule$1.evaluate(SystemExitRule.java:40) at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.junit.runners.Suite.runChild(Suite.java:128) at 
org.junit.runners.Suite.runChild(Suite.java:27) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:49) at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:120)
 at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:105)
 at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:77)
 at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:69)
 at 
org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:146)
 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507) at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)
{code}

  was:
Currently there is a failing test case in the 5.1 pipeline.
{noformat}

org.junit.ComparisonFailure: expected:<[COMPLE]TED> but was:<[STAR]TED> at 
org.junit.Assert.assertEquals(Assert.java:117) at 
org.junit.Assert.assertEquals(Assert.java:146) at 
org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:698)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 

[jira] [Created] (PHOENIX-7224) Fix failing test case IndexMetadataIT#testAsyncRebuildAll in builds

2024-02-20 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7224:


 Summary: Fix failing test case IndexMetadataIT#testAsyncRebuildAll 
in builds
 Key: PHOENIX-7224
 URL: https://issues.apache.org/jira/browse/PHOENIX-7224
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Currently there is a failing test case in the 5.1 pipeline.
{noformat}

org.junit.ComparisonFailure: expected:<[COMPLE]TED> but was:<[STAR]TED> at 
org.junit.Assert.assertEquals(Assert.java:117) at 
org.junit.Assert.assertEquals(Assert.java:146) at 
org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:698)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.apache.phoenix.SystemExitRule$1.evaluate(SystemExitRule.java:40) at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.junit.runners.Suite.runChild(Suite.java:128) at 
org.junit.runners.Suite.runChild(Suite.java:27) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:49) at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:120)
 at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:105)
 at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:77)
 at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:69)
 at 
org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:146)
 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507) at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)

{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7189) Update Omid to 1.1.1

2024-02-11 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-7189.
--
Fix Version/s: 5.2.0
   5.1.4
 Assignee: Rajeshbabu Chintaguntla
   Resolution: Fixed

Pushed to master and 5.1 branch. Thanks for review [~stoty].

> Update Omid to 1.1.1
> 
>
> Key: PHOENIX-7189
> URL: https://issues.apache.org/jira/browse/PHOENIX-7189
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 5.2.0, 5.1.4
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OMID-195) Add security system tests

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-195:
-
Fix Version/s: (was: 1.1.2)

> Add security system tests
> -
>
> Key: OMID-195
> URL: https://issues.apache.org/jira/browse/OMID-195
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> Currently there are not many system tests covering functionality when security 
> is enabled. This JIRA is to add tests with security enabled.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (OMID-206) Half of the regions of commit table not getting used

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-206:
-
Fix Version/s: (was: 1.1.2)

> Half of the regions of commit table not getting used
> 
>
> Key: OMID-206
> URL: https://issues.apache.org/jira/browse/OMID-206
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Attachments: Screen Shot 2021-03-30 at 11.32.54 PM.png
>
>
> PFA image:
> only half of the regions are getting load; the remaining half are not even 
> getting a single request.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OMID-206) Half of the regions of commit table not getting used

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815920#comment-17815920
 ] 

Rajeshbabu Chintaguntla commented on OMID-206:
--

It's not fixed and is still open, [~stoty]. Will check and get back on this.

> Half of the regions of commit table not getting used
> 
>
> Key: OMID-206
> URL: https://issues.apache.org/jira/browse/OMID-206
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 1.1.2
>
> Attachments: Screen Shot 2021-03-30 at 11.32.54 PM.png
>
>
> PFA image:
> only half of the regions are getting load; the remaining half are not even 
> getting a single request.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OMID-241) Add logging to TSO server crash

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-241.


> Add logging to TSO server crash
> ---
>
> Key: OMID-241
> URL: https://issues.apache.org/jira/browse/OMID-241
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 1.1.1
>
>
> Add more detailed logging here:
> [https://github.com/apache/phoenix-omid/blob/26906021cd7e8685ab195b1220f141739cd749ca/tso-server/src/main/java/org/apache/omid/tso/TSOServer.java#L160]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OMID-272) Support JDK17

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-272.


> Support JDK17
> -
>
> Key: OMID-272
> URL: https://issues.apache.org/jira/browse/OMID-272
> Project: Phoenix Omid
>  Issue Type: Improvement
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.1
>
>
> Tests fail hard with JDK 17.
> Hopefully we get this running simply by copying the modules options from 
> HBase to the surefire command line and the startup scripts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OMID-246) Update Surefire plugin to 3.0.0 and switch to TCP forkNode implementation

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-246.


> Update Surefire plugin to 3.0.0 and switch to TCP forkNode implementation
> -
>
> Key: OMID-246
> URL: https://issues.apache.org/jira/browse/OMID-246
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OMID-255) Upgrade guava to 32.1.3-jre

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-255.


> Upgrade guava to 32.1.3-jre
> ---
>
> Key: OMID-255
> URL: https://issues.apache.org/jira/browse/OMID-255
> Project: Phoenix Omid
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OMID-245) Add dependency management for Guava to use 32.1.1

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-245.


> Add dependency management for Guava to use 32.1.1
> -
>
> Key: OMID-245
> URL: https://issues.apache.org/jira/browse/OMID-245
> Project: Phoenix Omid
>  Issue Type: Task
>Affects Versions: 1.1.0
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 1.1.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OMID-253) Upgrade Netty to 4.1.100.Final

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-253.


> Upgrade Netty to 4.1.100.Final
> --
>
> Key: OMID-253
> URL: https://issues.apache.org/jira/browse/OMID-253
> Project: Phoenix Omid
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>
> Netty 4.1.86.Final has 
> [CVE-2023-34462|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-34462]
>  which has been fixed in version > 4.1.94.Final.
> This Jira is to bump to 4.1.100.Final



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OMID-240) Transactional visibility is broken

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-240.


> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 1.1.1
>
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OMID-258) Bump maven plugins/dependencies to latest

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-258.


> Bump maven plugins/dependencies to latest
> -
>
> Key: OMID-258
> URL: https://issues.apache.org/jira/browse/OMID-258
> Project: Phoenix Omid
>  Issue Type: Improvement
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>
> Plan is to bump following:
> ||Property||From||To||
> |apache parent pom version|23|30|
> |os.plugin.version|1.6.2|1.7.1|
> |google.findbugs.version|3.0.1|3.0.2|
> |maven-pmd-plugin.version|3.4|3.21.0|
> |maven-checkstyle-plugin.version|2.17|3.3.0|
> |maven-jxr-plugin.version|2.3|3.3.0|
> |maven-findbugs-maven-plugin.version|3.0.1|3.0.5|
> |maven-owasp-plugin.version|6.5.3|8.4.0|
> |maven-clover-plugin.version|4.4.1|4.5.0|
> |maven-sonar-plugin.version|3.9.1.2184|3.10.0.2594|
> Also need to remove and clean up all the version definitions that specify 
> versions older than the ones defined in the parent pom.
> CC: [~stoty] 
>  





[jira] [Closed] (OMID-250) Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-250.


> Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml
> -
>
> Key: OMID-250
> URL: https://issues.apache.org/jira/browse/OMID-250
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Anchal Kejriwal
>Priority: Trivial
> Fix For: 1.1.1
>
>
> hadoop-hdfs-client dependency is declared two times in main pom.
> {noformat}
>             <dependency>
>                 <groupId>org.apache.hadoop</groupId>
>                 <artifactId>hadoop-hdfs-client</artifactId>
>                 <version>${hadoop.version}</version>
>             </dependency>
>             <dependency>
>                 <groupId>org.apache.hadoop</groupId>
>                 <artifactId>hadoop-hdfs-client</artifactId>
>                 <version>${hadoop.version}</version>
>             </dependency>
> {noformat}
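As an aside, duplicates like this are easy to spot mechanically. A minimal sketch using Python's standard-library XML parser (the pom fragment below is a made-up example, not taken from the Omid repository):

```python
import xml.etree.ElementTree as ET
from collections import Counter

def duplicate_dependencies(pom_xml: str):
    """Return (groupId, artifactId) pairs declared more than once in a pom."""
    # Maven poms normally use a default namespace; ignore it by stripping
    # the '{namespace}' prefix from each tag.
    root = ET.fromstring(pom_xml)
    coords = []
    for elem in root.iter():
        if elem.tag.split('}')[-1] == 'dependency':
            fields = {c.tag.split('}')[-1]: (c.text or '').strip() for c in elem}
            coords.append((fields.get('groupId'), fields.get('artifactId')))
    return [c for c, n in Counter(coords).items() if n > 1]

pom = """
<project>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs-client</artifactId>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs-client</artifactId>
    </dependency>
  </dependencies>
</project>
"""
print(duplicate_dependencies(pom))
# [('org.apache.hadoop', 'hadoop-hdfs-client')]
```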





[jira] [Closed] (OMID-248) Transactional Phoenix tests fail on Java 17 in getDefaultNetworkInterface

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-248.


> Transactional Phoenix tests fail on Java 17  in getDefaultNetworkInterface
> --
>
> Key: OMID-248
> URL: https://issues.apache.org/jira/browse/OMID-248
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.1
>
>
> When running the Phoenix test suite with JDK 17, we get errors like
> {noformat}
> [ERROR] 
> org.apache.phoenix.end2end.TransactionalViewIT.testInvalidRowsWithStats[TransactionalViewIT_transactionProvider=OMID]
>   Time elapsed: 0.001 s  <<< ERROR!
> java.lang.IllegalArgumentException: No network 'en*'/'eth*' interfaces found
>     at 
> org.apache.omid.NetworkUtils.getDefaultNetworkInterface(NetworkUtils.java:68)
>     at org.apache.omid.tso.TSOServerConfig.(TSOServerConfig.java:88)
>     at org.apache.omid.tso.TSOServerConfig.(TSOServerConfig.java:56)
>     at 
> org.apache.phoenix.transaction.OmidTransactionService.startAndInjectOmidTransactionService(OmidTransactionService.java:62)
>     at 
> org.apache.phoenix.transaction.TransactionServiceManager.startTransactionService(TransactionServiceManager.java:33)
>     at 
> org.apache.phoenix.end2end.ConnectionQueryServicesTestImpl.initTransactionClient(ConnectionQueryServicesTestImpl.java:120)
>     at 
> org.apache.phoenix.transaction.OmidTransactionContext.(OmidTransactionContext.java:60)
>     at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionContext(OmidTransactionProvider.java:65)
>     at 
> org.apache.phoenix.execute.MutationState.startTransaction(MutationState.java:408)
>     at 
> org.apache.phoenix.util.TransactionUtil.getTableTimestamp(TransactionUtil.java:124)
>     at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2307)
>     at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1023)
>     at 
> org.apache.phoenix.compile.CreateTableCompiler$CreateTableMutationPlan.execute(CreateTableCompiler.java:421)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:559)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:525)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:524)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:512)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2206)
>     at 
> org.apache.phoenix.end2end.TransactionalViewIT.testInvalidRowsWithStats(TransactionalViewIT.java:108)
>     ...
> {noformat}
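The failing logic boils down to selecting the first interface whose name matches an `en*`/`eth*` prefix. A simplified Python sketch of that selection (not Omid's actual code):

```python
def pick_default_interface(names, prefixes=("en", "eth")):
    """Return the first interface name starting with an allowed prefix,
    mimicking a NetworkUtils.getDefaultNetworkInterface-style lookup."""
    for name in names:
        if name.startswith(prefixes):
            return name
    raise ValueError("No network 'en*'/'eth*' interfaces found")

# Classic and systemd-style names both match...
print(pick_default_interface(["lo", "eth0"]))    # eth0
print(pick_default_interface(["lo", "enp0s3"]))  # enp0s3
# ...but a host exposing only e.g. 'lo' and 'docker0' would raise,
# which is exactly the failure mode seen in the test run above.
```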





[jira] [Closed] (OMID-236) Upgrade Netty to 4.1.86.Final

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-236.


> Upgrade Netty to 4.1.86.Final
> -
>
> Key: OMID-236
> URL: https://issues.apache.org/jira/browse/OMID-236
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 1.1.1
>
>
> Netty 4.1.86.Final fixes the following CVEs:
> CVE-2022-41915,
> CVE-2022-41881
> Upgrade to the latest version.





[jira] [Closed] (OMID-242) Bump guice version to 5.1.0 to support JDK 17

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-242.


> Bump guice version to 5.1.0 to support JDK 17
> -
>
> Key: OMID-242
> URL: https://issues.apache.org/jira/browse/OMID-242
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 1.1.1
>
>






[jira] [Closed] (OMID-264) Fix deprecated WARNING in check-license stage

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-264.


> Fix deprecated WARNING in check-license stage 
> --
>
> Key: OMID-264
> URL: https://issues.apache.org/jira/browse/OMID-264
> Project: Phoenix Omid
>  Issue Type: Improvement
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
> Fix For: 1.1.1
>
>
> Post the OMID-251 bump, we are getting deprecation WARNINGs in the build.
> {code:java}
> [INFO] --- license:4.3:check (check-license) @ omid-hbase-tools ---
> [WARNING]  Parameter 'legacyConfigExcludes' (user property 
> 'license.excludes') is deprecated: use LicenseSet.excludes
> [WARNING]  Parameter 'legacyConfigHeader' (user property 'license.header') is 
> deprecated: use LicenseSet.header
> [WARNING]  Parameter 'legacyConfigIncludes' (user property 
> 'license.includes') is deprecated: use LicenseSet.includes
> [INFO] Checking licenses...
> {code}
> This JIRA is to fix/remove usage of deprecated property.





[jira] [Closed] (OMID-244) Upgrade SnakeYaml version to 2.0

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-244.


> Upgrade SnakeYaml version to 2.0
> 
>
> Key: OMID-244
> URL: https://issues.apache.org/jira/browse/OMID-244
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 1.1.1
>
>






[jira] [Closed] (OMID-256) Bump hbase and other dependencies to latest version

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-256.


> Bump hbase and other dependencies to latest version
> ---
>
> Key: OMID-256
> URL: https://issues.apache.org/jira/browse/OMID-256
> Project: Phoenix Omid
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>
> This Jira will bump the following:
> |*Property*|*From*|*To*|
> |hbase.version|2.4.13|2.4.17|
> |log4j2.version|2.18.0|2.21.0|
> |junit.version|4.13.1|4.13.2|
> |commons-lang3.version|3.12.0|3.13.0|
>  





[jira] [Closed] (OMID-275) Expose backing HBase Table from TTable

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-275.


> Expose backing HBase Table from TTable
> --
>
> Key: OMID-275
> URL: https://issues.apache.org/jira/browse/OMID-275
> Project: Phoenix Omid
>  Issue Type: Improvement
>Affects Versions: 1.1.1
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.1
>
>
> HBase 3.0 removes the HTableDescriptor class.
> However, OmidTransactionTable in Phoenix needs to implement 
> getTableDescriptor() when built for HBase 2.
> The way to break this dependency is to expose the backing HBase Table object 
> from TTable. That way, we can still obtain the table descriptor from it for 
> HBase 2.
> If we implement this for 1.1.1, then we can use this new API for 5.1.4 (and 
> possibly 5.2.0), which would hopefully let them be built with future Omid 
> releases.





[jira] [Closed] (OMID-251) Bump license-maven-plugin to latest version

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-251.


> Bump license-maven-plugin to latest version
> ---
>
> Key: OMID-251
> URL: https://issues.apache.org/jira/browse/OMID-251
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
> Attachments: out_v2.11.txt, out_v4.3.txt
>
>
> In phoenix-omid pom.xml, {{maven-license-plugin.version}} is set to 
> {{{}2.11{}}}, which was last updated 5 years ago. The plugin 
> {{com.mycila:license-maven-plugin}} pulls in a log4j-1.2.x jar.
> A sample from a run of {{mvn license:check}} with {{2.11}} follows:
> {code:java}
> Downloading from central: 
> [https://repo.maven.apache.org/maven2/log4j/log4j/1.2.12/log4j-1.2.12.jar]
> {code}
> In my org, when trying to build phoenix-omid, the build fails because 
> {{log4j:log4j:1.2.x}} is strictly banned in the internal artifactory.
> The goal of this JIRA is to bump the aforementioned plugin to the latest 
> version, i.e. 
> [4.3|https://mvnrepository.com/artifact/com.mycila/license-maven-plugin], 
> which does not pull the log4j:log4j jar.
> Full run logs, for reference, of the {{mvn license:check}} command after 
> clearing {{~/.m2/repository}} with each version:
>  * {{{}2.11{}}}: [^out_v2.11.txt], which pulls the {{log4j-1.2.x}} jar.
>  * {{{}4.3{}}}: [^out_v4.3.txt], which does not pull the {{log4j-1.2.x}} jar.





[jira] [Closed] (OMID-254) Upgrade to phoenix-thirdparty 2.1.0

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-254.


> Upgrade to phoenix-thirdparty 2.1.0
> ---
>
> Key: OMID-254
> URL: https://issues.apache.org/jira/browse/OMID-254
> Project: Phoenix Omid
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>
> Phoenix-thirdparty has been released, see 
> [https://www.mail-archive.com/user@phoenix.apache.org/msg08204.html]
> {quote}The recent release has upgraded Guava to version 32.1.3-jre from the 
> previous 31.0.1-android version. Initially, the 4.x branch maintained 
> compatibility with Java 7, necessitating the use of the Android variant of 
> Guava. However, with the end-of-life (EOL) status of the 4.x branch, the move 
> to the standard JRE version of Guava signifies a shift in compatibility 
> standards
> {quote}
> It's time we bump up.





[jira] [Closed] (OMID-249) Improve default network address logic

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-249.


> Improve default network address logic
> -
>
> Key: OMID-249
> URL: https://issues.apache.org/jira/browse/OMID-249
> Project: Phoenix Omid
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.1
>
>
> NetworkUtils.getNetworkInterface() seems to be only used for determining the 
> host and port when registering TSO to ZK for HA.
> We should get the TSO public IP from the ZK TCP connection on demand, and not 
> worry about the default network interface at all.
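The "get the TSO public IP from the ZK TCP connection" idea corresponds to asking an already-connected socket for its local endpoint, rather than guessing an interface. A hedged sketch of the technique (any host/port pair works; the ZK address would just be the natural peer here):

```python
import socket

def local_address_towards(host: str, port: int) -> str:
    """Return the local IP the OS routes through to reach (host, port),
    by reading the local endpoint of a connected TCP socket."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))
        # getsockname() is (local_ip, local_port); the IP is the address
        # peers on that network path can actually reach us at.
        return s.getsockname()[0]
```

This avoids the interface-name heuristics entirely: whatever route the kernel picked for the ZK connection determines the published address.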





[jira] [Closed] (OMID-247) Change TSO default port to be outside the ephemeral range

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-247.


> Change TSO default port to be outside the ephemeral range
> -
>
> Key: OMID-247
> URL: https://issues.apache.org/jira/browse/OMID-247
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
> Fix For: 1.1.1
>
>
> The default TSO port, 54758, is in the ephemeral port range of every OS.
> This can cause the TSO server to randomly error out on startup with an error 
> similar to the following:
> {noformat}
> Exception in thread "Thread-5" java.lang.IllegalStateException: Expected the 
> service TSOServer [FAILED] to be TERMINATED, but the service has FAILED
>         at 
> org.apache.phoenix.thirdparty.com.google.common.util.concurrent.AbstractService.checkCurrentState(AbstractService.java:366)
>         at 
> org.apache.phoenix.thirdparty.com.google.common.util.concurrent.AbstractService.awaitTerminated(AbstractService.java:329)
>         at 
> org.apache.phoenix.thirdparty.com.google.common.util.concurrent.AbstractIdleService.awaitTerminated(AbstractIdleService.java:175)
>         at org.apache.omid.tso.TSOServer$2.run(TSOServer.java:137)
> Caused by: org.apache.omid.tso.LeaseManagement$LeaseManagementException: 
> Error initializing Lease Manager
>         at 
> org.apache.omid.tso.VoidLeaseManager.startService(VoidLeaseManager.java:38)
>         at org.apache.omid.tso.TSOServer.startUp(TSOServer.java:102)
>         at 
> org.apache.phoenix.thirdparty.com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:60)
>         at 
> org.apache.phoenix.thirdparty.com.google.common.util.concurrent.Callables$4.run(Callables.java:119)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.BindException: Address already in use
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:433)
>         at sun.nio.ch.Net.bind(Net.java:425)
>         at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at 
> io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
>         at 
> io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550)
>         at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
>         at 
> io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
>         at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248)
>         at 
> io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)
>         at 
> io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
>         at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
>         at 
> io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>         at 
> io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         ... 1 more{noformat}
> The only way to fix this is to change the default port number to be outside 
> the ephemeral range.
> Anything under 32K seems to be safe.
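For reference, the Linux default ephemeral range is 32768-60999 (`net.ipv4.ip_local_port_range`), and macOS defaults to 49152-65535, so 54758 sits inside both; hence the "anything under 32K" rule of thumb. A trivial check:

```python
def in_ephemeral_range(port, low=32768, high=60999):
    """True if `port` falls inside the given ephemeral port range
    (defaults are the Linux defaults)."""
    return low <= port <= high

print(in_ephemeral_range(54758))  # True: the old TSO default can collide
print(in_ephemeral_range(24758))  # False: ports under 32K avoid the range
```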





[jira] [Closed] (OMID-239) OMID TLS support

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-239.


> OMID TLS support
> 
>
> Key: OMID-239
> URL: https://issues.apache.org/jira/browse/OMID-239
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 1.1.1
>
>






[jira] [Closed] (OMID-273) TestTSOClientConnectionToTSO fails on Mac due to IPv6 connection failure

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-273.


> TestTSOClientConnectionToTSO fails on Mac due to IPv6 connection failure
> 
>
> Key: OMID-273
> URL: https://issues.apache.org/jira/browse/OMID-273
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Richárd Antal
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.1
>
>
> During the OMID 1.1.1 RC0 release we found that TestTSOClientConnectionToTSO 
> had failing tests:
> testSuccessfulConnectionToTSOThroughZK and
> testSuccessOfTSOClientReconnectionsToARestartedTSOWithZKPublishing
> It might be because the test uses IPv6:
> 2024-01-17T15:17:45,858 INFO 
> [TestNGInvoker-testSuccessfulConnectionToTSOThroughZK()] client.TSOClient: * 
> Current TSO host:port found in ZK: [fe80:0:0:0:aede:48ff:fe00:1122%en5]:52934 
> Epoch 0
> 2024-01-17T15:17:45,859 INFO  [tsofsm-0] client.TSOClient: Trying to
> connect to TSO [/fe80:0:0:0:aede:48ff:fe00:1122%en5:52934]
> 2024-01-17T15:17:45,863 ERROR [tsoclient-worker-0] client.TSOClient: Failed 
> connection attempt to TSO [/fe80:0:0:0:aede:48ff:fe00:1122%en5:52934] failed. 
> Channel [id: 0x9d8c0b44] 
> I've checked this en5 interface, and it has only IPv6 and no IPv4 addresses.
> I ran a git bisect, but it wasn't very useful. Before OMID-248, 
> TestTSOClientConnectionToTSO used IPv4; after it, it uses IPv6 and 
> fails on my Mac.





[jira] [Closed] (OMID-257) Upgrade bouncycastle and move from jdk15on to latest jdk18on

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-257.


> Upgrade bouncycastle and move from jdk15on to latest jdk18on
> 
>
> Key: OMID-257
> URL: https://issues.apache.org/jira/browse/OMID-257
> Project: Phoenix Omid
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>
> Omid has a test dependency on BouncyCastle 1.60, which is vulnerable to the 
> following CVEs:
>  * 
> [CVE-2023-33201|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-33201]
>  * 
> [CVE-2020-26939|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26939]
>  * 
> [CVE-2020-15522|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15522]
> Latest being, 
> [CVE-2023-33201|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-33201]
>  with advisory: [https://github.com/bcgit/bc-java/wiki/CVE-2023-33201]
> This JIRA's goal is to fix the following:
>  * Upgrade to v1.76, the latest version.
>  ** This requires  bcprov-jdk15on to be replaced with bcprov-jdk18on
>  ** See [https://www.bouncycastle.org/latest_releases.html]
>  *** 
> {quote}*Java Version Details* With the arrival of Java 15. jdk15 is not quite 
> as unambiguous as it was. The *jdk18on* jars are compiled to work with 
> *anything* from Java 1.8 up. They are also multi-release jars so do support 
> some features that were introduced in Java 9, Java 11, and Java 15. If you 
> have issues with multi-release jars see the jdk15to18 release jars below.
> *Packaging Change (users of 1.70 or earlier):* BC 1.71 changed the jdk15on 
> jars to jdk18on so the base has now moved to Java 8. For earlier JVMs, or 
> containers/applications that cannot cope with multi-release jars, you should 
> now use the jdk15to18 jars.
> {quote}
>  * Exclude bcprov-jdk15on from everywhere else to avoid conflicts with 
> bcprov-jdk18on





[jira] [Closed] (OMID-237) TestHBaseTransactionClient.testReadCommitTimestampFromCommitTable fails

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed OMID-237.


> TestHBaseTransactionClient.testReadCommitTimestampFromCommitTable fails
> ---
>
> Key: OMID-237
> URL: https://issues.apache.org/jira/browse/OMID-237
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 1.1.1
>
>
> On my machine, 
> TestHBaseTransactionClient.testReadCommitTimestampFromCommitTable failed in 3 
> out of 3 runs.
> {noformat}
> [ERROR] Tests run: 95, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 341.56 s <<< FAILURE! - in TestSuite
> [ERROR] 
> testReadCommitTimestampFromCommitTable(org.apache.omid.transaction.TestHBaseTransactionClient)
>   Time elapsed: 0.025 s  <<< FAILURE!
> java.lang.AssertionError: expected [false] but found [true]
>     at org.testng.Assert.fail(Assert.java:94)
>     at org.testng.Assert.failNotEquals(Assert.java:513)
>     at org.testng.Assert.assertFalse(Assert.java:63)
>     at org.testng.Assert.assertFalse(Assert.java:73)
>     at 
> org.apache.omid.transaction.TestHBaseTransactionClient.testReadCommitTimestampFromCommitTable(TestHBaseTransactionClient.java:144)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
>     at 
> org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:54)
>     at 
> org.testng.internal.InvokeMethodRunnable.run(InvokeMethodRunnable.java:44)
>     at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:829){noformat}
> I don't see this failure on the CI, so this is possibly a timing issue 
> related to host performance (or maybe the CI tests are just failing 
> earlier).





[jira] [Updated] (OMID-206) Half of the regions of commit table not getting used

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-206:
-
Fix Version/s: 1.1.2
   (was: 1.1.1)

> Half of the regions of commit table not getting used
> 
>
> Key: OMID-206
> URL: https://issues.apache.org/jira/browse/OMID-206
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 1.1.2
>
> Attachments: Screen Shot 2021-03-30 at 11.32.54 PM.png
>
>
> Please find the attached image: only half of the regions are receiving load; 
> the remaining half are not getting even a single request.





[jira] [Updated] (OMID-190) Update website for 1.0.2

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-190:
-
Fix Version/s: 1.1.2
   (was: 1.1.1)

> Update website for 1.0.2
> 
>
> Key: OMID-190
> URL: https://issues.apache.org/jira/browse/OMID-190
> Project: Phoenix Omid
>  Issue Type: Improvement
>Affects Versions: 1.0.2
>Reporter: Istvan Toth
>Priority: Major
> Fix For: 1.1.2
>
>
> The site repo URL has changed, and the download links point to the old repo 
> and release dirs.
>  





[jira] [Updated] (OMID-195) Add security system tests

2024-02-08 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-195:
-
Fix Version/s: 1.1.2
   (was: 1.1.1)

> Add security system tests
> -
>
> Key: OMID-195
> URL: https://issues.apache.org/jira/browse/OMID-195
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 1.1.2
>
>
> Currently there are not many system tests covering functionality when 
> security is enabled. This JIRA is to add such tests with security enabled.





[jira] [Moved] (PHOENIX-7201) Support LEFT, RIGHT, STRIP, DIGITS, CHR, DAYS operators as built in functions

2024-02-07 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla moved HBASE-28351 to PHOENIX-7201:
--

Key: PHOENIX-7201  (was: HBASE-28351)
Project: Phoenix  (was: HBase)

> Support LEFT, RIGHT, STRIP, DIGITS, CHR, DAYS operators as built in functions
> -
>
> Key: PHOENIX-7201
> URL: https://issues.apache.org/jira/browse/PHOENIX-7201
> Project: Phoenix
>  Issue Type: Improvement
> Environment: strong text
>Reporter: Nikita Pande
>Priority: Major
>
> While validating Phoenix against existing databases in our organisation, we 
> identified a few gaps with respect to built-in functions.
> 1. LEFT: [https://www.ibm.com/docs/en/db2-for-zos/12?topic=functions-left]
> *Description*: The LEFT function returns a string that consists of the 
> specified number of 
> leftmost bytes of the specified string units.
> *Example*: Assume that host variable ALPHA has a value of 'ABCDEF'. The 
> following 
>  statement returns '*ABC*'
>  {code:java}
>  SELECT LEFT(:ALPHA,3) FROM SYSIBM.SYSDUMMY1;
>  {code}
> 2. RIGHT: [https://www.ibm.com/docs/en/db2-for-zos/12?topic=functions-right]
> *Description*: The RIGHT function returns a string that consists of the 
> specified number 
>  of rightmost bytes or specified string unit from a string.
> *Example*: Assume that host variable ALPHA has a value of 'ABCDEF'. The 
> following 
>  statement returns the value '*DEF*', which are the three rightmost 
> characters in ALPHA
>  {code:java}
>  SELECT RIGHT(ALPHA,3) FROM SYSIBM.SYSDUMMY1;
>  {code}
> 3. STRIP: [https://www.ibm.com/docs/en/db2-for-zos/12?topic=functions-strip]
> *Description*: The STRIP function removes blanks or another specified 
> character from 
>  the end, the beginning, or both ends of a string expression.
> *Example*: Remove a specific character from a string; o/p is *Hello World*
>  {code:java}
>  SELECT STRIP('---Hello World---', B, '-') AS StrippedString FROM 
> SYSIBM.SYSDUMMY1;
>  {code}
> 4. DIGITS: [https://www.ibm.com/docs/en/db2-for-zos/12?topic=functions-digits]
> *Description*: The DIGITS function returns a character string 
> representation of the 
>  absolute value of a number.
>  Example: Assume that COLUMNX has the data type DECIMAL(6,2), and that 
> one of its 
>  values is *-6.28*. For this value, the following statement returns the 
> value *'000628'.*
>  {code:java}
>  DIGITS(COLUMNX)
>  {code}
>  
> 5. CHR: [https://www.ibm.com/docs/en/db2-for-zos/12?topic=functions-chr]
> *Description*: The CHR function returns the character that has the ASCII 
> code value that 
>  is specified by the argument.
>  Example: Set :hv with the Euro symbol "€" in CCSID 923:
>  {code:java}
>  SET :hv = CHR(164);  -- x'A4'
>  {code}  
> 6. DAYS: [https://www.ibm.com/docs/en/db2-for-zos/12?topic=functions-days]
>  *Description*: The DAYS function converts each date to a number (the 
> number of days 
>  since '0001-01-01'); subtracting these numbers gives the number of days 
>  between the two dates. The o/p is *364*, since 2022 is not a leap year.
>   *Example*:
>   {code:java}
>SELECT (DAYS('2022-12-31') - DAYS('2022-01-01')) AS days_difference
>FROM sysibm.sysdummy1;
>   {code}  
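To make the requested semantics concrete, here is a rough Python sketch of each function's behavior as described above (illustrative only; an actual Phoenix implementation would be Java built-in functions, and DB2's string-unit and CCSID handling is elided):

```python
from datetime import date
from decimal import Decimal

def left(s, n):        # LEFT('ABCDEF', 3) -> 'ABC'
    return s[:n]

def right(s, n):       # RIGHT('ABCDEF', 3) -> 'DEF'
    return s[-n:] if n else ''

def strip_(s, ch=' '): # STRIP('---Hello World---', BOTH, '-') -> 'Hello World'
    return s.strip(ch)

def digits(value, precision, scale):
    # DIGITS on DECIMAL(6,2) value -6.28 -> '000628': absolute value, no
    # sign or decimal point, zero-padded to the declared precision.
    unscaled = int(abs(value) * (10 ** scale))
    return str(unscaled).zfill(precision)

def chr_(code):
    # CHR maps a code value to a character; note DB2 maps via the CCSID
    # (e.g. x'A4' is the Euro sign in CCSID 923), while Python uses Unicode.
    return chr(code)

def days(d):           # days since '0001-01-01', counting that day as 1
    return (date.fromisoformat(d) - date(1, 1, 1)).days + 1

print(left('ABCDEF', 3), right('ABCDEF', 3))    # ABC DEF
print(strip_('---Hello World---', '-'))         # Hello World
print(digits(Decimal('-6.28'), 6, 2))           # 000628
print(days('2022-12-31') - days('2022-01-01'))  # 364
```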





[jira] [Created] (PHOENIX-7198) support for multi row constructors in single upsert query

2024-02-07 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7198:


 Summary: support for multi row constructors in single upsert query
 Key: PHOENIX-7198
 URL: https://issues.apache.org/jira/browse/PHOENIX-7198
 Project: Phoenix
  Issue Type: New Feature
Reporter: Rajeshbabu Chintaguntla


Multi-row constructors in an INSERT query are part of the ANSI SQL:2003 
standard. They allow you to insert multiple rows with a single statement, as 
shown in the example below:

INSERT INTO table_name (column1, column2, column3, ...)
VALUES 
(value1, value2, value3, ...),
(value1, value2, value3, ...),
...

It would be better to support this in Phoenix's UPSERT statement as well. 
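Until native support lands, a client can emulate multi-row semantics by expanding the row constructors into individual parameterized UPSERTs. A hedged sketch (table and column names are made up for illustration):

```python
def expand_multi_row_upsert(table, columns, rows):
    """Expand multi-row VALUES into one parameterized UPSERT per row,
    as a client-side stand-in for native multi-row constructor support."""
    placeholders = ', '.join('?' * len(columns))
    sql = (f"UPSERT INTO {table} ({', '.join(columns)}) "
           f"VALUES ({placeholders})")
    return [(sql, row) for row in rows]

stmts = expand_multi_row_upsert("T", ["ID", "NAME"], [(1, "a"), (2, "b")])
for sql, params in stmts:
    print(sql, params)
# UPSERT INTO T (ID, NAME) VALUES (?, ?) (1, 'a')
# UPSERT INTO T (ID, NAME) VALUES (?, ?) (2, 'b')
```

With JDBC these would naturally go through `addBatch()`/`executeBatch()`; native multi-row constructors would let the server see all rows in one statement instead.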






[jira] [Assigned] (PHOENIX-7197) PhoenixMRJobSubmitter is failing with non-ha yarn cluster

2024-02-07 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla reassigned PHOENIX-7197:


Assignee: Anchal Kejriwal  (was: Rajeshbabu Chintaguntla)

> PhoenixMRJobSubmitter is failing with non-ha yarn cluster
> -
>
> Key: PHOENIX-7197
> URL: https://issues.apache.org/jira/browse/PHOENIX-7197
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Anchal Kejriwal
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> Currently PhoenixMRJobSubmitter expects YARN HA to be enabled; otherwise it 
> fails:
> {noformat}
> 2024-02-07 06:01:31,942 INFO  [main] zookeeper.ZooKeeper: Session: 
> 0x100293e630a0088 closed
> Exception in thread "main" 2024-02-07 06:01:31,942 INFO  [main-EventThread] 
> zookeeper.ClientCnxn: EventThread shut down for session: 0x100293e630a0088
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /yarn-leader-election
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>   at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2589)
>   at 
> org.apache.phoenix.util.PhoenixMRJobUtil.getActiveResourceManagerAddress(PhoenixMRJobUtil.java:103)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.getSubmittedYarnApps(PhoenixMRJobSubmitter.java:305)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.scheduleIndexBuilds(PhoenixMRJobSubmitter.java:251)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.main(PhoenixMRJobSubmitter.java:332)
> {noformat}





[jira] [Created] (PHOENIX-7197) PhoenixMRJobSubmitter is failing in non-ha cluster

2024-02-07 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7197:


 Summary: PhoenixMRJobSubmitter is failing in non-ha cluster
 Key: PHOENIX-7197
 URL: https://issues.apache.org/jira/browse/PHOENIX-7197
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.3
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.2.0, 5.1.4


Currently, starting PhoenixMRJobSubmitter expects YARN HA to be enabled; 
otherwise it fails:
{noformat}
2024-02-07 06:01:31,942 INFO  [main] zookeeper.ZooKeeper: Session: 
0x100293e630a0088 closed

Exception in thread "main" 2024-02-07 06:01:31,942 INFO  [main-EventThread] 
zookeeper.ClientCnxn: EventThread shut down for session: 0x100293e630a0088

org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /yarn-leader-election

at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)

at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)

at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2589)

at 
org.apache.phoenix.util.PhoenixMRJobUtil.getActiveResourceManagerAddress(PhoenixMRJobUtil.java:103)

at 
org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.getSubmittedYarnApps(PhoenixMRJobSubmitter.java:305)

at 
org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.scheduleIndexBuilds(PhoenixMRJobSubmitter.java:251)

at 
org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.main(PhoenixMRJobSubmitter.java:332)

{noformat}
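The failure above comes from unconditionally reading the /yarn-leader-election znode, which exists only on HA clusters. A hedged sketch of the fix (hypothetical Python, not Phoenix's actual Java code): consult ZooKeeper only when YARN HA is enabled, otherwise fall back to the statically configured ResourceManager address. The configuration keys are standard YARN keys, but the helper and its wiring are assumptions for illustration:

```python
# Hypothetical sketch: resolve the active ResourceManager address without
# assuming an HA cluster. zk_children_of stands in for a ZooKeeper lookup.
def resolve_rm_address(conf, zk_children_of):
    if conf.get("yarn.resourcemanager.ha.enabled", "false").lower() == "true":
        # HA cluster: discover the active RM via the leader-election znode
        # (the real code inspects its children; stubbed here).
        leaders = zk_children_of("/yarn-leader-election")
        return leaders[0]
    # Non-HA cluster: no /yarn-leader-election znode exists, so use the
    # configured webapp address directly instead of failing with NoNode.
    return conf["yarn.resourcemanager.webapp.address"]

non_ha_conf = {"yarn.resourcemanager.webapp.address": "rm-host:8088"}
print(resolve_rm_address(non_ha_conf, lambda path: []))  # rm-host:8088
```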





[jira] [Updated] (PHOENIX-7197) PhoenixMRJobSubmitter is failing with non-ha yarn cluster

2024-02-07 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7197:
-
Summary: PhoenixMRJobSubmitter is failing with non-ha yarn cluster  (was: 
PhoenixMRJobSubmitter is failing in non-ha cluster)

> PhoenixMRJobSubmitter is failing with non-ha yarn cluster
> -
>
> Key: PHOENIX-7197
> URL: https://issues.apache.org/jira/browse/PHOENIX-7197
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> Currently, starting PhoenixMRJobSubmitter expects YARN HA to be enabled; 
> otherwise it fails:
> {noformat}
> 2024-02-07 06:01:31,942 INFO  [main] zookeeper.ZooKeeper: Session: 
> 0x100293e630a0088 closed
> Exception in thread "main" 2024-02-07 06:01:31,942 INFO  [main-EventThread] 
> zookeeper.ClientCnxn: EventThread shut down for session: 0x100293e630a0088
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /yarn-leader-election
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>   at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2589)
>   at 
> org.apache.phoenix.util.PhoenixMRJobUtil.getActiveResourceManagerAddress(PhoenixMRJobUtil.java:103)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.getSubmittedYarnApps(PhoenixMRJobSubmitter.java:305)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.scheduleIndexBuilds(PhoenixMRJobSubmitter.java:251)
>   at 
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter.main(PhoenixMRJobSubmitter.java:332)
> {noformat}





[jira] [Resolved] (PHOENIX-7187) Improvement of Integration test case with Explain Plan for Partial Index

2024-01-31 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-7187.
--
Resolution: Fixed

Merged the PR. Thanks for the patch [~nikitapande] and review [~tkhurana].

> Improvement of Integration test case with Explain Plan for Partial Index
> 
>
> Key: PHOENIX-7187
> URL: https://issues.apache.org/jira/browse/PHOENIX-7187
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Major
> Fix For: 5.2.0
>
>
> Currently in partialIndexIT.java, it is only validated that the schema name 
> and table name of the index table in the PhoenixResultSet context match the 
> expected schema name and table name.
> Explain Plan validation for the partial index does not exist.





[jira] [Updated] (PHOENIX-7187) Improvement of Integration test case with Explain Plan for Partial Index

2024-01-31 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7187:
-
Fix Version/s: 5.2.0

> Improvement of Integration test case with Explain Plan for Partial Index
> 
>
> Key: PHOENIX-7187
> URL: https://issues.apache.org/jira/browse/PHOENIX-7187
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Major
> Fix For: 5.2.0
>
>
> Currently in partialIndexIT.java, it is only validated that the schema name 
> and table name of the index table in the PhoenixResultSet context match the 
> expected schema name and table name.
> Explain Plan validation for the partial index does not exist.





[jira] [Assigned] (PHOENIX-4362) Add documentation on Phoenix ACLs.

2024-01-29 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla reassigned PHOENIX-4362:


Assignee: Nikita Pande  (was: Ankit Singhal)

> Add documentation on Phoenix ACLs.
> --
>
> Key: PHOENIX-4362
> URL: https://issues.apache.org/jira/browse/PHOENIX-4362
> Project: Phoenix
>  Issue Type: Task
>Reporter: Ankit Singhal
>Assignee: Nikita Pande
>Priority: Major
>






[jira] [Resolved] (PHOENIX-7156) Run integration tests based on @Category

2024-01-09 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-7156.
--
  Assignee: Nikita Pande
Resolution: Fixed

Committed to 5.1 branch.

> Run integration tests based on @Category
> 
>
> Key: PHOENIX-7156
> URL: https://issues.apache.org/jira/browse/PHOENIX-7156
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> Following are the categories described in integration tests of phoenix.
> * ParallelStatsEnabledTests
> * ParallelStatsDisabledTests
> * NeedTheirOwnClusterTests
> Currently all of them execute without an option to choose which one to run. 
> To enable this we should make it configurable.





[jira] [Updated] (PHOENIX-7156) Run integration tests based on @Category

2024-01-09 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7156:
-
Fix Version/s: 5.1.4

> Run integration tests based on @Category
> 
>
> Key: PHOENIX-7156
> URL: https://issues.apache.org/jira/browse/PHOENIX-7156
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nikita Pande
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> Following are the categories described in integration tests of phoenix.
> * ParallelStatsEnabledTests
> * ParallelStatsDisabledTests
> * NeedTheirOwnClusterTests
> Currently all of them execute without an option to choose which one to run. 
> To enable this we should make it configurable.





[jira] [Updated] (PHOENIX-7156) Run integration tests based on @Category

2024-01-09 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7156:
-
Fix Version/s: 5.2.0

> Run integration tests based on @Category
> 
>
> Key: PHOENIX-7156
> URL: https://issues.apache.org/jira/browse/PHOENIX-7156
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nikita Pande
>Priority: Major
> Fix For: 5.2.0
>
>
> Following are the categories described in integration tests of phoenix.
> * ParallelStatsEnabledTests
> * ParallelStatsDisabledTests
> * NeedTheirOwnClusterTests
> Currently all of them execute without an option to choose which one to run. 
> To enable this we should make it configurable.





[jira] [Updated] (OMID-252) Analyse and fix possible vulnerabilities for 1.1.1 release

2024-01-03 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-252:
-
Fix Version/s: 1.1.1

> Analyse and fix possible vulnerabilities for 1.1.1 release
> --
>
> Key: OMID-252
> URL: https://issues.apache.org/jira/browse/OMID-252
> Project: Phoenix Omid
>  Issue Type: Task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>
> Here I will try to analyse any vulnerabilities which can be fixed before the 
> 1.1.1 release. Will create sub-tasks as I identify them!
> CC: [~stoty] [~rajeshbabu] [~vjasani] 





[jira] [Resolved] (OMID-254) Upgrade to phoenix-thirdparty 2.1.0

2024-01-03 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved OMID-254.
--
Fix Version/s: 1.1.1
   Resolution: Fixed

Committed to master branch. Thanks for the patch [~nihaljain.cs] and review 
[~stoty].

> Upgrade to phoenix-thirdparty 2.1.0
> ---
>
> Key: OMID-254
> URL: https://issues.apache.org/jira/browse/OMID-254
> Project: Phoenix Omid
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>
> Phoenix-thirdparty has been released, see 
> [https://www.mail-archive.com/user@phoenix.apache.org/msg08204.html]
> {quote}The recent release has upgraded Guava to version 32.1.3-jre from the 
> previous 31.0.1-android version. Initially, the 4.x branch maintained 
> compatibility with Java 7, necessitating the use of the Android variant of 
> Guava. However, with the end-of-life (EOL) status of the 4.x branch, the move 
> to the standard JRE version of Guava signifies a shift in compatibility 
> standards
> {quote}
> It's time we bump up.





[jira] [Closed] (PHOENIX-6992) Upgrade Guava to 32.1.1

2023-12-21 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed PHOENIX-6992.


> Upgrade Guava to 32.1.1
> ---
>
> Key: PHOENIX-6992
> URL: https://issues.apache.org/jira/browse/PHOENIX-6992
> Project: Phoenix
>  Issue Type: Task
>  Components: thirdparty
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: thirdparty-2.1.0
>
>






[jira] [Closed] (PHOENIX-7142) Bump phoenix-thirdparty version to 2.1

2023-12-21 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed PHOENIX-7142.


> Bump phoenix-thirdparty version to 2.1
> --
>
> Key: PHOENIX-7142
> URL: https://issues.apache.org/jira/browse/PHOENIX-7142
> Project: Phoenix
>  Issue Type: Task
>  Components: thirdparty
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: thirdparty-2.1.0
>
>
> Changing the Guava variant is a relatively big change.
> Bump the minor version to 2.1





[jira] [Closed] (PHOENIX-7080) Switch phoenix-thirdparty to guava-jre from guava-android

2023-12-21 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla closed PHOENIX-7080.


> Switch phoenix-thirdparty to guava-jre from guava-android
> -
>
> Key: PHOENIX-7080
> URL: https://issues.apache.org/jira/browse/PHOENIX-7080
> Project: Phoenix
>  Issue Type: Improvement
>  Components: thirdparty
>Affects Versions: thirdparty-2.0.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: thirdparty-2.1.0
>
>
> As discussed in mail thread, here, we want to switch to guava-jre from 
> guava-android for phoenix thirdparty
> Some more context on why we want to do this (Copied from PHOENIX-6817) is as 
> below:
> {quote}We chose to include the -android variant of Guava, to ensure 
> compatibility with Java 7, which was required by the 4.x branch.
> Now that the 4.x branch is EOL, we can switch to the more standard -jre 
> version.
> {quote}
> CC: [~stoty] [~rajeshbabu] [~vjasani] 





[jira] [Resolved] (OMID-250) Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml

2023-12-14 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved OMID-250.
--
Resolution: Fixed

Committed to master. Thanks for the patch [~anchalk1]. Congratulations on 
your first commit.

> Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml
> -
>
> Key: OMID-250
> URL: https://issues.apache.org/jira/browse/OMID-250
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Anchal Kejriwal
>Priority: Trivial
> Fix For: 1.1.1
>
>
> hadoop-hdfs-client dependency is declared two times in main pom.
> {noformat}
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs-client</artifactId>
>     <version>${hadoop.version}</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs-client</artifactId>
>     <version>${hadoop.version}</version>
> </dependency>
> {noformat}
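For reference, the fix is simply to keep a single declaration; a deduplicated entry would look like this (standard Maven dependency XML, reconstructed since the archive stripped the tags):

```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs-client</artifactId>
    <version>${hadoop.version}</version>
</dependency>
```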





[jira] [Assigned] (OMID-250) Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml

2023-12-14 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla reassigned OMID-250:


Assignee: Anchal Kejriwal  (was: Rajeshbabu Chintaguntla)

> Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml
> -
>
> Key: OMID-250
> URL: https://issues.apache.org/jira/browse/OMID-250
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Anchal Kejriwal
>Priority: Trivial
> Fix For: 1.1.1
>
>
> hadoop-hdfs-client dependency is declared two times in main pom.
> {noformat}
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs-client</artifactId>
>     <version>${hadoop.version}</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs-client</artifactId>
>     <version>${hadoop.version}</version>
> </dependency>
> {noformat}





[jira] [Updated] (PHOENIX-6992) Upgrade Guava to 32.1.1

2023-12-12 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-6992:
-
Fix Version/s: thirdparty-2.1.0
   (was: thirdparty-2.0.1)

> Upgrade Guava to 32.1.1
> ---
>
> Key: PHOENIX-6992
> URL: https://issues.apache.org/jira/browse/PHOENIX-6992
> Project: Phoenix
>  Issue Type: Task
>  Components: thirdparty
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: thirdparty-2.1.0
>
>






[jira] [Updated] (PHOENIX-7080) Switch phoenix-thirdparty to guava-jre from guava-android

2023-12-12 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7080:
-
Fix Version/s: thirdparty-2.1.0
   (was: thirdparty-2.0.1)

> Switch phoenix-thirdparty to guava-jre from guava-android
> -
>
> Key: PHOENIX-7080
> URL: https://issues.apache.org/jira/browse/PHOENIX-7080
> Project: Phoenix
>  Issue Type: Improvement
>  Components: thirdparty
>Affects Versions: thirdparty-2.0.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: thirdparty-2.1.0
>
>
> As discussed in mail thread, here, we want to switch to guava-jre from 
> guava-android for phoenix thirdparty
> Some more context on why we want to do this (Copied from PHOENIX-6817) is as 
> below:
> {quote}We chose to include the -android variant of Guava, to ensure 
> compatibility with Java 7, which was required by the 4.x branch.
> Now that the 4.x branch is EOL, we can switch to the more standard -jre 
> version.
> {quote}
> CC: [~stoty] [~rajeshbabu] [~vjasani] 





[jira] [Updated] (PHOENIX-7142) Bump phoenix-thirdparty version to 2.1

2023-12-12 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7142:
-
Fix Version/s: thirdparty-2.1.0
   (was: 2.1.0)

> Bump phoenix-thirdparty version to 2.1
> --
>
> Key: PHOENIX-7142
> URL: https://issues.apache.org/jira/browse/PHOENIX-7142
> Project: Phoenix
>  Issue Type: Task
>  Components: thirdparty
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: thirdparty-2.1.0
>
>
> Changing the Guava variant is a relatively big change.
> Bump the minor version to 2.1





[jira] [Updated] (PHOENIX-7142) Bump phoenix-thirdparty version to 2.1

2023-12-10 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7142:
-
Fix Version/s: 2.1.0

> Bump phoenix-thirdparty version to 2.1
> --
>
> Key: PHOENIX-7142
> URL: https://issues.apache.org/jira/browse/PHOENIX-7142
> Project: Phoenix
>  Issue Type: Task
>  Components: thirdparty
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 2.1.0
>
>
> Changing the Guava variant is a relatively big change.
> Bump the minor version to 2.1





[jira] [Updated] (OMID-250) Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml

2023-12-04 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-250:
-
Priority: Trivial  (was: Major)

> Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml
> -
>
> Key: OMID-250
> URL: https://issues.apache.org/jira/browse/OMID-250
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Trivial
> Fix For: 1.1.1
>
>
> hadoop-hdfs-client dependency is declared two times in main pom.
> {noformat}
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs-client</artifactId>
>     <version>${hadoop.version}</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs-client</artifactId>
>     <version>${hadoop.version}</version>
> </dependency>
> {noformat}





[jira] [Commented] (OMID-250) Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml

2023-12-04 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793154#comment-17793154
 ] 

Rajeshbabu Chintaguntla commented on OMID-250:
--

It's a trivial change and [~anchalk1] is interested in contributing.

> Remove duplicate declarations of hadoop-hdfs-client dependency in pom.xml
> -
>
> Key: OMID-250
> URL: https://issues.apache.org/jira/browse/OMID-250
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 1.1.1
>
>
> hadoop-hdfs-client dependency is declared two times in main pom.
> {noformat}
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs-client</artifactId>
>     <version>${hadoop.version}</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-hdfs-client</artifactId>
>     <version>${hadoop.version}</version>
> </dependency>
> {noformat}





[jira] [Commented] (OMID-267) Move CompactorUtil to omid-hbase-tools

2023-12-04 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792999#comment-17792999
 ] 

Rajeshbabu Chintaguntla commented on OMID-267:
--

Sure this can be taken care of in the upcoming releases. Will go ahead with 
1.1.1 release as of now without this.

> Move CompactorUtil to omid-hbase-tools
> --
>
> Key: OMID-267
> URL: https://issues.apache.org/jira/browse/OMID-267
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Istvan Toth
>Priority: Major
>
> CompactorUtil is a CLI tool, yet it is in omid-hbase-coprocessor.
> It also pulls JCommander as a dependency, which is not needed for coprocessors.
> Move CompactorUtil to omid-hbase-tools, and remove the JCommander dependency 
> from omid-hbase-coprocessor.





[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-12-04 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792998#comment-17792998
 ] 

Rajeshbabu Chintaguntla commented on OMID-240:
--

Committed to master. Thanks for review [~stoty]. 

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 1.1.1
>
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!
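The all-or-nothing visibility the reporter expects can be modeled with a toy versioned store (an illustration of snapshot isolation, not Omid's actual implementation): every row written by a transaction shares one commit timestamp, so any snapshot sees either none of the transaction's rows or all of them, never an intermediate count:

```python
# Toy snapshot-isolation model: rows become visible atomically at commit time.
class Store:
    def __init__(self):
        self.rows = []   # (commit_ts, row) pairs
        self.clock = 0   # logical commit timestamp

    def commit(self, rows):
        self.clock += 1  # all rows of one transaction share one timestamp
        self.rows += [(self.clock, r) for r in rows]

    def count(self, snapshot_ts):
        # A reader only sees rows committed at or before its snapshot.
        return sum(1 for ts, _ in self.rows if ts <= snapshot_ts)

s = Store()
s.commit(["row%d" % i for i in range(260148)])
# A snapshot taken before the commit sees 0 rows; one taken after sees all
# 260148 -- never an in-between value like 259884.
print(s.count(0), s.count(1))  # 0 260148
```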





[jira] [Resolved] (OMID-240) Transactional visibility is broken

2023-12-04 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved OMID-240.
--
Resolution: Fixed

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 1.1.1
>
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!





[jira] [Commented] (OMID-267) Move CompactorUtil to omid-hbase-tools

2023-12-03 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792670#comment-17792670
 ] 

Rajeshbabu Chintaguntla commented on OMID-267:
--

[~stoty] sure.

> Move CompactorUtil to omid-hbase-tools
> --
>
> Key: OMID-267
> URL: https://issues.apache.org/jira/browse/OMID-267
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Istvan Toth
>Priority: Major
>
> CompactorUtil is a CLI tool, yet it is in omid-hbase-coprocessor.
> It also pulls JCommander as a dependency, which is not needed for coprocessors.
> Move CompactorUtil to omid-hbase-tools, and remove the JCommander dependency 
> from omid-hbase-coprocessor.





[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-12-03 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792649#comment-17792649
 ] 

Rajeshbabu Chintaguntla commented on OMID-240:
--

[~stoty] any further review comments?

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 1.1.1
>
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!





[jira] [Updated] (OMID-240) Transactional visibility is broken

2023-11-30 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated OMID-240:
-
Fix Version/s: 1.1.1

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 1.1.1
>
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!





[jira] [Updated] (PHOENIX-7131) Honor omid.server.side.filter configuration while creating Omid Transactional Table

2023-11-29 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7131:
-
Fix Version/s: (was: 5.2.0)
   (was: 5.1.4)

> Honor omid.server.side.filter configuration while creating Omid Transactional 
> Table
> ---
>
> Key: PHOENIX-7131
> URL: https://issues.apache.org/jira/browse/PHOENIX-7131
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> Currently we always pass the server-side filter flag as true by default; it 
> would be better to honor the configuration value of 
> omid.server.side.filter while creating the Omid TTable
> {noformat}
> public OmidTransactionTable(PhoenixTransactionContext ctx, Table hTable, 
> boolean isConflictFree, boolean addShadowCells) throws SQLException  {
>
> try {
> tTable = new TTable(hTable, true, isConflictFree);
> } catch (IOException e) {
>
> }
> {noformat}
> It would be better to use the following constructor
> {noformat}
> public TTable(Table hTable, boolean conflictFree) throws IOException {
> this(hTable, 
> hTable.getConfiguration().getBoolean("omid.server.side.filter", false), 
> conflictFree);
> }
> {noformat}





[jira] [Resolved] (PHOENIX-7131) Honor omid.server.side.filter configuration while creating Omid Transactional Table

2023-11-29 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-7131.
--
Resolution: Won't Fix

As mentioned at 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17791445#comment-17791445

Phoenix queries do not work without server-side filtering, so it would be 
better to keep server-side filtering enabled by default.

> Honor omid.server.side.filter configuration while creating Omid Transactional 
> Table
> ---
>
> Key: PHOENIX-7131
> URL: https://issues.apache.org/jira/browse/PHOENIX-7131
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> Currently we always pass the server-side filter flag as true by default; it 
> would be better to honor the configuration value of 
> omid.server.side.filter while creating the Omid TTable
> {noformat}
> public OmidTransactionTable(PhoenixTransactionContext ctx, Table hTable, 
> boolean isConflictFree, boolean addShadowCells) throws SQLException  {
>
> try {
> tTable = new TTable(hTable, true, isConflictFree);
> } catch (IOException e) {
>
> }
> {noformat}
> It would be better to use the following constructor
> {noformat}
> public TTable(Table hTable, boolean conflictFree) throws IOException {
> this(hTable, 
> hTable.getConfiguration().getBoolean("omid.server.side.filter", false), 
> conflictFree);
> }
> {noformat}





[jira] [Comment Edited] (OMID-240) Transactional visibility is broken

2023-11-29 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17791445#comment-17791445
 ] 

Rajeshbabu Chintaguntla edited comment on OMID-240 at 11/30/23 4:43 AM:


[~stoty] 
All kinds of select queries fail without server-side filtering, with both the 
in-memory commit storage module and the HBase commit storage module. So it 
would be better to stick with server-side filtering and the HBase commit 
storage table.
{noformat}
0: jdbc:phoenix:> select count(*) from test;
Error: java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:9223372036854775807, maxStamp:-9223372036854775808 
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: 
java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:9223372036854775807, maxStamp:-9223372036854775808
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:138)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1379)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1318)
at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:52)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:107)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:127)
at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:841)
at sqlline.BufferedRows.nextList(BufferedRows.java:109)
at sqlline.BufferedRows.<init>(BufferedRows.java:52)
at sqlline.SqlLine.print(SqlLine.java:1672)
at sqlline.Commands.executeSingleQuery(Commands.java:1063)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
Caused by: java.util.concurrent.ExecutionException: 
java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:9223372036854775807, maxStamp:-9223372036854775808
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1374)
... 16 more
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:9223372036854775807, maxStamp:-9223372036854775808
at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:157)
at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:145)
at org.apache.hadoop.hbase.client.Get.setTimestamp(Get.java:238)
at 
org.apache.omid.transaction.HBaseTransactionManager$CommitTimestampLocatorImpl.readCommitTimestampFromShadowCell(HBaseTransactionManager.java:299)
at 
org.apache.omid.transaction.SnapshotFilterImpl.readCommitTimestampFromShadowCell(SnapshotFilterImpl.java:143)
at 
org.apache.omid.transaction.SnapshotFilterImpl.locateCellCommitTimestamp(SnapshotFilterImpl.java:188)
at 
org.apache.omid.transaction.SnapshotFilterImpl.tryToLocateCellCommitTimestamp(SnapshotFilterImpl.java:250)
at 
org.apache.omid.transaction.SnapshotFilterImpl.getCommitTimestamp(SnapshotFilterImpl.java:303)
at 
org.apache.omid.transaction.SnapshotFilterImpl.getTSIfInSnapshot(SnapshotFilterImpl.java:388)
at 
org.apache.omid.transaction.SnapshotFilterImpl.filterCellsForSnapshot(SnapshotFilterImpl.java:449)
at 
org.apache.omid.transaction.SnapshotFilterImpl$TransactionalClientScanner.next(SnapshotFilterImpl.java:633)
at 
org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:158)
at 
org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:172)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:55)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:67)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:81)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:138)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 

[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-11-29 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17791445#comment-17791445
 ] 

Rajeshbabu Chintaguntla commented on OMID-240:
--

[~stoty] 
All kinds of select queries fail without server-side filtering, with both the 
in-memory commit storage module and the HBase commit storage module. So it 
would be better to stick with server-side filtering.
{noformat}
0: jdbc:phoenix:> select count(*) from test;
Error: java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:9223372036854775807, maxStamp:-9223372036854775808 
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: 
java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:9223372036854775807, maxStamp:-9223372036854775808
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:138)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1379)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1318)
at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:52)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:107)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:127)
at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:841)
at sqlline.BufferedRows.nextList(BufferedRows.java:109)
at sqlline.BufferedRows.<init>(BufferedRows.java:52)
at sqlline.SqlLine.print(SqlLine.java:1672)
at sqlline.Commands.executeSingleQuery(Commands.java:1063)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
Caused by: java.util.concurrent.ExecutionException: 
java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:9223372036854775807, maxStamp:-9223372036854775808
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1374)
... 16 more
Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative. 
minStamp:9223372036854775807, maxStamp:-9223372036854775808
at org.apache.hadoop.hbase.io.TimeRange.check(TimeRange.java:157)
at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:145)
at org.apache.hadoop.hbase.client.Get.setTimestamp(Get.java:238)
at 
org.apache.omid.transaction.HBaseTransactionManager$CommitTimestampLocatorImpl.readCommitTimestampFromShadowCell(HBaseTransactionManager.java:299)
at 
org.apache.omid.transaction.SnapshotFilterImpl.readCommitTimestampFromShadowCell(SnapshotFilterImpl.java:143)
at 
org.apache.omid.transaction.SnapshotFilterImpl.locateCellCommitTimestamp(SnapshotFilterImpl.java:188)
at 
org.apache.omid.transaction.SnapshotFilterImpl.tryToLocateCellCommitTimestamp(SnapshotFilterImpl.java:250)
at 
org.apache.omid.transaction.SnapshotFilterImpl.getCommitTimestamp(SnapshotFilterImpl.java:303)
at 
org.apache.omid.transaction.SnapshotFilterImpl.getTSIfInSnapshot(SnapshotFilterImpl.java:388)
at 
org.apache.omid.transaction.SnapshotFilterImpl.filterCellsForSnapshot(SnapshotFilterImpl.java:449)
at 
org.apache.omid.transaction.SnapshotFilterImpl$TransactionalClientScanner.next(SnapshotFilterImpl.java:633)
at 
org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:158)
at 
org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:172)
at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:55)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:67)
at 
org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:81)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:138)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
{noformat}
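For reference, the minStamp and maxStamp values in the error above are exactly 
Long.MAX_VALUE and Long.MIN_VALUE, i.e. sentinel values rather than a real 
timestamp range; a plausible reading (an assumption, not confirmed by the 
trace alone) is that an unresolved commit-timestamp sentinel reaches 
Get.setTimestamp. A trivial check of the constants:

```java
// The stamps in the error are the long sentinels, not real timestamps.
// (The interpretation is an assumption; the constants themselves are not.)
public class StampDemo {
    public static void main(String[] args) {
        System.out.println(Long.MAX_VALUE); // 9223372036854775807, the minStamp in the error
        System.out.println(Long.MIN_VALUE); // -9223372036854775808, the maxStamp in the error
    }
}
```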

{noformat}
0: 

[jira] [Created] (PHOENIX-7131) Honor omid.server.side.filter configuration while creating Omid Transactional Table

2023-11-29 Thread Rajeshbabu Chintaguntla (Jira)
Rajeshbabu Chintaguntla created PHOENIX-7131:


 Summary: Honor omid.server.side.filter configuration while 
creating Omid Transactional Table
 Key: PHOENIX-7131
 URL: https://issues.apache.org/jira/browse/PHOENIX-7131
 Project: Phoenix
  Issue Type: New Feature
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.2.0, 5.1.4


Currently we always pass the server-side filter flag as true by default; it 
would be better to honor the configuration value of 
omid.server.side.filter while creating the Omid TTable
{noformat}
public OmidTransactionTable(PhoenixTransactionContext ctx, Table hTable, 
boolean isConflictFree, boolean addShadowCells) throws SQLException  {
   
try {
tTable = new TTable(hTable, true, isConflictFree);
} catch (IOException e) {
   
}
{noformat}

It would be better to use the following constructor
{noformat}
public TTable(Table hTable, boolean conflictFree) throws IOException {
this(hTable, 
hTable.getConfiguration().getBoolean("omid.server.side.filter", false), 
conflictFree);
}
{noformat}





[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-11-29 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17791269#comment-17791269
 ] 

Rajeshbabu Chintaguntla commented on OMID-240:
--

bq.Can you confirm that the failing tests have omid.server.side.filter enabled ?

While creating the transactional table object in Phoenix we enable 
serverSideFilter by default, even though it is false by default in Omid, so we 
should control it through the configuration. 
 {noformat}
public OmidTransactionTable(PhoenixTransactionContext ctx, Table hTable, 
boolean isConflictFree, boolean addShadowCells) throws SQLException  {
 ...
tTable = new TTable(hTable, true, isConflictFree);
 
this.tx = omidTransactionContext.getTransaction();
}
{noformat}
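A self-contained sketch of the proposed direction: read omid.server.side.filter 
instead of hard-coding true, defaulting to false as Omid itself does. The 
Configuration class below is a minimal stand-in for HBase's (an assumption, 
just to make the sketch runnable), not the actual Phoenix/Omid code:

```java
import java.util.HashMap;
import java.util.Map;

public class ServerSideFilterDemo {
    // Minimal stand-in for HBase's Configuration (assumption for the sketch).
    static class Configuration {
        private final Map<String, String> props = new HashMap<>();
        void set(String k, String v) { props.put(k, v); }
        boolean getBoolean(String k, boolean dflt) {
            String v = props.get(k);
            return v == null ? dflt : Boolean.parseBoolean(v);
        }
    }

    // Mirrors the TTable(Table, boolean) constructor quoted in PHOENIX-7131:
    // honor the configured value instead of passing a hard-coded true.
    static boolean serverSideFilter(Configuration conf) {
        return conf.getBoolean("omid.server.side.filter", false);
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        System.out.println(serverSideFilter(conf)); // false: unset, Omid default
        conf.set("omid.server.side.filter", "true");
        System.out.println(serverSideFilter(conf)); // true: explicitly enabled
    }
}
```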
bq.At the very least, we should print some strongly worded error messages in 
the coprocessor startup code when an invalid combination is set.
To detect an invalid combination during coprocessor startup we would need one 
of the following three approaches; each looks like a good amount of work, so 
these changes will be taken up as a new JIRA.
1. Pass the TSO server configurations to each region server. This is an 
operational burden, since cluster managers would need to change the 
configurations and restart whenever anything changes; that is why we currently 
create the HBase commit table client by default and check the commit timestamp 
from the client.
2. Fetch the information from the TSO server, which might require adding an 
RPC call and its implementation to check which commit table the server has 
configured, based on the server-side filter configuration.
3. Have the TSO server verify the combinations at startup; it seems we cannot 
determine the right combination without also looking at the relevant HBase 
configurations, such as server-side filters.
bq.We should also print an error message if HA is configured, but any InMemory 
Module is used.
Yes we can do that.
bq.We should document these (for that, we would need fix the Omid web page), at 
the very least as a release note.
We can add release notes and update the documentation on which configurations 
need to be used in which scenario.
bq.Ideally, we would run all tests with both InMemory modules and server-side 
filter off, and HBase modules and server-server-side filtering is enabled (not 
in this ticket's scope)
Will raise a JIRA for this and work on it later.
bq.Have you looked at the performance implications ?
Even with server-side filtering, commit timestamps are cached, so mostly only 
one call to the HBase commit table may happen.
bq.How do hbase + server side filtering and in-memory and no server-side 
filtering compare ?
Trying to collect some basic details regarding this. Will get back to you.
bq.I would imagine that server-side filtering mostly makes a difference for hot 
rows, like application level locks and counters, and for slowly changing keys 
it doesn't do that much.
Yes, it is better not to use server-side filtering if the changes happen rarely.
bq.Actually, I think we could just refuse to start TSO server for that invalid 
combination of in-memory modules and HA.
Will check and work as part of another JIRA.
bq.All in all, changing the default looks like the safe choice, but we should 
document this issue thoroughly.
Yes, I have made a patch; I am just doing some testing and will upload the 
patch tomorrow.


> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> 

[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-11-24 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789386#comment-17789386
 ] 

Rajeshbabu Chintaguntla commented on OMID-240:
--

The in-memory modules are mainly for testing purposes. Currently an 
HBaseCommitTable instance is used in the coprocessor to fetch the commit 
timestamp, which is used to skip cells written by an ongoing transaction. So 
configuring the in-memory modules by default on the TSO server is not good. 
{code:java}
@Override
public void start(CoprocessorEnvironment env) throws IOException {
LOG.info("Starting snapshot filter coprocessor");
this.env = (RegionCoprocessorEnvironment)env;
commitTableConf = new HBaseCommitTableConfig();
String commitTableName = 
env.getConfiguration().get(COMMIT_TABLE_NAME_KEY);
if (commitTableName != null) {
commitTableConf.setTableName(commitTableName);
}
connection = RegionConnectionFactory

.getConnection(RegionConnectionFactory.ConnectionType.READ_CONNECTION, 
(RegionCoprocessorEnvironment) env);
commitTableClient = new HBaseCommitTable(connection, 
commitTableConf).getClient();
LOG.info("Snapshot filter started");
}{code}

So it would be better to change the default commit module in 
default-omid-server-configuration.yml to DefaultHBaseCommitTableStorageModule, 
so that the coprocessor and the TSO server use the commit table consistently.
{noformat}
# Default module configuration (No TSO High Availability & in-memory storage 
for timestamp and commit tables)
timestampStoreModule: !!org.apache.omid.tso.InMemoryTimestampStorageModule [ ]
commitTableStoreModule: !!org.apache.omid.tso.InMemoryCommitTableStorageModule 
[ ]
{noformat}

{noformat}
commitTableStoreModule: 
!!org.apache.omid.committable.hbase.DefaultHBaseCommitTableStorageModule
{noformat}

Timestamp module can still be in memory by default.

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!





[jira] [Assigned] (PHOENIX-7123) Support for multi-column split keys

2023-11-22 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla reassigned PHOENIX-7123:


Assignee: Nihal Jain

> Support for multi-column split keys
> ---
>
> Key: PHOENIX-7123
> URL: https://issues.apache.org/jira/browse/PHOENIX-7123
> Project: Phoenix
>  Issue Type: New Feature
>  Components: phoenix
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Nihal Jain
>Priority: Major
>
> Currently the only way to create split keys is by passing an array of strings 
> that form the leading parts of the row key. If a table has a multi-column row 
> key or a non-VARCHAR key, application developers need to do a lot of research 
> to prepare the split keys according to the column types, and need to dig into 
> internal details of how the row key is formed with separators in the case of 
> variable-length columns, etc. We can support multi-column split keys by 
> passing an array of arrays of identifiers, so that split keys are formed by 
> considering data types, fixed-length or variable-length column types, etc.
> We can even support splitting existing regions with ALTER-style queries, so 
> that there is no need to rely on HBase APIs to split the regions.
> Example syntax:
> {code:sql}
> create table test(a integer not null, b varchar not null, c float, d bigint, 
> constraint pk primary key(a,b)) split on ([1, 'bob'],[5, 'fan'],[7,'nob'])
> {code}
> Similarly for dynamic splitting existing regions we can define alter command 
> also as below
> {code:sql}
> alter table test split on ([3, 'cob'],[4, 'end'])
> {code}
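As a sketch of what a split point like [1, 'bob'] could encode to: Phoenix 
stores INTEGER as 4 bytes with the sign bit flipped so byte-wise order matches 
numeric order, and variable-length fields need separators unless they are 
trailing. The class below is a hypothetical illustration of that layout, not 
Phoenix's actual implementation:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class SplitKeySketch {
    // Sortable encoding for a 4-byte INTEGER: flip the sign bit so that
    // unsigned byte comparison matches signed numeric order.
    static byte[] encodeInt(int v) {
        int flipped = v ^ 0x80000000;
        return new byte[] {
            (byte) (flipped >>> 24), (byte) (flipped >>> 16),
            (byte) (flipped >>> 8), (byte) flipped };
    }

    // Compose a split key for (INTEGER a, VARCHAR b); the VARCHAR is the
    // trailing field here, so no 0x00 separator is appended after it.
    static byte[] splitKey(int a, String b) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] ia = encodeInt(a);
        out.write(ia, 0, ia.length);                 // fixed-length INTEGER
        byte[] vb = b.getBytes(StandardCharsets.UTF_8);
        out.write(vb, 0, vb.length);                 // trailing VARCHAR
        return out.toByteArray();
    }
}
```

This is the kind of byte-level detail the proposal would hide from application 
developers by letting them write split points as typed literal lists.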





[jira] [Updated] (PHOENIX-7123) Support for multi-column split keys

2023-11-22 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7123:
-
Description: 
Currently there is only way to create split keys by passing array of strings 
which will be leading parts of rowkey. If a table have multi-column row key or 
non varchar key application developers need to do lot of research to prepare 
the split keys according to columns types and need to dig internal details of 
how the row key formed with separators  in case of variable length columns etc. 
We can support the multi column split keys by passing array of arrays of 
identifiers so that split keys formed by considering data types, fixed length 
or variable length column types etc.

Even we can support splitting existing regions with alter kind of queries so 
that need not relay on hbase APIs to split the regions.

Example syntax:
{code:sql}
create table test(a integer not null, b varchar not null, c float, d bigint, 
constraint pk primary key(a,b)) split on ([1, 'bob'],[5, 'fan'],[7,'nob'])
{code}
Similarly for dynamic splitting existing regions we can define alter command 
also as below
{code:sql}
alter table test split on ([3, 'cob'],[4, 'end'])
{code}

  was:
Currently there is only way to create split keys by passing array of strings 
which will be leading parts of rowkey. If a table have multi-column row key or 
non varchar key application developers need to do lot of research to prepare 
the split keys according to columns types and need to dig internal details of 
how the row key formed with separators  in case of variable length columns etc. 
We can support the multi column split keys by passing array of arrays of 
identifiers so that split keys formed to by considering data types, fixed 
length or variable length values etc.

Even we can support splitting existing regions with alter kind of queries so 
that need not relay on hbase APIs to split the regions.

Example syntax:
{code:sql}
create table test(a integer not null, b varchar not null, c float, d bigint, 
constraint pk primary key(a,b)) split on ([1, 'bob'],[5, 'fan'],[7,'nob'])
{code}
Similarly for dynamic splitting existing regions we can define alter command 
also as below
{code:sql}
alter table test split on ([3, 'cob'],[4, 'end'])
{code}


> Support for multi-column split keys
> ---
>
> Key: PHOENIX-7123
> URL: https://issues.apache.org/jira/browse/PHOENIX-7123
> Project: Phoenix
>  Issue Type: New Feature
>  Components: phoenix
>Reporter: Rajeshbabu Chintaguntla
>Priority: Major
>
> Currently there is only way to create split keys by passing array of strings 
> which will be leading parts of rowkey. If a table have multi-column row key 
> or non varchar key application developers need to do lot of research to 
> prepare the split keys according to columns types and need to dig internal 
> details of how the row key formed with separators  in case of variable length 
> columns etc. We can support the multi column split keys by passing array of 
> arrays of identifiers so that split keys formed by considering data types, 
> fixed length or variable length column types etc.
> Even we can support splitting existing regions with alter kind of queries so 
> that need not relay on hbase APIs to split the regions.
> Example syntax:
> {code:sql}
> create table test(a integer not null, b varchar not null, c float, d bigint, 
> constraint pk primary key(a,b)) split on ([1, 'bob'],[5, 'fan'],[7,'nob'])
> {code}
> Similarly for dynamic splitting existing regions we can define alter command 
> also as below
> {code:sql}
> alter table test split on ([3, 'cob'],[4, 'end'])
> {code}





[jira] [Updated] (PHOENIX-7123) Support for multi-column split keys

2023-11-22 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-7123:
-
Description: 
Currently there is only way to create split keys by passing array of strings 
which will be leading parts of rowkey. If a table have multi-column row key or 
non varchar key application developers need to do lot of research to prepare 
the split keys according to columns types and need to dig internal details of 
how the row key formed with separators  in case of variable length columns etc. 
We can support the multi column split keys by passing array of arrays of 
identifiers so that split keys formed to by considering data types, fixed 
length or variable length values etc.

Even we can support splitting existing regions with alter kind of queries so 
that need not relay on hbase APIs to split the regions.

Example syntax:
{code:sql}
create table test(a integer not null, b varchar not null, c float, d bigint, 
constraint pk primary key(a,b)) split on ([1, 'bob'],[5, 'fan'],[7,'nob'])
{code}
Similarly for dynamic splitting existing regions we can define alter command 
also as below
{code:sql}
alter table test split on ([3, 'cob'],[4, 'end'])
{code}

  was:
Currently there is only way to create split keys by passing array of strings 
which will be leading parts of rowkey. If a table have multi-column row key or 
non varchar key application developers need to do lot of research to prepare 
the split keys according to columns types and need to dig internal details of 
how the row key formed which with separators  in case of variable length 
columns etc. We can support the multi column split keys by passing array of 
arrays of identifiers so that split keys formed to by considering data types, 
fixed length or variable length values etc.


Even we can support splitting existing regions with alter kind of queries so 
that need not relay on hbase APIs to split the regions.

Example syntax:

{code:sql}
create table test(a integer not null, b varchar not null, c float, d bigint, 
constraint pk primary key(a,b)) split on ([1, 'bob'],[5, 'fan'],[7,'nob'])
{code}

Similarly for dynamic splitting existing regions we can define alter command 
also as below

{code:sql}
alter table test split on ([3, 'cob'],[4, 'end'])
{code}



> Support for multi-column split keys
> ---
>
> Key: PHOENIX-7123
> URL: https://issues.apache.org/jira/browse/PHOENIX-7123
> Project: Phoenix
>  Issue Type: New Feature
>  Components: phoenix
>Reporter: Rajeshbabu Chintaguntla
>Priority: Major
>
> Currently there is only way to create split keys by passing array of strings 
> which will be leading parts of rowkey. If a table have multi-column row key 
> or non varchar key application developers need to do lot of research to 
> prepare the split keys according to columns types and need to dig internal 
> details of how the row key formed with separators  in case of variable length 
> columns etc. We can support the multi column split keys by passing array of 
> arrays of identifiers so that split keys formed to by considering data types, 
> fixed length or variable length values etc.
> Even we can support splitting existing regions with alter kind of queries so 
> that need not relay on hbase APIs to split the regions.
> Example syntax:
> {code:sql}
> create table test(a integer not null, b varchar not null, c float, d bigint, 
> constraint pk primary key(a,b)) split on ([1, 'bob'],[5, 'fan'],[7,'nob'])
> {code}
> Similarly for dynamic splitting existing regions we can define alter command 
> also as below
> {code:sql}
> alter table test split on ([3, 'cob'],[4, 'end'])
> {code}




