[jira] [Comment Edited] (OMID-240) Transactional visibility is broken

2023-11-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788657#comment-17788657
 ] 

Istvan Toth edited comment on OMID-240 at 11/22/23 7:21 AM:


If it is used and needed for tests, then making it functional seems to be the 
right thing to do, so that the testing environment behaves like a production 
one.


was (Author: stoty):
If it is used and needed for the test implementation, then making it functional 
seems to be the right thing to do, so that the testing environment behaves like 
a production one.

> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Lars Hofhansl
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Attachments: hbase-omid-client-config.yml, 
> omid-server-configuration.yml
>
>
> Client I:
> {code:java}
>  > create table test(x float primary key, y float) DISABLE_WAL=true, 
> TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- repeat the previous upsert 18-20 times
> > !commit{code}
>  
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0        |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884   |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145   |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148   |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (OMID-240) Transactional visibility is broken

2023-11-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788657#comment-17788657
 ] 

Istvan Toth edited comment on OMID-240 at 11/22/23 7:21 AM:


If it is used and needed for the test implementation, then making it functional 
seems to be the right thing to do, so that the testing environment behaves like 
a production one.


was (Author: stoty):
If it is used and needed for the test implementation, then making it functional 
seems to be the right thing to do, so that the testing environment behaves like 
a production one.






[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-11-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788657#comment-17788657
 ] 

Istvan Toth commented on OMID-240:
--

If it is used and needed for the test implementation, then making it functional 
seems to be the right thing to do, so that the testing environment behaves like 
a production one.






[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-11-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788654#comment-17788654
 ] 

Istvan Toth commented on OMID-240:
--

You have a better understanding of Omid than I do.

What is the purpose of the InMemory storage module?
I think that having a non-functional implementation is not good.

My initial reaction is that if it's not functional, then we should either fix 
it or remove it.






[jira] [Comment Edited] (OMID-240) Transactional visibility is broken

2023-11-21 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788611#comment-17788611
 ] 

Rajeshbabu Chintaguntla edited comment on OMID-240 at 11/22/23 4:16 AM:


Currently, with the default configuration, the TSO server is initialized with 
the in-memory storage modules.
- With InMemoryCommitTableStorageModule, NullCommitTable is used, which doesn't 
maintain any commit timestamp information.

{noformat}
public class InMemoryCommitTableStorageModule extends AbstractModule {

@Override
public void configure() {
bind(CommitTable.class).to(NullCommitTable.class).in(Singleton.class);
}

}
{noformat}
{noformat}
public static class Writer implements CommitTable.Writer {
@Override
public void addCommittedTransaction(long startTimestamp, long 
commitTimestamp) {
// noop
}
{noformat}
- Whatever commit timestamp is generated by the Transaction Oracle is persisted 
in the shadow cells only.

- At the same time, on the server side, the HBase commit table is used to fetch 
the commit timestamps of the transactions; it will not have any information, 
because the TSO is not writing to the HBase commit table.
{noformat}
connection = RegionConnectionFactory

.getConnection(RegionConnectionFactory.ConnectionType.READ_CONNECTION, 
(RegionCoprocessorEnvironment) env);
commitTableClient = new HBaseCommitTable(connection, 
commitTableConf).getClient();
LOG.info("Snapshot filter started");
{noformat}
- During the commit, the TSO client updates the shadow cells with the commit 
timestamp.
{noformat}
private void commitRegularTransaction(AbstractTransaction 
tx)
throws RollbackException, TransactionException
{

try {

long commitTs = tsoClient.commit(tx.getStartTimestamp(), 
tx.getWriteSet(), tx.getConflictFreeWriteSet()).get();
certifyCommitForTx(tx, commitTs);
updateShadowCellsAndRemoveCommitTableEntry(tx, postCommitter);
{noformat}

- The Omid filters on the server side check for the commit timestamp in the 
shadow cells or the commit table; while filtering, cells may be skipped if the 
commit timestamp is available in neither the shadow cells nor the commit table. 
This is what happens when a commit and a scan run at the same time, and it 
explains the mismatch in the number of records.
{noformat}
if (CellUtils.isShadowCell(v)) {
Long commitTs =  Bytes.toLong(CellUtil.cloneValue(v));
commitCache.put(v.getTimestamp(), commitTs);
// Continue getting shadow cells until one of them fits this 
transaction
if (hbaseTransaction.getStartTimestamp() >= commitTs) {
return ReturnCode.NEXT_COL;
} else {
return ReturnCode.SKIP;
}
}

Optional<Long> commitTS = getCommitIfInSnapshot(v, 
CellUtils.isFamilyDeleteCell(v));
if (commitTS.isPresent()) {
   
}

return ReturnCode.SKIP;
{noformat}
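The skip path above is the crux of the race. A minimal, self-contained model 
(illustrative only: the class and method names below are invented, not Omid's) 
shows how a committed cell stays invisible in the window between the TSO commit 
and the client's shadow-cell update when the commit table writer is a no-op:

```java
// Hypothetical standalone model of the filter decision, NOT Omid code.
import java.util.HashMap;
import java.util.Map;

public class VisibilityRaceSketch {

    // A cell is visible when a commit timestamp for its write timestamp can be
    // resolved (shadow cell first, then commit table) and fits the reader's snapshot.
    static boolean isVisible(long cellWriteTs, long readerStartTs,
                             Map<Long, Long> shadowCells, Map<Long, Long> commitTable) {
        Long commitTs = shadowCells.get(cellWriteTs);
        if (commitTs == null) {
            commitTs = commitTable.get(cellWriteTs);
        }
        // Neither source knows the commit timestamp: the cell is skipped.
        return commitTs != null && commitTs <= readerStartTs;
    }

    public static void main(String[] args) {
        Map<Long, Long> shadowCells = new HashMap<>();
        // NullCommitTable behaviour: addCommittedTransaction() is a no-op,
        // so this commit table stays empty forever.
        Map<Long, Long> nullCommitTable = new HashMap<>();
        long writeTs = 100L, commitTs = 110L, readerStartTs = 120L;

        // Window between the TSO commit and the client's shadow-cell update:
        // the commit is recorded nowhere, so the committed cell is skipped.
        System.out.println(isVisible(writeTs, readerStartTs, shadowCells, nullCommitTable)); // false

        // Once the client writes the shadow cell, the cell becomes visible.
        shadowCells.put(writeTs, commitTs);
        System.out.println(isVisible(writeTs, readerStartTs, shadowCells, nullCommitTable)); // true
    }
}
```

In real Omid terms, that "false" window is exactly what the repeated scans on 
client II observe as the growing intermediate counts.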

This issue does not happen when the HBase storage modules are configured for 
the TSO server, because the Omid filters can then find the commit timestamp in 
the commit table even when the shadow cells don't have it yet.

So it can be avoided by properly configuring the HBase storage modules in the TSO server.
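Concretely, that means pointing the TSO server's storage modules at HBase in 
omid-server-configuration.yml. The fragment below is only a sketch: the module 
class names and YAML syntax are assumptions and should be verified against the 
default configuration shipped with the Omid release in use.

```yaml
# Illustrative sketch, NOT verbatim Omid configuration: the module class names
# below are assumptions -- check them against the default omid-server-configuration.yml.
commitTableStoreModule: !!org.apache.omid.tso.HBaseCommitTableStorageModule [ ]
timestampStoreModule: !!org.apache.omid.tso.HBaseTimestampStorageModule [ ]
```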

[~stoty] I think this can be closed as not an issue. WDYT?


was (Author: rajeshbabu):
Currently, with the default configuration, the TSO server is initialized with 
the in-memory storage modules.
- With InMemoryCommitTableStorageModule, NullCommitTable is used, which doesn't 
maintain any commit timestamp information.
- Whatever commit timestamp is generated by the Transaction Oracle is persisted 
in the shadow cells only.
- At the same time, on the server side, the HBase commit table is used to fetch 
the commit timestamps of the transactions; it will not have any information, 
because the TSO is not writing to the HBase commit table.
- During the commit, the TSO client updates the shadow cells with the commit 
timestamp.
- The Omid filters on the server side check for the commit timestamp in the 
shadow cells or the commit table; while filtering, cells may be skipped if the 
commit timestamp is available in neither the shadow cells nor the commit table. 
This is what happens when a commit and a scan run at the same time, and it 
explains the mismatch in the number of records.

This issue does not happen when the HBase storage modules are configured for 
the TSO server, because the Omid filters can then find the commit timestamp in 
the commit table even when the shadow cells don't have it yet.

So it can be avoided by properly configuring the HBase storage modules in the TSO server.

[~stoty] I think this can be closed as not an issue. WDYT?


[jira] [Commented] (OMID-240) Transactional visibility is broken

2023-11-21 Thread Rajeshbabu Chintaguntla (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788611#comment-17788611
 ] 

Rajeshbabu Chintaguntla commented on OMID-240:
--

Currently, with the default configuration, the TSO server is initialized with 
the in-memory storage modules.
- With InMemoryCommitTableStorageModule, NullCommitTable is used, which doesn't 
maintain any commit timestamp information.
- Whatever commit timestamp is generated by the Transaction Oracle is persisted 
in the shadow cells only.
- At the same time, on the server side, the HBase commit table is used to fetch 
the commit timestamps of the transactions; it will not have any information, 
because the TSO is not writing to the HBase commit table.
- During the commit, the TSO client updates the shadow cells with the commit 
timestamp.
- The Omid filters on the server side check for the commit timestamp in the 
shadow cells or the commit table; while filtering, cells may be skipped if the 
commit timestamp is available in neither the shadow cells nor the commit table. 
This is what happens when a commit and a scan run at the same time, and it 
explains the mismatch in the number of records.

This issue does not happen when the HBase storage modules are configured for 
the TSO server, because the Omid filters can then find the commit timestamp in 
the commit table even when the shadow cells don't have it yet.

So it can be avoided by properly configuring the HBase storage modules in the TSO server.

[~stoty] I think this can be closed as not an issue. WDYT?






[jira] [Resolved] (PHOENIX-7073) Parse JSON columns on the server side

2023-11-21 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri resolved PHOENIX-7073.
---
Resolution: Fixed

> Parse JSON columns on the server side
> -
>
> Key: PHOENIX-7073
> URL: https://issues.apache.org/jira/browse/PHOENIX-7073
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
>
> One of the optimizations mentioned in the Design spec is to _"pass the json 
> path to server side as part of the select and do the parsing on the server 
> side and then return the result to the client."_
> This is the task to track that change





[jira] [Resolved] (OMID-256) Bump hbase and other dependencies to latest version

2023-11-21 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/OMID-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved OMID-256.
--
Resolution: Fixed

Merged to master. Thanks for the patch, [~nihaljain.cs], and thanks to [~stoty] 
and [~gjacoby] for the reviews.

> Bump hbase and other dependencies to latest version
> ---
>
> Key: OMID-256
> URL: https://issues.apache.org/jira/browse/OMID-256
> Project: Phoenix Omid
>  Issue Type: Sub-task
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.1.1
>
>
> This Jira will bump following:
> |*Property*|*From*|*To*|
> |hbase.version|2.4.13|2.4.17|
> |log4j2.version|2.18.0|2.21.0|
> |junit.version|4.13.1|4.13.2|
> |commons-lang3.version|3.12.0|3.13.0|
>  





[jira] [Commented] (OMID-256) Bump hbase and other dependencies to latest version

2023-11-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788489#comment-17788489
 ] 

ASF GitHub Bot commented on OMID-256:
-

chrajeshbabu merged PR #145:
URL: https://github.com/apache/phoenix-omid/pull/145









[jira] [Updated] (PHOENIX-7118) Fix Shading Regressions in Connectors

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7118:
-
Summary: Fix Shading Regressions in Connectors  (was: Fix Shading 
regressions in connectors)

> Fix Shading Regressions in Connectors
> -
>
> Key: PHOENIX-7118
> URL: https://issues.apache.org/jira/browse/PHOENIX-7118
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, hive-connector, spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> It seems like sometime during the refactors, we have broken the shading, 
> particularly the exclusions / provided settings.
> Only the Spark3 connector shading looks correct, both the Hive and Spark2 
> connector shadings seem to have regressed.





[jira] [Updated] (PHOENIX-7118) Fix Shading regressions in connectors

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7118:
-
Description: 
It seems like sometime during the refactors, we have broken the shading, 
particularly the exclusions / provided settings.
Only the Spark3 connector shading looks correct, both the Hive and Spark2 
connector shadings seem to have regressed.

  was:It seems like sometime during the refactors, we have broken the shading, 
particularly the exclusions / provided settings.


> Fix Shading regressions in connectors
> -
>
> Key: PHOENIX-7118
> URL: https://issues.apache.org/jira/browse/PHOENIX-7118
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, hive-connector, spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> It seems like sometime during the refactors, we have broken the shading, 
> particularly the exclusions / provided settings.
> Only the Spark3 connector shading looks correct, both the Hive and Spark2 
> connector shadings seem to have regressed.





[jira] [Updated] (PHOENIX-7118) Fix Shading regressions in connectors

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7118:
-
Component/s: hive-connector

> Fix Shading regressions in connectors
> -
>
> Key: PHOENIX-7118
> URL: https://issues.apache.org/jira/browse/PHOENIX-7118
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, hive-connector, spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> It seems like sometime during the refactors, we have broken the shading, 
> particularly the exclusions / provided settings.





[jira] [Updated] (PHOENIX-7118) Fix Shading regressions in connectors

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7118:
-
Description: It seems like sometime during the refactors, we have broken 
the shading, particularly the exclusions / provided settings.  (was: The shaded 
Spark2 connector is 136 MB, because it includes the Hadoop and HBase libraries.

Exclude those, and use the same shading that the Spark3 connector does.)

> Fix Shading regressions in connectors
> -
>
> Key: PHOENIX-7118
> URL: https://issues.apache.org/jira/browse/PHOENIX-7118
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> It seems like sometime during the refactors, we have broken the shading, 
> particularly the exclusions / provided settings.





[jira] [Updated] (PHOENIX-7118) Fix Shading regressions in connectors

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7118:
-
Summary: Fix Shading regressions in connectors  (was: Exclude Hadoop and 
HBase dependencies from the shaded Spark 2 connector)

> Fix Shading regressions in connectors
> -
>
> Key: PHOENIX-7118
> URL: https://issues.apache.org/jira/browse/PHOENIX-7118
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The shaded Spark2 connector is 136 MB, because it includes the Hadoop and 
> HBase libraries.
> Exclude those, and use the same shading that the Spark3 connector does.





[jira] [Assigned] (PHOENIX-6939) Change phoenix-hive connector shading to work with hbase-shaded-client

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-6939:


Assignee: Istvan Toth

> Change phoenix-hive connector shading to work with hbase-shaded-client
> --
>
> Key: PHOENIX-6939
> URL: https://issues.apache.org/jira/browse/PHOENIX-6939
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors, hive-connector
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The Hive 3 HBase classpath is a huge mess, and as a result, we need to 
> replace the HBase jars in Hive to have any chance of it working.
> Provide a shaded phoenix hive connector JAR that uses existing 
> hbase-shaded-client JARs added to the hive classpath.
> This is the same shading needed by Hive 4 (which requires some more API 
> changes)





[jira] [Reopened] (PHOENIX-6939) Change phoenix-hive connector shading to work with hbase-shaded-client

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reopened PHOENIX-6939:
--

> Change phoenix-hive connector shading to work with hbase-shaded-client
> --
>
> Key: PHOENIX-6939
> URL: https://issues.apache.org/jira/browse/PHOENIX-6939
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors, hive-connector
>Reporter: Istvan Toth
>Priority: Major
>
> The Hive 3 HBase classpath is a huge mess, and as a result, we need to 
> replace the HBase jars in Hive to have any chance of it working.
> Provide a shaded phoenix hive connector JAR that uses existing 
> hbase-shaded-client JARs added to the hive classpath.
> This is the same shading needed by Hive 4 (which requires some more API 
> changes)





[jira] [Created] (PHOENIX-7119) Java ITs are not run for either Spark Connector

2023-11-21 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7119:


 Summary: Java ITs are not run for either Spark Connector
 Key: PHOENIX-7119
 URL: https://issues.apache.org/jira/browse/PHOENIX-7119
 Project: Phoenix
  Issue Type: Bug
  Components: connectors, spark-connector
Reporter: Istvan Toth
Assignee: Istvan Toth


These have probably been broken ever since I refactored the base classes in 
Phoenix not to have @Category annotations.





[jira] [Created] (PHOENIX-7118) Exclude Hadoop and HBase dependencies from the shaded Spark 2 connector

2023-11-21 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7118:


 Summary: Exclude Hadoop and HBase dependencies from the shaded 
Spark 2 connector
 Key: PHOENIX-7118
 URL: https://issues.apache.org/jira/browse/PHOENIX-7118
 Project: Phoenix
  Issue Type: Bug
  Components: connectors, spark-connector
Affects Versions: connectors-6.0.0
Reporter: Istvan Toth
Assignee: Istvan Toth


The shaded Spark2 connector is 136 MB, because it includes the Hadoop and HBase 
libraries.

Exclude those, and use the same shading that the Spark3 connector does.





[jira] [Updated] (PHOENIX-6923) Always run Scala tests for Phoenix-Spark connector

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6923:
-
Description: 
When Richard fixed the Scala tests in PHOENIX-6361, these were gated behind a 
profile, as these tests really mess up the maven output.
However, those tests are too important to skip, so we should run them by 
default anyway.

  was:
When Richard fixed the Scala tests in PHOENIX-6361, these were gated behind a 
profile, as these tests really mess up the maven output.
However, those tests are too important to skip, so we should run them always 
anyway.


> Always run Scala tests for Phoenix-Spark connector
> --
>
> Key: PHOENIX-6923
> URL: https://issues.apache.org/jira/browse/PHOENIX-6923
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> When Richard fixed the Scala tests in PHOENIX-6361, these were gated behind a 
> profile, as these tests really mess up the maven output.
> However, those tests are too important to skip, so we should run them by 
> default anyway.





[jira] [Assigned] (PHOENIX-7065) Spark3 connector tests fail with Spark 3.4.1

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7065:


Assignee: Istvan Toth

> Spark3 connector tests fail with Spark 3.4.1
> 
>
> Key: PHOENIX-7065
> URL: https://issues.apache.org/jira/browse/PHOENIX-7065
> Project: Phoenix
>  Issue Type: New Feature
>  Components: connectors, spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Probably some kind of dependency version conflict with minicluster.
> {noformat}
> [INFO] Running org.apache.phoenix.spark.SaltedTableIT
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.008 
> s <<< FAILURE! - in org.apache.phoenix.spark.SaltedTableIT
> [ERROR] org.apache.phoenix.spark.SaltedTableIT  Time elapsed: 0.002 s  <<< 
> ERROR!
> java.lang.RuntimeException: java.io.IOException: Failed to save in any 
> storage directories while saving namespace.
>     at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:549)
>     at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:449)
>     at 
> org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:435)
>     at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:517)
>     at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:512)
>     at 
> org.apache.phoenix.end2end.ParallelStatsDisabledIT.doSetup(ParallelStatsDisabledIT.java:62)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.apache.phoenix.SystemExitRule$1.evaluate(SystemExitRule.java:40)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runners.Suite.runChild(Suite.java:128)
>     at org.junit.runners.Suite.runChild(Suite.java:27)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:49)
>     at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:120)
>     at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:105)
>     at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:77)
>     at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:69)
>     at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:146)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)
> Caused by: java.io.IOException: Failed to save in any storage directories 
> while saving namespace.
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1192)
>     at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1149)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:175)
>     at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1138)
>     at 
> 

[jira] [Assigned] (PHOENIX-7114) Use Phoenix 5.1.3 for building Connectors

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7114:


Assignee: Istvan Toth

> Use Phoenix 5.1.3 for building Connectors
> --
>
> Key: PHOENIX-7114
> URL: https://issues.apache.org/jira/browse/PHOENIX-7114
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Blocker
> Fix For: connectors-6.0.0
>
>
> We're still building connectors with 5.1.2.
> There is no good (or any) reason for this.





[jira] [Resolved] (PHOENIX-7114) Use Phoenix 5.1.3 for building Connectors

2023-11-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-7114.
--
Fix Version/s: connectors-6.0.0
   Resolution: Fixed

Committed.
Thanks for the review [~RichardAntal]

> Use Phoenix 5.1.3 for building Connectors
> --
>
> Key: PHOENIX-7114
> URL: https://issues.apache.org/jira/browse/PHOENIX-7114
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors
>Reporter: Istvan Toth
>Priority: Blocker
> Fix For: connectors-6.0.0
>
>
> We're still building connectors with 5.1.2.
> There is no good (or any) reason for this.


