[jira] [Updated] (PHOENIX-7132) HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark
[ https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated PHOENIX-7132:
- Description:
I have noticed this today when working with the shaded Spark connector jar:
{noformat}
23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
{noformat}
-We should be able to avoid this by not relocating these classes at all. This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant. In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.-

was:
I have noticed this today when working with the shaded Spark connector jar:
{noformat}
23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
{noformat}
We should be able to avoid this by not relocating these classes at all. This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant. In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.
> HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark
> -
>
> Key: PHOENIX-7132
> URL: https://issues.apache.org/jira/browse/PHOENIX-7132
> Project: Phoenix
> Issue Type: Bug
> Components: connectors, core
> Affects Versions: connectors-6.0.0, 5.2.0
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
>
> I have noticed this today when working with the shaded Spark connector jar:
> {noformat}
> 23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
> {noformat}
> -We should be able to avoid this by not relocating these classes at all. This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant. In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.-
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (PHOENIX-7132) HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark
[ https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated PHOENIX-7132:
- Summary: HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark (was: HBase cannot load )
> HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark
> -
>
> Key: PHOENIX-7132
> URL: https://issues.apache.org/jira/browse/PHOENIX-7132
> Project: Phoenix
> Issue Type: Bug
> Components: connectors, core
> Affects Versions: connectors-6.0.0, 5.2.0
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
>
> I have noticed this today when working with the shaded Spark connector jar:
> {noformat}
> 23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
> {noformat}
> We should be able to avoid this by not relocating these classes at all.
> This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant.
> In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (PHOENIX-7132) HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark
[ https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated PHOENIX-7132:
- Description:
I have noticed this today when working with the shaded Spark connector jar:
{noformat}
23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
{noformat}
-We should be able to avoid this by not relocating these classes at all.-
-This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant.-
-In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.-

was:
I have noticed this today when working with the shaded Spark connector jar:
{noformat}
23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
{noformat}
-We should be able to avoid this by not relocating these classes at all. This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant. In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.-
> HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark
> -
>
> Key: PHOENIX-7132
> URL: https://issues.apache.org/jira/browse/PHOENIX-7132
> Project: Phoenix
> Issue Type: Bug
> Components: connectors, core
> Affects Versions: connectors-6.0.0, 5.2.0
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
>
> I have noticed this today when working with the shaded Spark connector jar:
> {noformat}
> 23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
> {noformat}
> -We should be able to avoid this by not relocating these classes at all.-
> -This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant.-
> -In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.-
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (PHOENIX-7132) HBase cannot load
[ https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated PHOENIX-7132:
- Summary: HBase cannot load (was: Do not relocate classes to be directly referred by hbase-site.xml)
> HBase cannot load
> --
>
> Key: PHOENIX-7132
> URL: https://issues.apache.org/jira/browse/PHOENIX-7132
> Project: Phoenix
> Issue Type: Bug
> Components: connectors, core
> Affects Versions: connectors-6.0.0, 5.2.0
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
>
> I have noticed this today when working with the shaded Spark connector jar:
> {noformat}
> 23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
> {noformat}
> We should be able to avoid this by not relocating these classes at all.
> This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant.
> In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (PHOENIX-7132) Do not relocate classes to be directly referred by hbase-site.xml
[ https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated PHOENIX-7132:
- Affects Version/s: connectors-6.0.0 5.2.0
> Do not relocate classes to be directly referred by hbase-site.xml
> -
>
> Key: PHOENIX-7132
> URL: https://issues.apache.org/jira/browse/PHOENIX-7132
> Project: Phoenix
> Issue Type: Bug
> Components: connectors, core
> Affects Versions: connectors-6.0.0, 5.2.0
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
>
> I have noticed this today when working with the shaded Spark connector jar:
> {noformat}
> 23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
> {noformat}
> We should be able to avoid this by not relocating these classes at all.
> This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant.
> In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (PHOENIX-7132) Do not relocate classes to be directly referred by hbase-site.xml
[ https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated PHOENIX-7132:
- Description:
I have noticed this today when working with the shaded Spark connector jar:
{noformat}
23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
{noformat}
We should be able to avoid this by not relocating these classes at all. This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant. In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.

was:
I have noticed this today:
{noformat}
23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
{noformat}
We should be able to avoid this by not relocating these classes at all. This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant. In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.
> Do not relocate classes to be directly referred by hbase-site.xml
> -
>
> Key: PHOENIX-7132
> URL: https://issues.apache.org/jira/browse/PHOENIX-7132
> Project: Phoenix
> Issue Type: Bug
> Components: connectors, core
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
>
> I have noticed this today when working with the shaded Spark connector jar:
> {noformat}
> 23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
> {noformat}
> We should be able to avoid this by not relocating these classes at all.
> This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant.
> In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (PHOENIX-7132) Do not relocate classes to be directly referred by hbase-site.xml
Istvan Toth created PHOENIX-7132:
Summary: Do not relocate classes to be directly referred by hbase-site.xml
Key: PHOENIX-7132
URL: https://issues.apache.org/jira/browse/PHOENIX-7132
Project: Phoenix
Issue Type: Bug
Components: connectors, core
Reporter: Istvan Toth
Assignee: Istvan Toth

I have noticed this today:
{noformat}
23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
{noformat}
We should be able to avoid this by not relocating these classes at all. This is only a problem for shaded artifacts that do not include HBase, like the shaded connectors and the planned phoenix-client-byo-hbase variant. In the full-fat shaded clients Phoenix and HBase have the same shading, and HBase is able to find the shaded class.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
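The failure in the log above comes from HBase resolving hbase.rpc.controllerfactory.class by reflection: once shading relocates the class, it no longer exists under the name configured in hbase-site.xml. A minimal, self-contained sketch of that lookup-and-fallback behavior (this is not the actual HBase code; only the configured class name is taken from the log, and the fallback string is illustrative):

```java
// Sketch of the reflective lookup HBase performs for
// hbase.rpc.controllerfactory.class: if the configured class is not on the
// classpath under exactly that name (e.g. because shading relocated it),
// the lookup fails and a default is used instead.
public class RelocationSketch {
    static String resolve(String configuredClassName, String fallback) {
        try {
            Class.forName(configuredClassName);
            return configuredClassName;
        } catch (ClassNotFoundException e) {
            // Mirrors the WARN + "falling back" path from the log message.
            return fallback;
        }
    }

    public static void main(String[] args) {
        // On a classpath without an unrelocated HBase, this prints the fallback.
        System.out.println(resolve(
                "org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory",
                "default RpcControllerFactory"));
    }
}
```

This is why the proposed fix of not relocating these classes works: the name in hbase-site.xml and the name in the jar stay identical.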
[jira] [Resolved] (PHOENIX-7101) Explain plan to output local index name if it is used
[ https://issues.apache.org/jira/browse/PHOENIX-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani resolved PHOENIX-7101.
--- Resolution: Fixed
> Explain plan to output local index name if it is used
> -
>
> Key: PHOENIX-7101
> URL: https://issues.apache.org/jira/browse/PHOENIX-7101
> Project: Phoenix
> Issue Type: Improvement
> Affects Versions: 5.1.3
> Reporter: Viraj Jasani
> Assignee: Jing Yu
> Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> When we create a local index on the table, we use a different column family to store the index data. When we use Explain plan for any query that uses a local index, since the hbase table name remains the same for the local index, we only output that table name. To provide more clarity, we should output the local index name in the Explain plan output if the local index is to be used.
> When a local index is used, we should output in the format "${local_index}(${physical_table_name})" instead of "${physical_table_name}".
-- This message was sent by Atlassian Jira (v8.20.10#820010)
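The requested format is simple string composition; a hypothetical helper (none of these identifiers are actual Phoenix code) showing the proposed "${local_index}(${physical_table_name})" output next to the current plain table name:

```java
// Hypothetical illustration of the proposed explain-plan table naming:
// the plain physical table name when no local index is used, otherwise
// "${local_index}(${physical_table_name})".
public class ExplainNameSketch {
    static String explainTableName(String physicalTableName, String localIndexName) {
        return localIndexName == null
                ? physicalTableName
                : localIndexName + "(" + physicalTableName + ")";
    }

    public static void main(String[] args) {
        System.out.println(explainTableName("MY_TABLE", null));
        System.out.println(explainTableName("MY_TABLE", "MY_LOCAL_IDX"));
    }
}
```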
[jira] [Updated] (PHOENIX-7106) Invalid rowkey returned by coproc can cause data integrity issues
[ https://issues.apache.org/jira/browse/PHOENIX-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated PHOENIX-7106:
-- Fix Version/s: 5.1.4
> Invalid rowkey returned by coproc can cause data integrity issues
> -
>
> Key: PHOENIX-7106
> URL: https://issues.apache.org/jira/browse/PHOENIX-7106
> Project: Phoenix
> Issue Type: Improvement
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> The HBase scanner interface expects the server to scan cells from the HFile or block cache and return consistent data, i.e. the rowkeys of the returned cells should stay within the range of the scan boundaries. When a region moves and the scanner needs a reset, or if the current row is too large and the server returns a partial row, the subsequent scanner#next is supposed to return the remaining cells. When this happens, the cell rowkeys returned by the server, i.e. by any coprocessors, are expected to be within the scan boundary range so that the server can reliably perform its validation and return the remaining cells as expected.
> The Phoenix client initiates serial or parallel scans from the aggregators based on the region boundaries, and the scan boundaries are sometimes adjusted based on the optimizer-provided key ranges, to include tenant boundaries, salt boundaries etc. After the client opens the scanner and performs the scan operation, some of the coprocs return an invalid rowkey in the following cases:
> # Grouped aggregate queries
> # Ungrouped aggregate queries (not all of them)
> # Offset queries
> # Some dummy cells returned with empty rowkey
> # Update statistics queries
> # Local indexes
> Since many of these cases return reserved rowkeys, they are likely not going to match scan or region boundaries. This has the potential to cause data integrity issues in certain scenarios as explained above. An empty rowkey returned by the server can be treated as the end of the region scan by the HBase client.
> With the paging feature enabled, if the page size is kept low, we have higher chances of scanners returning a dummy cell, resulting in an increased number of RPC calls, which hurts latency and can cause timeouts. We should return only valid rowkeys within the scan range for all the cases where we perform the above-mentioned operations, like complex aggregate or offset queries etc.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (PHOENIX-7106) Invalid rowkey returned by coproc can cause data integrity issues
[ https://issues.apache.org/jira/browse/PHOENIX-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated PHOENIX-7106:
-- Fix Version/s: 5.2.0
> Invalid rowkey returned by coproc can cause data integrity issues
> -
>
> Key: PHOENIX-7106
> URL: https://issues.apache.org/jira/browse/PHOENIX-7106
> Project: Phoenix
> Issue Type: Improvement
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 5.2.0
>
>
> The HBase scanner interface expects the server to scan cells from the HFile or block cache and return consistent data, i.e. the rowkeys of the returned cells should stay within the range of the scan boundaries. When a region moves and the scanner needs a reset, or if the current row is too large and the server returns a partial row, the subsequent scanner#next is supposed to return the remaining cells. When this happens, the cell rowkeys returned by the server, i.e. by any coprocessors, are expected to be within the scan boundary range so that the server can reliably perform its validation and return the remaining cells as expected.
> The Phoenix client initiates serial or parallel scans from the aggregators based on the region boundaries, and the scan boundaries are sometimes adjusted based on the optimizer-provided key ranges, to include tenant boundaries, salt boundaries etc. After the client opens the scanner and performs the scan operation, some of the coprocs return an invalid rowkey in the following cases:
> # Grouped aggregate queries
> # Ungrouped aggregate queries (not all of them)
> # Offset queries
> # Some dummy cells returned with empty rowkey
> # Update statistics queries
> # Local indexes
> Since many of these cases return reserved rowkeys, they are likely not going to match scan or region boundaries. This has the potential to cause data integrity issues in certain scenarios as explained above. An empty rowkey returned by the server can be treated as the end of the region scan by the HBase client.
> With the paging feature enabled, if the page size is kept low, we have higher chances of scanners returning a dummy cell, resulting in an increased number of RPC calls, which hurts latency and can cause timeouts. We should return only valid rowkeys within the scan range for all the cases where we perform the above-mentioned operations, like complex aggregate or offset queries etc.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
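The invariant this issue asks for can be stated concretely: every rowkey a coprocessor returns, including paging "dummy" cells, must sort within the scan's [startRow, stopRow) range under HBase's unsigned lexicographic byte order. A self-contained sketch of that check (not Phoenix code; inScanRange is a hypothetical helper):

```java
// Sketch of the boundary invariant: a returned rowkey (including dummy
// paging cells) must fall within [startRow, stopRow) under unsigned
// lexicographic byte comparison, the order HBase uses for rowkeys.
public class RowkeyBoundsSketch {
    // Unsigned lexicographic compare, in the style of HBase's Bytes.compareTo.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    static boolean inScanRange(byte[] rowkey, byte[] startRow, byte[] stopRow) {
        return compare(rowkey, startRow) >= 0 && compare(rowkey, stopRow) < 0;
    }

    public static void main(String[] args) {
        byte[] start = {1}, stop = {9};
        // A rowkey inside the scan range passes the check.
        System.out.println(inScanRange(new byte[]{5}, start, stop));
        // An empty rowkey sorts before any non-empty startRow and fails;
        // the report notes HBase may treat it as end-of-region.
        System.out.println(inScanRange(new byte[]{}, start, stop));
    }
}
```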
[jira] [Updated] (OMID-240) Transactional visibility is broken
[ https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajeshbabu Chintaguntla updated OMID-240:
- Fix Version/s: 1.1.1
> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
> Issue Type: Bug
> Affects Versions: 1.1.0
> Reporter: Lars Hofhansl
> Assignee: Rajeshbabu Chintaguntla
> Priority: Critical
> Fix For: 1.1.1
>
> Attachments: hbase-omid-client-config.yml, omid-server-configuration.yml
>
>
> Client I:
> {code:java}
> > create table test(x float primary key, y float) DISABLE_WAL=true, TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0 |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884 |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145 |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148 |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148 |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!
-- This message was sent by Atlassian Jira (v8.20.10#820010)
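The expected behavior the reporter describes is plain snapshot isolation: all writes of a transaction commit at a single timestamp, so a reader's snapshot sees either none of them or all 260148 of them. A toy model of that invariant (the timestamps are illustrative; only the row count is taken from the report):

```java
// Toy model of snapshot-isolation visibility: a transaction's rows all
// share one commit timestamp, so a snapshot either predates the commit
// (sees 0 rows) or postdates it (sees every row), never a value between.
public class SnapshotVisibilitySketch {
    static long visibleCount(int rowCount, long commitTs, long snapshotTs) {
        return snapshotTs >= commitTs ? rowCount : 0;
    }

    public static void main(String[] args) {
        int rows = 260148;   // row count from the report
        long commitTs = 100; // hypothetical commit timestamp
        System.out.println(visibleCount(rows, commitTs, 99));  // snapshot before commit
        System.out.println(visibleCount(rows, commitTs, 101)); // snapshot after commit
    }
}
```

The intermediate counts in the report (259884, 260145) are exactly what this model forbids, which is why the issue is a correctness bug rather than a performance one.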
[jira] [Commented] (OMID-240) Transactional visibility is broken
[ https://issues.apache.org/jira/browse/OMID-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17791713#comment-17791713 ] ASF GitHub Bot commented on OMID-240:
- chrajeshbabu commented on code in PR #149:
URL: https://github.com/apache/phoenix-omid/pull/149#discussion_r1410970001

## tso-server/bin/omid-env.sh:
## @@ -22,6 +22,14 @@
 # Check if HADOOP_CONF_DIR and HBASE_CONF_DIR are set
 # -
-if [ -z ${HADOOP_CONF_DIR+x} ]; then echo "WARNING: HADOOP_CONF_DIR is unset"; else echo "HADOOP_CONF_DIR is set to '$HADOOP_CONF_DIR'"; fi
-if [ -z ${HBASE_CONF_DIR+x} ]; then echo "WARNING: HBASE_CONF_DIR is unset"; else echo "HBASE_CONF_DIR is set to '$HBASE_CONF_DIR'"; fi
+if [ -z ${HADOOP_CONF_DIR+x} ];
+  then echo "WARNING: HADOOP_CONF_DIR is unset";
+  HADOOP_CONF_DIR=/etc/hadoop/conf

Review Comment: @stoty handled the review comments in the new commit. Thanks
> Transactional visibility is broken
> --
>
> Key: OMID-240
> URL: https://issues.apache.org/jira/browse/OMID-240
> Project: Phoenix Omid
> Issue Type: Bug
> Affects Versions: 1.1.0
> Reporter: Lars Hofhansl
> Assignee: Rajeshbabu Chintaguntla
> Priority: Critical
> Attachments: hbase-omid-client-config.yml, omid-server-configuration.yml
>
>
> Client I:
> {code:java}
> > create table test(x float primary key, y float) DISABLE_WAL=true, TRANSACTIONAL=true;
> No rows affected (1.872 seconds)
> > !autocommit off
> Autocommit status: false
> > upsert into test values(rand(), rand());
> 1 row affected (0.018 seconds)
> > upsert into test select rand(), rand() from test;
> -- 18-20x
> > !commit{code}
>
> Client II:
> {code:java}
> -- repeat quickly after the commit on client I
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 0 |
> +--+
> 1 row selected (1.408 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 259884 |
> +--+
> 1 row selected (2.959 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260145 |
> +--+
> 1 row selected (4.274 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148 |
> +--+
> 1 row selected (5.563 seconds)
> > select count(*) from test;
> +--+
> | COUNT(1) |
> +--+
> | 260148 |
> +--+
> 1 row selected (5.573 seconds){code}
> The second client should either show 0 or 260148. But no other value!
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (PHOENIX-6939) Change phoenix-hive connector shading to work with hbase-shaded-mapreduce
[ https://issues.apache.org/jira/browse/PHOENIX-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth resolved PHOENIX-6939.
-- Fix Version/s: connectors-6.0.0
Resolution: Fixed
Committed to master. Thanks for the review [~richardantal].
> Change phoenix-hive connector shading to work with hbase-shaded-mapreduce
> -
>
> Key: PHOENIX-6939
> URL: https://issues.apache.org/jira/browse/PHOENIX-6939
> Project: Phoenix
> Issue Type: Improvement
> Components: connectors, hive-connector
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
> Fix For: connectors-6.0.0
>
>
> The Hive 3 HBase classpath is a huge mess, and as a result, we need to replace the HBase jars in Hive to ever have a chance of working.
> Provide a shaded phoenix-hive connector JAR that uses the existing hbase-shaded-mapreduce JARs added to the Hive classpath.
> This is the same shading needed by Hive 4 (which requires some more API changes).
-- This message was sent by Atlassian Jira (v8.20.10#820010)