[jira] [Updated] (HIVE-27859) Backport HIVE-27817: Disable ssl hostname verification for 127.0.0.1
[ https://issues.apache.org/jira/browse/HIVE-27859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-27859:
-------------------------------
    Affects Version/s: 2.3.9
                       (was: 2.3.8)

> Backport HIVE-27817: Disable ssl hostname verification for 127.0.0.1
> ---------------------------------------------------------------------
>
>                 Key: HIVE-27859
>                 URL: https://issues.apache.org/jira/browse/HIVE-27859
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 2.3.9
>            Reporter: Yuming Wang
>            Priority: Major
>              Labels: pull-request-available
>

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-27859) Backport HIVE-27817: Disable ssl hostname verification for 127.0.0.1
Yuming Wang created HIVE-27859:
----------------------------------

             Summary: Backport HIVE-27817: Disable ssl hostname verification for 127.0.0.1
                 Key: HIVE-27859
                 URL: https://issues.apache.org/jira/browse/HIVE-27859
             Project: Hive
          Issue Type: Improvement
    Affects Versions: 2.3.8
            Reporter: Yuming Wang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (HIVE-27818) Fix compilation failure in AccumuloPredicateHandler
[ https://issues.apache.org/jira/browse/HIVE-27818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-27818:
-------------------------------
    Description:
{noformat}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hive-accumulo-handler: Compilation failure
[ERROR] /Users/yumwang/opensource/hive/accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/predicate/AccumuloPredicateHandler.java:[263,23] unreported exception org.apache.hadoop.hive.ql.metadata.HiveException; must be caught or declared to be thrown
[ERROR]
{noformat}
{noformat}
yumwang@G9L07H60PK hive % java -version
openjdk version "1.8.0_382"
OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-macos-aarch64) (build 1.8.0_382-b05)
OpenJDK 64-Bit Server VM (Zulu 8.72.0.17-CA-macos-aarch64) (build 25.382-b05, mixed mode)
yumwang@G9L07H60PK hive % mvn -version
Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
Maven home: /Users/yumwang/software/apache-maven-3.8.8
Java version: 1.8.0_382, vendor: Azul Systems, Inc., runtime: /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre
Default locale: en_CN, platform encoding: UTF-8
OS name: "mac os x", version: "13.6", arch: "aarch64", family: "mac"
{noformat}

  was:
{noformat}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hive-accumulo-handler: Compilation failure
[ERROR] /Users/yumwang/opensource/hive/accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/predicate/AccumuloPredicateHandler.java:[263,23] unreported exception org.apache.hadoop.hive.ql.metadata.HiveException; must be caught or declared to be thrown
[ERROR]
{noformat}

> Fix compilation failure in AccumuloPredicateHandler
> ---------------------------------------------------
>
>                 Key: HIVE-27818
>                 URL: https://issues.apache.org/jira/browse/HIVE-27818
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Yuming Wang
>            Priority: Major
>
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hive-accumulo-handler: Compilation failure
> [ERROR] /Users/yumwang/opensource/hive/accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/predicate/AccumuloPredicateHandler.java:[263,23] unreported exception org.apache.hadoop.hive.ql.metadata.HiveException; must be caught or declared to be thrown
> [ERROR]
> {noformat}
> {noformat}
> yumwang@G9L07H60PK hive % java -version
> openjdk version "1.8.0_382"
> OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-macos-aarch64) (build 1.8.0_382-b05)
> OpenJDK 64-Bit Server VM (Zulu 8.72.0.17-CA-macos-aarch64) (build 25.382-b05, mixed mode)
> yumwang@G9L07H60PK hive % mvn -version
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: /Users/yumwang/software/apache-maven-3.8.8
> Java version: 1.8.0_382, vendor: Azul Systems, Inc., runtime: /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre
> Default locale: en_CN, platform encoding: UTF-8
> OS name: "mac os x", version: "13.6", arch: "aarch64", family: "mac"
> {noformat}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
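The compiler error above is javac's standard complaint when a call site invokes a method that declares a checked exception without handling it. The sketch below illustrates the two standard fixes; `CheckedDemoException` is a hypothetical stand-in for `org.apache.hadoop.hive.ql.metadata.HiveException`, since the actual patch to `AccumuloPredicateHandler` is not shown in this thread:

```java
// Sketch of the two fixes for "unreported exception X; must be caught or
// declared to be thrown". CheckedDemoException stands in for HiveException.
public class UnreportedExceptionDemo {

  static class CheckedDemoException extends Exception {
    CheckedDemoException(String msg) { super(msg); }
  }

  // A callee that declares a checked exception, like the method called
  // at AccumuloPredicateHandler.java line 263.
  static String riskyLookup(boolean fail) throws CheckedDemoException {
    if (fail) {
      throw new CheckedDemoException("lookup failed");
    }
    return "ok";
  }

  // Fix 1: declare the exception and propagate it to callers.
  static String declaringCaller() throws CheckedDemoException {
    return riskyLookup(false);
  }

  // Fix 2: catch the exception and translate or recover locally.
  static String catchingCaller(boolean fail) {
    try {
      return riskyLookup(fail);
    } catch (CheckedDemoException e) {
      return "fallback: " + e.getMessage();
    }
  }

  public static void main(String[] args) throws CheckedDemoException {
    System.out.println(declaringCaller());
    System.out.println(catchingCaller(true));
  }
}
```

Which fix is appropriate depends on whether callers of the method can meaningfully handle the failure; propagating (fix 1) changes the method's signature, while catching (fix 2) keeps it stable.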
[jira] [Created] (HIVE-27818) Fix compilation failure in AccumuloPredicateHandler
Yuming Wang created HIVE-27818:
----------------------------------

             Summary: Fix compilation failure in AccumuloPredicateHandler
                 Key: HIVE-27818
                 URL: https://issues.apache.org/jira/browse/HIVE-27818
             Project: Hive
          Issue Type: Bug
            Reporter: Yuming Wang


{noformat}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hive-accumulo-handler: Compilation failure
[ERROR] /Users/yumwang/opensource/hive/accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/predicate/AccumuloPredicateHandler.java:[263,23] unreported exception org.apache.hadoop.hive.ql.metadata.HiveException; must be caught or declared to be thrown
[ERROR]
{noformat}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (HIVE-27817) Disable ssl hostname verification for 127.0.0.1
[ https://issues.apache.org/jira/browse/HIVE-27817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-27817:
-------------------------------
    Description:
{code:java}
diff --git a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
index e12f245871..632980e7cd 100644
--- a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
+++ b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
@@ -71,7 +71,11 @@ public static TTransport getSSLSocket(String host, int port, int loginTimeout,
   private static TSocket getSSLSocketWithHttps(TSocket tSSLSocket) throws TTransportException {
     SSLSocket sslSocket = (SSLSocket) tSSLSocket.getSocket();
     SSLParameters sslParams = sslSocket.getSSLParameters();
-    sslParams.setEndpointIdentificationAlgorithm("HTTPS");
+    if (sslSocket.getLocalAddress().getHostAddress().equals("127.0.0.1")) {
+      sslParams.setEndpointIdentificationAlgorithm(null);
+    } else {
+      sslParams.setEndpointIdentificationAlgorithm("HTTPS");
+    }
     sslSocket.setSSLParameters(sslParams);
     return new TSocket(sslSocket);
   }
{code}

  was:
{code:diff}
diff --git a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
index e12f245871..632980e7cd 100644
--- a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
+++ b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
@@ -71,7 +71,11 @@ public static TTransport getSSLSocket(String host, int port, int loginTimeout,
   private static TSocket getSSLSocketWithHttps(TSocket tSSLSocket) throws TTransportException {
     SSLSocket sslSocket = (SSLSocket) tSSLSocket.getSocket();
     SSLParameters sslParams = sslSocket.getSSLParameters();
-    sslParams.setEndpointIdentificationAlgorithm("HTTPS");
+    if (sslSocket.getLocalAddress().getHostAddress().equals("127.0.0.1")) {
+      sslParams.setEndpointIdentificationAlgorithm(null);
+    } else {
+      sslParams.setEndpointIdentificationAlgorithm("HTTPS");
+    }
     sslSocket.setSSLParameters(sslParams);
     return new TSocket(sslSocket);
   }
{code}

> Disable ssl hostname verification for 127.0.0.1
> -----------------------------------------------
>
>                 Key: HIVE-27817
>                 URL: https://issues.apache.org/jira/browse/HIVE-27817
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 2.3.0
>            Reporter: Yuming Wang
>            Priority: Major
>
> {code:java}
> diff --git a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
> index e12f245871..632980e7cd 100644
> --- a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
> +++ b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
> @@ -71,7 +71,11 @@ public static TTransport getSSLSocket(String host, int port, int loginTimeout,
>    private static TSocket getSSLSocketWithHttps(TSocket tSSLSocket) throws TTransportException {
>      SSLSocket sslSocket = (SSLSocket) tSSLSocket.getSocket();
>      SSLParameters sslParams = sslSocket.getSSLParameters();
> -    sslParams.setEndpointIdentificationAlgorithm("HTTPS");
> +    if (sslSocket.getLocalAddress().getHostAddress().equals("127.0.0.1")) {
> +      sslParams.setEndpointIdentificationAlgorithm(null);
> +    } else {
> +      sslParams.setEndpointIdentificationAlgorithm("HTTPS");
> +    }
>      sslSocket.setSSLParameters(sslParams);
>      return new TSocket(sslSocket);
>    }
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (HIVE-27817) Disable ssl hostname verification for 127.0.0.1
[ https://issues.apache.org/jira/browse/HIVE-27817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-27817:
-------------------------------
    Description:
{code:diff}
diff --git a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
index e12f245871..632980e7cd 100644
--- a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
+++ b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
@@ -71,7 +71,11 @@ public static TTransport getSSLSocket(String host, int port, int loginTimeout,
   private static TSocket getSSLSocketWithHttps(TSocket tSSLSocket) throws TTransportException {
     SSLSocket sslSocket = (SSLSocket) tSSLSocket.getSocket();
     SSLParameters sslParams = sslSocket.getSSLParameters();
-    sslParams.setEndpointIdentificationAlgorithm("HTTPS");
+    if (sslSocket.getLocalAddress().getHostAddress().equals("127.0.0.1")) {
+      sslParams.setEndpointIdentificationAlgorithm(null);
+    } else {
+      sslParams.setEndpointIdentificationAlgorithm("HTTPS");
+    }
     sslSocket.setSSLParameters(sslParams);
     return new TSocket(sslSocket);
   }
{code}

> Disable ssl hostname verification for 127.0.0.1
> -----------------------------------------------
>
>                 Key: HIVE-27817
>                 URL: https://issues.apache.org/jira/browse/HIVE-27817
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 2.3.0
>            Reporter: Yuming Wang
>            Priority: Major
>
> {code:diff}
> diff --git a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
> index e12f245871..632980e7cd 100644
> --- a/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
> +++ b/common/src/java/org/apache/hadoop/hive/common/auth/HiveAuthUtils.java
> @@ -71,7 +71,11 @@ public static TTransport getSSLSocket(String host, int port, int loginTimeout,
>    private static TSocket getSSLSocketWithHttps(TSocket tSSLSocket) throws TTransportException {
>      SSLSocket sslSocket = (SSLSocket) tSSLSocket.getSocket();
>      SSLParameters sslParams = sslSocket.getSSLParameters();
> -    sslParams.setEndpointIdentificationAlgorithm("HTTPS");
> +    if (sslSocket.getLocalAddress().getHostAddress().equals("127.0.0.1")) {
> +      sslParams.setEndpointIdentificationAlgorithm(null);
> +    } else {
> +      sslParams.setEndpointIdentificationAlgorithm("HTTPS");
> +    }
>      sslSocket.setSSLParameters(sslParams);
>      return new TSocket(sslSocket);
>    }
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
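The patch above hinges on a single decision: when the socket's local address is the loopback address 127.0.0.1 (i.e., a local connection), endpoint identification is switched off by passing `null` to `SSLParameters.setEndpointIdentificationAlgorithm`; otherwise RFC 2818 "HTTPS" hostname checks stay on. A minimal sketch of that branch, extracted into a hypothetical pure helper so it can be exercised without opening a real SSL socket:

```java
// Hypothetical extraction of the HIVE-27817 branch: decide which endpoint-
// identification algorithm to apply for a given host address. In the real
// patch the result is set on javax.net.ssl.SSLParameters via
// setEndpointIdentificationAlgorithm, where null disables hostname
// verification and "HTTPS" enables RFC 2818 checks.
public class EndpointIdentificationPolicy {

  static String algorithmFor(String hostAddress) {
    // Loopback traffic cannot usefully match a certificate hostname,
    // so hostname verification is skipped only for 127.0.0.1.
    return "127.0.0.1".equals(hostAddress) ? null : "HTTPS";
  }

  public static void main(String[] args) {
    System.out.println(algorithmFor("127.0.0.1")); // verification disabled
    System.out.println(algorithmFor("10.0.0.5"));  // verification enabled
  }
}
```

Note the narrow scope of the exemption: only the exact string `127.0.0.1` is matched, so `localhost`, `::1`, and other loopback spellings still get full hostname verification.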
[jira] [Created] (HIVE-27817) Disable ssl hostname verification for 127.0.0.1
Yuming Wang created HIVE-27817:
----------------------------------

             Summary: Disable ssl hostname verification for 127.0.0.1
                 Key: HIVE-27817
                 URL: https://issues.apache.org/jira/browse/HIVE-27817
             Project: Hive
          Issue Type: Improvement
          Components: Hive
    Affects Versions: 4.0.0-beta-1
            Reporter: Yuming Wang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (HIVE-27817) Disable ssl hostname verification for 127.0.0.1
[ https://issues.apache.org/jira/browse/HIVE-27817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-27817:
-------------------------------
    Affects Version/s: 2.3.0
                       (was: 4.0.0-beta-1)

> Disable ssl hostname verification for 127.0.0.1
> -----------------------------------------------
>
>                 Key: HIVE-27817
>                 URL: https://issues.apache.org/jira/browse/HIVE-27817
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 2.3.0
>            Reporter: Yuming Wang
>            Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-27815) Support collect numModifiedRows
Yuming Wang created HIVE-27815:
----------------------------------

             Summary: Support collect numModifiedRows
                 Key: HIVE-27815
                 URL: https://issues.apache.org/jira/browse/HIVE-27815
             Project: Hive
          Issue Type: Improvement
    Affects Versions: 2.3.8
            Reporter: Yuming Wang


Backport part of HIVE-14388.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (HIVE-27665) Change Filter Parser on HMS to allow backticks
[ https://issues.apache.org/jira/browse/HIVE-27665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764944#comment-17764944 ]

Yuming Wang commented on HIVE-27665:
------------------------------------

PR: https://github.com/apache/hive/pull/4667

> Change Filter Parser on HMS to allow backticks
> ----------------------------------------------
>
>                 Key: HIVE-27665
>                 URL: https://issues.apache.org/jira/browse/HIVE-27665
>             Project: Hive
>          Issue Type: Improvement
>          Components: Standalone Metastore
>            Reporter: Steve Carlin
>            Assignee: Steve Carlin
>            Priority: Major
>
> The PartitionFilter parser on HMS does not allow backticks. This is currently causing problems for a customer that has a column name of 'date', which is a keyword.
> There is more work to be done if we want the HS2 client to support filters with backticked columns, but that will be done in a different Jira.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (HIVE-27667) Fix get partitions with max_parts
[ https://issues.apache.org/jira/browse/HIVE-27667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761512#comment-17761512 ]

Yuming Wang commented on HIVE-27667:
------------------------------------

https://github.com/apache/hive/pull/4662

> Fix get partitions with max_parts
> ---------------------------------
>
>                 Key: HIVE-27667
>                 URL: https://issues.apache.org/jira/browse/HIVE-27667
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 4.0.0-beta-1
>            Reporter: Yuming Wang
>            Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-27667) Fix get partitions with max_parts
Yuming Wang created HIVE-27667:
----------------------------------

             Summary: Fix get partitions with max_parts
                 Key: HIVE-27667
                 URL: https://issues.apache.org/jira/browse/HIVE-27667
             Project: Hive
          Issue Type: Bug
          Components: Metastore
    Affects Versions: 4.0.0-beta-1
            Reporter: Yuming Wang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-27660) Update some test results for branch-2.3
Yuming Wang created HIVE-27660:
----------------------------------

             Summary: Update some test results for branch-2.3
                 Key: HIVE-27660
                 URL: https://issues.apache.org/jira/browse/HIVE-27660
             Project: Hive
          Issue Type: Test
          Components: Test
    Affects Versions: 2.3.10
            Reporter: Yuming Wang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-27659) Make partition order configurable if we are not returning all partitions
Yuming Wang created HIVE-27659:
----------------------------------

             Summary: Make partition order configurable if we are not returning all partitions
                 Key: HIVE-27659
                 URL: https://issues.apache.org/jira/browse/HIVE-27659
             Project: Hive
          Issue Type: Improvement
          Components: Metastore
    Affects Versions: 4.0.0-beta-1
            Reporter: Yuming Wang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (HIVE-27581) Backport jackson upgrade related patch to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-27581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-27581:
-------------------------------
    Description:
2.9.4 -> 2.9.5: https://github.com/apache/hive/commit/33e208c0709fac5bd6380aacfba49448412d112b
2.9.5 -> 2.9.8: https://github.com/apache/hive/commit/2fa22bf360898dc8fd1408bfcc96e1c6aeaf9a53
2.9.8 -> 2.9.9: https://github.com/apache/hive/commit/7fc5a88a149cf0767a5846cbb6ace22d8e99a63c
2.9.9 -> 2.10.0: https://github.com/apache/hive/commit/31935896a78f95ae0792ae7f29960d1b604fbe9d
2.10.0 -> 2.10.5: https://github.com/apache/hive/commit/aa5b6b7968d90d027c5336bf430719acbff70f68
2.10.5 -> 2.12.0: https://github.com/apache/hive/commit/1e8cc12f2d60973b7674813ae82c8f3372423d54
---
2.12.0 -> 2.12.7: https://github.com/apache/hive/commit/568ded4b22a020f4d2d3567f15b287b25a3f2b71
2.12.7 -> 2.13.5: https://github.com/apache/hive/commit/8236426ed7aa87430e82d47effe946e38fa1f7f2

  was:
2.9.4 -> 2.9.5: https://github.com/apache/hive/commit/33e208c0709fac5bd6380aacfba49448412d112b
2.9.5 -> 2.9.8: https://github.com/apache/hive/commit/2fa22bf360898dc8fd1408bfcc96e1c6aeaf9a53
2.9.8 -> 2.9.9: https://github.com/apache/hive/commit/7fc5a88a149cf0767a5846cbb6ace22d8e99a63c
2.9.9 -> 2.10.0: https://github.com/apache/hive/commit/31935896a78f95ae0792ae7f29960d1b604fbe9d
2.10.0 -> 2.10.5: https://github.com/apache/hive/commit/aa5b6b7968d90d027c5336bf430719acbff70f68
2.10.5 -> 2.12.0: https://github.com/apache/hive/commit/1e8cc12f2d60973b7674813ae82c8f3372423d54
2.12.0 -> 2.12.7: https://github.com/apache/hive/commit/568ded4b22a020f4d2d3567f15b287b25a3f2b71
2.12.7 -> 2.13.5: https://github.com/apache/hive/commit/8236426ed7aa87430e82d47effe946e38fa1f7f2

> Backport jackson upgrade related patch to branch-2.3
> ----------------------------------------------------
>
>                 Key: HIVE-27581
>                 URL: https://issues.apache.org/jira/browse/HIVE-27581
>             Project: Hive
>          Issue Type: Task
>            Reporter: Yuming Wang
>            Priority: Major
>
> 2.9.4 -> 2.9.5: https://github.com/apache/hive/commit/33e208c0709fac5bd6380aacfba49448412d112b
> 2.9.5 -> 2.9.8: https://github.com/apache/hive/commit/2fa22bf360898dc8fd1408bfcc96e1c6aeaf9a53
> 2.9.8 -> 2.9.9: https://github.com/apache/hive/commit/7fc5a88a149cf0767a5846cbb6ace22d8e99a63c
> 2.9.9 -> 2.10.0: https://github.com/apache/hive/commit/31935896a78f95ae0792ae7f29960d1b604fbe9d
> 2.10.0 -> 2.10.5: https://github.com/apache/hive/commit/aa5b6b7968d90d027c5336bf430719acbff70f68
> 2.10.5 -> 2.12.0: https://github.com/apache/hive/commit/1e8cc12f2d60973b7674813ae82c8f3372423d54
> ---
> 2.12.0 -> 2.12.7: https://github.com/apache/hive/commit/568ded4b22a020f4d2d3567f15b287b25a3f2b71
> 2.12.7 -> 2.13.5: https://github.com/apache/hive/commit/8236426ed7aa87430e82d47effe946e38fa1f7f2

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (HIVE-27581) Backport jackson upgrade related patch to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-27581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-27581:
-------------------------------
    Description:
2.9.4 -> 2.9.5: https://github.com/apache/hive/commit/33e208c0709fac5bd6380aacfba49448412d112b
2.9.5 -> 2.9.8: https://github.com/apache/hive/commit/2fa22bf360898dc8fd1408bfcc96e1c6aeaf9a53
2.9.8 -> 2.9.9: https://github.com/apache/hive/commit/7fc5a88a149cf0767a5846cbb6ace22d8e99a63c
2.9.9 -> 2.10.0: https://github.com/apache/hive/commit/31935896a78f95ae0792ae7f29960d1b604fbe9d
2.10.0 -> 2.10.5: https://github.com/apache/hive/commit/aa5b6b7968d90d027c5336bf430719acbff70f68
2.10.5 -> 2.12.0: https://github.com/apache/hive/commit/1e8cc12f2d60973b7674813ae82c8f3372423d54
2.12.0 -> 2.12.7: https://github.com/apache/hive/commit/568ded4b22a020f4d2d3567f15b287b25a3f2b71
2.12.7 -> 2.13.5: https://github.com/apache/hive/commit/8236426ed7aa87430e82d47effe946e38fa1f7f2

> Backport jackson upgrade related patch to branch-2.3
> ----------------------------------------------------
>
>                 Key: HIVE-27581
>                 URL: https://issues.apache.org/jira/browse/HIVE-27581
>             Project: Hive
>          Issue Type: Task
>            Reporter: Yuming Wang
>            Priority: Major
>
> 2.9.4 -> 2.9.5: https://github.com/apache/hive/commit/33e208c0709fac5bd6380aacfba49448412d112b
> 2.9.5 -> 2.9.8: https://github.com/apache/hive/commit/2fa22bf360898dc8fd1408bfcc96e1c6aeaf9a53
> 2.9.8 -> 2.9.9: https://github.com/apache/hive/commit/7fc5a88a149cf0767a5846cbb6ace22d8e99a63c
> 2.9.9 -> 2.10.0: https://github.com/apache/hive/commit/31935896a78f95ae0792ae7f29960d1b604fbe9d
> 2.10.0 -> 2.10.5: https://github.com/apache/hive/commit/aa5b6b7968d90d027c5336bf430719acbff70f68
> 2.10.5 -> 2.12.0: https://github.com/apache/hive/commit/1e8cc12f2d60973b7674813ae82c8f3372423d54
> 2.12.0 -> 2.12.7: https://github.com/apache/hive/commit/568ded4b22a020f4d2d3567f15b287b25a3f2b71
> 2.12.7 -> 2.13.5: https://github.com/apache/hive/commit/8236426ed7aa87430e82d47effe946e38fa1f7f2

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-27581) Backport jackson upgrade related patch to branch-2.3
Yuming Wang created HIVE-27581:
----------------------------------

             Summary: Backport jackson upgrade related patch to branch-2.3
                 Key: HIVE-27581
                 URL: https://issues.apache.org/jira/browse/HIVE-27581
             Project: Hive
          Issue Type: Task
            Reporter: Yuming Wang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-27580) Backport HIVE-20071: Migrate to jackson 2.x and prevent usage
Yuming Wang created HIVE-27580:
----------------------------------

             Summary: Backport HIVE-20071: Migrate to jackson 2.x and prevent usage
                 Key: HIVE-27580
                 URL: https://issues.apache.org/jira/browse/HIVE-27580
             Project: Hive
          Issue Type: Task
            Reporter: Yuming Wang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-27579) Backport HIVE-18433: Upgrade version of com.fasterxml.jackson
Yuming Wang created HIVE-27579:
----------------------------------

             Summary: Backport HIVE-18433: Upgrade version of com.fasterxml.jackson
                 Key: HIVE-27579
                 URL: https://issues.apache.org/jira/browse/HIVE-27579
             Project: Hive
          Issue Type: Task
            Reporter: Yuming Wang


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (HIVE-27176) EXPLAIN SKEW
[ https://issues.apache.org/jira/browse/HIVE-27176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17706227#comment-17706227 ]

Yuming Wang commented on HIVE-27176:
------------------------------------

+1. Our internal Spark also supports a similar feature: https://issues.apache.org/jira/browse/SPARK-35837

> EXPLAIN SKEW
> ------------
>
>                 Key: HIVE-27176
>                 URL: https://issues.apache.org/jira/browse/HIVE-27176
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: László Bodor
>            Priority: Major
>
> Thinking about a new explain feature, which is actually not an explain but a set of analytical queries. Consider a very complicated and large SQL statement (this one below is simple, just for example's sake):
> {code}
> SELECT a FROM (SELECT b ... JOIN c on b.x = c.y) d JOIN e ON d.v = e.w
> {code}
> EXPLAIN SKEW under the hood should run a query like:
> {code}
> SELECT "b", "x", x, count (distinct b.x) as count order by count desc limit 50
> UNION ALL
> SELECT "c", "y", y, count (distinct c.y) as count order by count desc limit 50
> UNION ALL
> SELECT "d", "v", v, count (distinct d.v) as count order by count desc limit 50
> UNION ALL
> SELECT "e", "w", w, count (distinct e.w) as count order by count desc limit 50
> {code}
> collecting some cardinality info about all the join columns found in the query, so the result might look like:
> {code}
> table_name column_name column_value count
> b "x" x_skew_value1 100431234
> b "x" x_skew_value2 234
> c "y" y_skew_value1 35
> c "y" x_skew_value2 45
> c "y" x_skew_value3 42
> ...
> {code}
> This doesn't solve the skew problem; instead it surfaces data skew immediately for further analysis. It also doesn't suffer from the incomplete-stats problem, as it really has to query data on the cluster.
> +1 thing to check: the reducer key is not always a join column, e.g. in case of PTF
> maybe we should make a plan, and simply iterate on all reduce sink keys instead of join columns

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
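The proposal above amounts to generating one top-N frequency probe per join column and combining them with UNION ALL. A hypothetical sketch of that generation step follows; the class and method names are illustrative, not part of Hive, and the per-value frequency is computed with GROUP BY plus count(*) (a count(distinct col) alone would collapse to a single row rather than rank individual values):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the SQL generation an EXPLAIN SKEW implementation might do:
// for each (table, column) join-key pair found in the plan, emit one
// top-N value-frequency probe, then UNION ALL the probes together.
public class SkewProbeBuilder {

  // One probe: the N most frequent values of `column` in `table`.
  static String probeFor(String table, String column, int topN) {
    return "SELECT '" + table + "' AS table_name, '" + column + "' AS column_name, "
        + column + " AS column_value, count(*) AS cnt FROM " + table
        + " GROUP BY " + column + " ORDER BY cnt DESC LIMIT " + topN;
  }

  // Combine probes; each branch is parenthesized so its ORDER BY/LIMIT
  // binds to that branch rather than to the whole union.
  static String skewProbe(List<String[]> joinColumns, int topN) {
    List<String> probes = new ArrayList<>();
    for (String[] tc : joinColumns) {
      probes.add("(" + probeFor(tc[0], tc[1], topN) + ")");
    }
    return String.join("\nUNION ALL\n", probes);
  }

  public static void main(String[] args) {
    List<String[]> cols = new ArrayList<>();
    cols.add(new String[] {"b", "x"});
    cols.add(new String[] {"c", "y"});
    System.out.println(skewProbe(cols, 50));
  }
}
```

A real implementation would take the (table, column) pairs from the optimized plan's reduce sink keys, as the issue suggests, rather than from a hand-built list.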
[jira] [Resolved] (HIVE-14112) Join a HBase mapped big table shouldn't convert to MapJoin
[ https://issues.apache.org/jira/browse/HIVE-14112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang resolved HIVE-14112.
--------------------------------
    Resolution: Won't Fix

> Join a HBase mapped big table shouldn't convert to MapJoin
> ----------------------------------------------------------
>
>                 Key: HIVE-14112
>                 URL: https://issues.apache.org/jira/browse/HIVE-14112
>             Project: Hive
>          Issue Type: Bug
>          Components: StorageHandler
>    Affects Versions: 1.1.0, 1.2.0
>            Reporter: Yuming Wang
>            Assignee: Yuming Wang
>            Priority: Minor
>         Attachments: HIVE-14112.1.patch
>
> Two tables; {{hbasetable_risk_control_defense_idx_uid}} is an HBase mapped table:
> {noformat}
> [root@dev01 ~]# hadoop fs -du -s -h /hbase/data/tandem/hbase-table-risk-control-defense-idx-uid
> 3.0 G 9.0 G /hbase/data/tandem/hbase-table-risk-control-defense-idx-uid
> [root@dev01 ~]# hadoop fs -du -s -h /user/hive/warehouse/openapi_invoke_base
> 6.6 G 19.7 G /user/hive/warehouse/openapi_invoke_base
> {noformat}
> The smaller table is 3.0 G, which is greater than _hive.mapjoin.smalltable.filesize_ and _hive.auto.convert.join.noconditionaltask.size_. Yet when joining these tables, Hive auto-converts the join to a map join:
> {noformat}
> hive> select count(*) from hbasetable_risk_control_defense_idx_uid t1 join openapi_invoke_base t2 on (t1.key=t2.merchantid);
> Query ID = root_2016062809_9f9d3f25-857b-412c-8a75-3d9228bd5ee5
> Total jobs = 1
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
> Execution log at: /tmp/root/root_2016062809_9f9d3f25-857b-412c-8a75-3d9228bd5ee5.log
> 2016-06-28 09:22:10 Starting to launch local task to process map join; maximum memory = 1908932608
> {noformat}
> The root cause is that Hive uses {{/user/hive/warehouse/hbasetable_risk_control_defense_idx_uid}} as the table's location, but that directory is empty, so Hive auto-converts the join to a map join.
> My opinion is to set the right location when mapping an HBase table.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (HIVE-25996) Backport HIVE-25098
[ https://issues.apache.org/jira/browse/HIVE-25996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-25996:
-------------------------------
    Summary: Backport HIVE-25098  (was: Backport HIVE-21498 and HIVE-25098 to fix CVE-2020-13949)

> Backport HIVE-25098
> -------------------
>
>                 Key: HIVE-25996
>                 URL: https://issues.apache.org/jira/browse/HIVE-25996
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 2.3.9
>            Reporter: Yuming Wang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
[jira] [Updated] (HIVE-25869) Add GitHub Action job to publish snapshot
[ https://issues.apache.org/jira/browse/HIVE-25869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-25869:
-------------------------------
    Description:
Publish Hive snapshots: https://repository.apache.org/content/repositories/snapshots/org/apache/hive/

> Add GitHub Action job to publish snapshot
> -----------------------------------------
>
>                 Key: HIVE-25869
>                 URL: https://issues.apache.org/jira/browse/HIVE-25869
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Yuming Wang
>            Priority: Major
>
> Publish Hive snapshots: https://repository.apache.org/content/repositories/snapshots/org/apache/hive/

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
[jira] [Commented] (HIVE-25855) Make a branch-3 release
[ https://issues.apache.org/jira/browse/HIVE-25855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17472400#comment-17472400 ]

Yuming Wang commented on HIVE-25855:
------------------------------------

+1 for 3.2.0.

> Make a branch-3 release
> -----------------------
>
>                 Key: HIVE-25855
>                 URL: https://issues.apache.org/jira/browse/HIVE-25855
>             Project: Hive
>          Issue Type: Task
>          Components: Hive
>            Reporter: Naveen Gangam
>            Assignee: Naveen Gangam
>            Priority: Major
>
> This jira is to track commits for a hive release off branch-3

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
[jira] [Updated] (HIVE-25635) Upgrade Thrift to 0.15.0
[ https://issues.apache.org/jira/browse/HIVE-25635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-25635:
-------------------------------
    Description:
To address CVEs:
||Component Name||Component Version Name||Vulnerability||Fixed version||
|Apache Thrift|0.11.0-4.|[CVE-2020-13949|https://github.com/advisories/GHSA-g2fg-mr77-6vrm]|0.14.1|

> Upgrade Thrift to 0.15.0
> ------------------------
>
>                 Key: HIVE-25635
>                 URL: https://issues.apache.org/jira/browse/HIVE-25635
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Yuming Wang
>            Priority: Major
>
> To address CVEs:
> ||Component Name||Component Version Name||Vulnerability||Fixed version||
> |Apache Thrift|0.11.0-4.|[CVE-2020-13949|https://github.com/advisories/GHSA-g2fg-mr77-6vrm]|0.14.1|

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (HIVE-21521) Upgrade ORC to 1.5.5
[ https://issues.apache.org/jira/browse/HIVE-21521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-21521:
-------------------------------
    Resolution: Won't Fix
        Status: Resolved  (was: Patch Available)

> Upgrade ORC to 1.5.5
> --------------------
>
>                 Key: HIVE-21521
>                 URL: https://issues.apache.org/jira/browse/HIVE-21521
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 2.3.4
>            Reporter: Yuming Wang
>            Assignee: Yuming Wang
>            Priority: Major
>         Attachments: HIVE-21521.01.patch
>
> ORC 1.5.5 release notes:
> [https://issues.apache.org/jira/sr/jira.issueviews:searchrequest-printable/temp/SearchRequest.html?jqlQuery=project+%3D+ORC+AND+status+%3D+Closed+AND+fixVersion+%3D+%221.5.5%22=500]
> ORC-476 makes the SearchArgument kryo buffer size configurable, which can avoid Kryo buffer overflow ([more details|https://issues.apache.org/jira/browse/SPARK-27107]).

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (HIVE-24893) Download data from Thriftserver through JDBC
[ https://issues.apache.org/jira/browse/HIVE-24893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303181#comment-17303181 ]

Yuming Wang commented on HIVE-24893:
------------------------------------

We have implemented this feature. We can contribute it to the community if the community needs it.

> Download data from Thriftserver through JDBC
> --------------------------------------------
>
>                 Key: HIVE-24893
>                 URL: https://issues.apache.org/jira/browse/HIVE-24893
>             Project: Hive
>          Issue Type: New Feature
>          Components: HiveServer2, JDBC
>    Affects Versions: 4.0.0
>            Reporter: Yuming Wang
>            Priority: Major
>
> It is very useful to support downloading large amounts of data (such as more than 50GB) through JDBC.
> Snowflake has similar support:
> https://docs.snowflake.com/en/user-guide/jdbc-using.html#label-jdbc-download-from-stage-to-stream
> https://github.com/snowflakedb/snowflake-jdbc/blob/95a7d8a03316093430dc3960df6635643208b6fd/src/main/java/net/snowflake/client/jdbc/SnowflakeConnectionV1.java#L886

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (HIVE-24893) Download data from Thriftserver through JDBC
[ https://issues.apache.org/jira/browse/HIVE-24893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated HIVE-24893:
-------------------------------
    Description:
It is very useful to support downloading large amounts of data (such as more than 50GB) through JDBC.
Snowflake has similar support:
https://docs.snowflake.com/en/user-guide/jdbc-using.html#label-jdbc-download-from-stage-to-stream
https://github.com/snowflakedb/snowflake-jdbc/blob/95a7d8a03316093430dc3960df6635643208b6fd/src/main/java/net/snowflake/client/jdbc/SnowflakeConnectionV1.java#L886

  was:
Snowflake support Download Data Files Directly from an Internal Stage to a Stream:
https://docs.snowflake.com/en/user-guide/jdbc-using.html#label-jdbc-download-from-stage-to-stream
https://github.com/snowflakedb/snowflake-jdbc/blob/95a7d8a03316093430dc3960df6635643208b6fd/src/main/java/net/snowflake/client/jdbc/SnowflakeConnectionV1.java#L886

> Download data from Thriftserver through JDBC
> --------------------------------------------
>
>                 Key: HIVE-24893
>                 URL: https://issues.apache.org/jira/browse/HIVE-24893
>             Project: Hive
>          Issue Type: New Feature
>          Components: HiveServer2, JDBC
>    Affects Versions: 4.0.0
>            Reporter: Yuming Wang
>            Priority: Major
>
> It is very useful to support downloading large amounts of data (such as more than 50GB) through JDBC.
> Snowflake has similar support:
> https://docs.snowflake.com/en/user-guide/jdbc-using.html#label-jdbc-download-from-stage-to-stream
> https://github.com/snowflakedb/snowflake-jdbc/blob/95a7d8a03316093430dc3960df6635643208b6fd/src/main/java/net/snowflake/client/jdbc/SnowflakeConnectionV1.java#L886

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Comment Edited] (HIVE-24797) Disable validate default values when parsing Avro schemas
[ https://issues.apache.org/jira/browse/HIVE-24797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287446#comment-17287446 ] Yuming Wang edited comment on HIVE-24797 at 2/20/21, 1:14 AM: -- [~iemejia] Could you help verify other incompatible changes: [https://issues.apache.org/jira/issues/?jql=project%20%3D%20AVRO%20AND%20resolution%20in%20(Fixed)%20AND%20cf%5B12310191%5D%20%3D%20%22Incompatible%20change%22%20AND%20fixVersion%20in%20(1.9.0%2C%201.9.1%2C%201.10.0%2C%201.10.1)%20%20%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC] was (Author: q79969786): [~SteveNiemitz] Could you help verify other incompatible changes: [https://issues.apache.org/jira/issues/?jql=project%20%3D%20AVRO%20AND%20resolution%20in%20(Fixed)%20AND%20cf%5B12310191%5D%20%3D%20%22Incompatible%20change%22%20AND%20fixVersion%20in%20(1.9.0%2C%201.9.1%2C%201.10.0%2C%201.10.1)%20%20%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC] > Disable validate default values when parsing Avro schemas > - > > Key: HIVE-24797 > URL: https://issues.apache.org/jira/browse/HIVE-24797 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > It will throw exceptions when upgrading Avro to 1.10.1 for this schema: > {code:json} > { > "type": "record", > "name": "EventData", > "doc": "event data", > "fields": [ > {"name": "ARRAY_WITH_DEFAULT", "type": {"type": "array", "items": > "string"}, "default": null } > ] > } > {code} > {noformat} > org.apache.avro.AvroTypeException: Invalid default for field > ARRAY_WITH_DEFAULT: null not a {"type":"array","items":"string"} > at org.apache.avro.Schema.validateDefault(Schema.java:1571) > at org.apache.avro.Schema.access$500(Schema.java:87) > at org.apache.avro.Schema$Field.(Schema.java:544) > at org.apache.avro.Schema.parse(Schema.java:1678) > at org.apache.avro.Schema$Parser.parse(Schema.java:1425) > at 
org.apache.avro.Schema$Parser.parse(Schema.java:1396) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFor(AvroSerdeUtils.java:287) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFromFS(AvroSerdeUtils.java:170) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:139) > at > org.apache.hadoop.hive.serde2.avro.AvroSerDe.determineSchemaOrReturnErrorSchema(AvroSerDe.java:187) > at > org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:107) > at > org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:83) > at > org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:493) > at > org.apache.hadoop.hive.ql.metadata.Partition.getDeserializer(Partition.java:225) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24797) Disable validate default values when parsing Avro schemas
[ https://issues.apache.org/jira/browse/HIVE-24797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287446#comment-17287446 ] Yuming Wang commented on HIVE-24797: [~SteveNiemitz] Could you help verify other incompatible changes: [https://issues.apache.org/jira/issues/?jql=project%20%3D%20AVRO%20AND%20resolution%20in%20(Fixed)%20AND%20cf%5B12310191%5D%20%3D%20%22Incompatible%20change%22%20AND%20fixVersion%20in%20(1.9.0%2C%201.9.1%2C%201.10.0%2C%201.10.1)%20%20%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC] > Disable validate default values when parsing Avro schemas > - > > Key: HIVE-24797 > URL: https://issues.apache.org/jira/browse/HIVE-24797 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > It will throw exceptions when upgrading Avro to 1.10.1 for this schema: > {code:json} > { > "type": "record", > "name": "EventData", > "doc": "event data", > "fields": [ > {"name": "ARRAY_WITH_DEFAULT", "type": {"type": "array", "items": > "string"}, "default": null } > ] > } > {code} > {noformat} > org.apache.avro.AvroTypeException: Invalid default for field > ARRAY_WITH_DEFAULT: null not a {"type":"array","items":"string"} > at org.apache.avro.Schema.validateDefault(Schema.java:1571) > at org.apache.avro.Schema.access$500(Schema.java:87) > at org.apache.avro.Schema$Field.(Schema.java:544) > at org.apache.avro.Schema.parse(Schema.java:1678) > at org.apache.avro.Schema$Parser.parse(Schema.java:1425) > at org.apache.avro.Schema$Parser.parse(Schema.java:1396) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFor(AvroSerdeUtils.java:287) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFromFS(AvroSerdeUtils.java:170) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:139) > at > 
org.apache.hadoop.hive.serde2.avro.AvroSerDe.determineSchemaOrReturnErrorSchema(AvroSerDe.java:187) > at > org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:107) > at > org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:83) > at > org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:493) > at > org.apache.hadoop.hive.ql.metadata.Partition.getDeserializer(Partition.java:225) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24797) Disable validate default values when parsing Avro schemas
[ https://issues.apache.org/jira/browse/HIVE-24797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-24797: --- Description: It will throw exceptions when upgrading Avro to 1.10.1 for this schema: {code:json} { "type": "record", "name": "EventData", "doc": "event data", "fields": [ {"name": "ARRAY_WITH_DEFAULT", "type": {"type": "array", "items": "string"}, "default": null } ] } {code} {noformat} org.apache.avro.AvroTypeException: Invalid default for field ARRAY_WITH_DEFAULT: null not a {"type":"array","items":"string"} at org.apache.avro.Schema.validateDefault(Schema.java:1571) at org.apache.avro.Schema.access$500(Schema.java:87) at org.apache.avro.Schema$Field.(Schema.java:544) at org.apache.avro.Schema.parse(Schema.java:1678) at org.apache.avro.Schema$Parser.parse(Schema.java:1425) at org.apache.avro.Schema$Parser.parse(Schema.java:1396) at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFor(AvroSerdeUtils.java:287) at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFromFS(AvroSerdeUtils.java:170) at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:139) at org.apache.hadoop.hive.serde2.avro.AvroSerDe.determineSchemaOrReturnErrorSchema(AvroSerDe.java:187) at org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:107) at org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:83) at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533) at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:493) at org.apache.hadoop.hive.ql.metadata.Partition.getDeserializer(Partition.java:225) {noformat} was: It will throw exceptions when upgrading Avro to 1.10.1 for this schema: {code:json} { "type": "record", "name": "EventData", "doc": "event data", "fields": [ {"name": "ARRAY_WITH_DEFAULT", "type": {"type": "array", "items": "string"}, "default": null } ] } {code} {noformat} 
org.apache.avro.AvroTypeException: Invalid default for field USERACTIONS: null not a {"type":"array","items":"string"} at org.apache.avro.Schema.validateDefault(Schema.java:1571) at org.apache.avro.Schema.access$500(Schema.java:87) at org.apache.avro.Schema$Field.(Schema.java:544) at org.apache.avro.Schema.parse(Schema.java:1678) at org.apache.avro.Schema$Parser.parse(Schema.java:1425) at org.apache.avro.Schema$Parser.parse(Schema.java:1396) at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFor(AvroSerdeUtils.java:287) at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFromFS(AvroSerdeUtils.java:170) at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:139) at org.apache.hadoop.hive.serde2.avro.AvroSerDe.determineSchemaOrReturnErrorSchema(AvroSerDe.java:187) at org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:107) at org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:83) at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533) at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:493) at org.apache.hadoop.hive.ql.metadata.Partition.getDeserializer(Partition.java:225) {noformat} > Disable validate default values when parsing Avro schemas > - > > Key: HIVE-24797 > URL: https://issues.apache.org/jira/browse/HIVE-24797 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > It will throw exceptions when upgrading Avro to 1.10.1 for this schema: > {code:json} > { > "type": "record", > "name": "EventData", > "doc": "event data", > "fields": [ > {"name": "ARRAY_WITH_DEFAULT", "type": {"type": "array", "items": > "string"}, "default": null } > ] > } > {code} > {noformat} > org.apache.avro.AvroTypeException: Invalid default for field > ARRAY_WITH_DEFAULT: null not a {"type":"array","items":"string"} > at 
org.apache.avro.Schema.validateDefault(Schema.java:1571) > at org.apache.avro.Schema.access$500(Schema.java:87) > at org.apache.avro.Schema$Field.<init>(Schema.java:544) > at org.apache.avro.Schema.parse(Schema.java:1678) > at org.apache.avro.Schema$Parser.parse(Schema.java:1425) > at org.apache.avro.Schema$Parser.parse(Schema.java:1396) > at >
[jira] [Updated] (HIVE-24797) Disable validate default values when parsing Avro schemas
[ https://issues.apache.org/jira/browse/HIVE-24797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-24797: --- Summary: Disable validate default values when parsing Avro schemas (was: Disable validate default values when parsing Avro schemas.) > Disable validate default values when parsing Avro schemas > - > > Key: HIVE-24797 > URL: https://issues.apache.org/jira/browse/HIVE-24797 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Priority: Major > > It will throw exceptions when upgrading Avro to 1.10.1 for this schema: > {code:json} > { > "type": "record", > "name": "EventData", > "doc": "event data", > "fields": [ > {"name": "ARRAY_WITH_DEFAULT", "type": {"type": "array", "items": > "string"}, "default": null } > ] > } > {code} > {noformat} > org.apache.avro.AvroTypeException: Invalid default for field USERACTIONS: > null not a {"type":"array","items":"string"} > at org.apache.avro.Schema.validateDefault(Schema.java:1571) > at org.apache.avro.Schema.access$500(Schema.java:87) > at org.apache.avro.Schema$Field.(Schema.java:544) > at org.apache.avro.Schema.parse(Schema.java:1678) > at org.apache.avro.Schema$Parser.parse(Schema.java:1425) > at org.apache.avro.Schema$Parser.parse(Schema.java:1396) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFor(AvroSerdeUtils.java:287) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.getSchemaFromFS(AvroSerdeUtils.java:170) > at > org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:139) > at > org.apache.hadoop.hive.serde2.avro.AvroSerDe.determineSchemaOrReturnErrorSchema(AvroSerDe.java:187) > at > org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:107) > at > org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:83) > at > org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533) > at > 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:493) > at > org.apache.hadoop.hive.ql.metadata.Partition.getDeserializer(Partition.java:225) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
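The HIVE-24797 messages above are about Avro 1.10.x rejecting a {{null}} default on a non-nullable (array) field. The failing check, and the switch the fix relies on, can be sketched as follows; {{validate_default}} is an illustrative Python analogue, not the actual Avro or Hive code, and the {{validate}} flag mirrors the kind of toggle (e.g. Avro's {{Schema.Parser#setValidateDefaults(false)}}) that the title refers to:

```python
import json

def validate_default(field_type, default, validate=True):
    # Crude analogue of Avro's default-value check: a null default is only
    # legal when the type is "null" or a union whose first branch is "null".
    # validate=False disables the check entirely, as the HIVE-24797 fix does.
    if not validate:
        return True
    if default is None:
        if field_type == "null":
            return True
        if isinstance(field_type, list) and field_type and field_type[0] == "null":
            return True
        return False  # e.g. null default on {"type": "array", ...}
    return True  # non-null defaults: real Avro also checks shape against the type

schema = json.loads("""
{ "type": "record", "name": "EventData",
  "fields": [ {"name": "ARRAY_WITH_DEFAULT",
               "type": {"type": "array", "items": "string"},
               "default": null } ] }
""")
field = schema["fields"][0]
print(validate_default(field["type"], field.get("default")))         # False: 1.10.x rejects it
print(validate_default(field["type"], field.get("default"), False))  # True: check disabled
```

This shows why the schema in the issue parses under older Avro releases but throws {{AvroTypeException}} once default validation became strict.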
[jira] [Commented] (HIVE-9452) Use HBase to store Hive metadata
[ https://issues.apache.org/jira/browse/HIVE-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17273214#comment-17273214 ] Yuming Wang commented on HIVE-9452: --- [~igreenfi] Please see https://issues.apache.org/jira/browse/HIVE-17234 for more details. > Use HBase to store Hive metadata > > > Key: HIVE-9452 > URL: https://issues.apache.org/jira/browse/HIVE-9452 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: hbase-metastore-branch >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HBaseMetastoreApproach.pdf > > > This is an umbrella JIRA for a project to explore using HBase to store the > Hive data catalog (ie the metastore). This project has several goals: > # The current metastore implementation is slow when tables have thousands or > more partitions. With Tez and Spark engines we are pushing Hive to a point > where queries only take a few seconds to run. But planning the query can > take as long as running it. Much of this time is spent in metadata > operations. > # Due to scale limitations we have never allowed tasks to communicate > directly with the metastore. However, with the development of LLAP this > requirement will have to be relaxed. If we can relax this there are other > use cases that could benefit from this. > # Eating our own dogfood. Rather than using external systems to store our > metadata there are benefits to using other components in the Hadoop system. > The proposal is to create a new branch and work on the prototype there. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21961) Update jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HIVE-21961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17254621#comment-17254621 ] Yuming Wang commented on HIVE-21961: Any update? > Update jetty version to 9.4.x > - > > Key: HIVE-21961 > URL: https://issues.apache.org/jira/browse/HIVE-21961 > Project: Hive > Issue Type: Task >Reporter: Oleksiy Sayankin >Assignee: László Bodor >Priority: Major > Attachments: HIVE-21961.02.patch, HIVE-21961.03.patch, > HIVE-21961.patch > > > Update jetty version to 9.4.x -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22981) DataFileReader is not closed in AvroGenericRecordReader#extractWriterTimezoneFromMetadata
[ https://issues.apache.org/jira/browse/HIVE-22981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17244663#comment-17244663 ] Yuming Wang commented on HIVE-22981: [~sunchao] Backport this to branch-2.3? > DataFileReader is not closed in > AvroGenericRecordReader#extractWriterTimezoneFromMetadata > - > > Key: HIVE-22981 > URL: https://issues.apache.org/jira/browse/HIVE-22981 > Project: Hive > Issue Type: Bug >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22981.01.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Method looks like : > {code} > private ZoneId extractWriterTimezoneFromMetadata(JobConf job, FileSplit > split, > GenericDatumReader<GenericRecord> gdr) throws IOException { > if (job == null || gdr == null || split == null || split.getPath() == > null) { > return null; > } > try { > DataFileReader<GenericRecord> dataFileReader = > new DataFileReader<GenericRecord>(new FsInput(split.getPath(), > job), gdr); > [...return...] > } > } catch (IOException e) { > // Can't access metadata, carry on. > } > return null; > } > {code} > The DataFileReader is never closed which can cause a memory leak. We need a > try-with-resources here. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24305) avro decimal schema is not properly populating scale/precision if value is enclosed in quote
[ https://issues.apache.org/jira/browse/HIVE-24305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17244661#comment-17244661 ] Yuming Wang commented on HIVE-24305: [~sunchao] Could we backport this to branch-2.3? {code:sql} spark-sql> > > CREATE TABLE test_quoted_scale_precision STORED AS AVRO TBLPROPERTIES ('avro.schema.literal'='{"type":"record","name":"DecimalTest","namespace":"com.example.test","fields":[{"name":"Decimal24_6","type":["null",{"type":"bytes","logicalType":"decimal","precision":24,"scale":"6"}]}]}'); spark-sql> desc test_quoted_scale_precision; decimal24_6 decimal(24,0) spark-sql> {code} > avro decimal schema is not properly populating scale/precision if value is > enclosed in quote > > > Key: HIVE-24305 > URL: https://issues.apache.org/jira/browse/HIVE-24305 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > {code:java} > CREATE TABLE test_quoted_scale_precision STORED AS AVRO TBLPROPERTIES > ('avro.schema.literal'='{"type":"record","name":"DecimalTest","namespace":"com.example.test","fields":[{"name":"Decimal24_6","type":["null",{"type":"bytes","logicalType":"decimal","precision":24,"scale":"6"}]}]}'); > > desc test_quoted_scale_precision; > // current output > decimal24_6 decimal(24,0) > // expected output > decimal24_6 decimal(24,6){code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
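The bug above comes down to reading {{"scale":"6"}} (a JSON string) where an integer was expected, so scale silently falls back to 0. The fix amounts to coercing precision and scale to integers either way; a minimal sketch using the schema literal from the issue ({{decimal_params}} is a hypothetical helper, not the actual AvroSerDe code):

```python
import json

def decimal_params(logical):
    # Coerce precision/scale to int whether the schema wrote 6 or "6";
    # scale defaults to 0 when absent, matching Avro's decimal logical type.
    return int(logical["precision"]), int(logical.get("scale", 0))

literal = json.loads(
    '{"type":"record","name":"DecimalTest","namespace":"com.example.test",'
    '"fields":[{"name":"Decimal24_6","type":["null",{"type":"bytes",'
    '"logicalType":"decimal","precision":24,"scale":"6"}]}]}'
)
logical = literal["fields"][0]["type"][1]  # second union branch holds the decimal
print(decimal_params(logical))  # (24, 6), i.e. decimal(24,6) as expected
```

Without the coercion, a strict integer read of a quoted {{"6"}} fails and the table ends up declared as decimal(24,0).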
[jira] [Commented] (HIVE-24393) Connecting Spark SQL to Hive Metastore (with Remote Metastore Server)
[ https://issues.apache.org/jira/browse/HIVE-24393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233485#comment-17233485 ] Yuming Wang commented on HIVE-24393: Your Spark version is 2.3, and your Hive metastore version is 2.3? > Connecting Spark SQL to Hive Metastore (with Remote Metastore Server) > - > > Key: HIVE-24393 > URL: https://issues.apache.org/jira/browse/HIVE-24393 > Project: Hive > Issue Type: Bug > Components: Configuration >Affects Versions: 2.3.3 >Reporter: Mani >Priority: Major > Fix For: storage-2.7.1 > > Original Estimate: 48h > Remaining Estimate: 48h > > HI There, > I'm working on Integrating Apache big data solution on SAP HANA DB, > > I successfully installed the following versions on Linux OS. > |JAVA|openjdk version "1.8.0_232"| > |Hadoop|apache hadoop-2.7.1| > |HIVE|apache-hive-1.2.1| > |Derby|apache db-derby-10.12.1.1| > |Spark|Apache spark-2.3.2| > > currently, I'm having an issue accessing the Hive metastore using spark-submit. > getting the below error. > spark.catalog.listTables.show > 20/11/17 08:53:58 INFO HiveUtils: Initializing HiveMetastoreConnection > version 1.2.1 using file:/home/spark/spark-2.3.2/lib/*.jar > java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: > org/apache/hadoop/hive/conf/HiveConf when creating Hive client using > classpath: file:/home/spark/spark-2.3.2/lib/*.jar > *Please make sure that jars for your version of hive and hadoop are included > in the paths passed to spark.sql.hive.metastore.jars* > > please help us > Regards, > Mani > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
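The error message quoted above points at {{spark.sql.hive.metastore.jars}}. For reference, pointing Spark at an external Hive 1.2.1 metastore generally needs both of the settings below in {{spark-defaults.conf}}; the jar path is illustrative and must contain the Hive and Hadoop jars for that metastore version, not Spark's own {{lib}} directory:

```properties
# spark-defaults.conf -- paths are illustrative, adjust to your install
spark.sql.hive.metastore.version  1.2.1
spark.sql.hive.metastore.jars     /home/spark/apache-hive-1.2.1/lib/*
```

The reporter's classpath ({{file:/home/spark/spark-2.3.2/lib/*.jar}}) points at Spark's jars, which is why {{HiveConf}} cannot be found.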
[jira] [Commented] (HIVE-21588) Remove HBase dependency from hive-metastore
[ https://issues.apache.org/jira/browse/HIVE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225351#comment-17225351 ] Yuming Wang commented on HIVE-21588: cc [~sunchao] > Remove HBase dependency from hive-metastore > --- > > Key: HIVE-21588 > URL: https://issues.apache.org/jira/browse/HIVE-21588 > Project: Hive > Issue Type: Task > Components: HBase Metastore >Affects Versions: 4.0.0 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21588.01.patch, HIVE-21588.02.patch > > > HIVE-17234 has removed HBase metastore from master. But maven dependency have > not been removed. We should remove it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22097) Incompatible java.util.ArrayList for java 11
[ https://issues.apache.org/jira/browse/HIVE-22097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946501#comment-16946501 ] Yuming Wang commented on HIVE-22097: Could we backport this patch to branch-3.0 and branch-3.1? > Incompatible java.util.ArrayList for java 11 > > > Key: HIVE-22097 > URL: https://issues.apache.org/jira/browse/HIVE-22097 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Affects Versions: 3.0.0, 3.1.1 >Reporter: Yuming Wang >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22097.1.patch, JDK1.8.png, JDK11.png > > > {noformat} > export JAVA_HOME=/usr/lib/jdk-11.0.3 > export PATH=${JAVA_HOME}/bin:${PATH} > hive> create table t(id int); > Time taken: 0.035 seconds > hive> insert into t values(1); > Query ID = root_20190811155400_7c0e0494-eecb-4c54-a9fd-942ab52a0794 > Total jobs = 3 > Launching Job 1 out of 3 > Number of reduce tasks determined at compile time: 1 > In order to change the average load for a reducer (in bytes): > set hive.exec.reducers.bytes.per.reducer= > In order to limit the maximum number of reducers: > set hive.exec.reducers.max= > In order to set a constant number of reducers: > set mapreduce.job.reduces= > java.lang.RuntimeException: java.lang.NoSuchFieldException: parentOffset > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:390) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$1.create(SerializationUtilities.java:235) > at > org.apache.hive.com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.borrowKryo(SerializationUtilities.java:280) > at > org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:595) > at > org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:587) > at > org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:579) > at > 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:357) > at > org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:159) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2317) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1969) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1636) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1396) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1390) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:242) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:189) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:838) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:777) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:696) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236) > Caused by: java.lang.NoSuchFieldException: parentOffset > at java.base/java.lang.Class.getDeclaredField(Class.java:2412) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:384) > ... 
29 more > Job Submission failed with exception > 'java.lang.RuntimeException(java.lang.NoSuchFieldException: parentOffset)' > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.mr.MapRedTask. java.lang.NoSuchFieldException: > parentOffset > {noformat} > The reason is Java removed {{parentOffset}}: > !JDK1.8.png! > !JDK11.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21237) [JDK 11] SessionState can't be initialized due to classloader problem
[ https://issues.apache.org/jira/browse/HIVE-21237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16935716#comment-16935716 ] Yuming Wang commented on HIVE-21237: You need to remove {{metastore_db}} first: {code:sh} rm -rf ${HIVE_HOME}/metastore_db {code} > [JDK 11] SessionState can't be initialized due to classloader problem > - > > Key: HIVE-21237 > URL: https://issues.apache.org/jira/browse/HIVE-21237 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.1 > Environment: JDK11, Hadoop-3, Hive 3.1.1 >Reporter: Uma Maheswara Rao G >Priority: Major > > When I start Hive with JDK11 > {{2019-02-08 22:29:51,500 INFO SessionState: Hive Session ID = > cecd9c34-d61a-44d0-9e52-a0a7d6413e49 > Exception in thread "main" java.lang.ClassCastException: class > jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class > java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and > java.net.URLClassLoader are in module java.base of loader 'bootstrap') > at > org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:410) > at > org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:386) > at > org.apache.hadoop.hive.cli.CliSessionState.<init>(CliSessionState.java:60) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236)}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22229) Backport HIVE-8472 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-22229: -- Assignee: (was: Yuming Wang) > Backport HIVE-8472 to branch-2.3 > > > Key: HIVE-22229 > URL: https://issues.apache.org/jira/browse/HIVE-22229 > Project: Hive > Issue Type: Improvement > Components: Database/Schema >Affects Versions: 2.3.6 >Reporter: Yuming Wang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22229) Backport HIVE-8472 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934902#comment-16934902 ] Yuming Wang commented on HIVE-22229: [DDL – Alter Database|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterDatabase] Please note, Fix Version/s should include 2.3.7 if we backport HIVE-8472 to branch-2.3. > Backport HIVE-8472 to branch-2.3 > > > Key: HIVE-22229 > URL: https://issues.apache.org/jira/browse/HIVE-22229 > Project: Hive > Issue Type: Improvement > Components: Database/Schema >Affects Versions: 2.3.6 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22229) Backport HIVE-8472 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-22229: -- > Backport HIVE-8472 to branch-2.3 > > > Key: HIVE-22229 > URL: https://issues.apache.org/jira/browse/HIVE-22229 > Project: Hive > Issue Type: Improvement > Components: Database/Schema >Affects Versions: 2.3.6 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HIVE-21237) [JDK 11] SessionState can't be initialized due to classloader problem
[ https://issues.apache.org/jira/browse/HIVE-21237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934274#comment-16934274 ] Yuming Wang edited comment on HIVE-21237 at 9/20/19 10:17 AM: -- Please try to initialize metastore: {code:sh} bin/schematool -dbType derby -initSchema --verbose {code} was (Author: q79969786): Please try to initialize metartore: {code:sh} bin/schematool -dbType derby -initSchema --verbose {code} > [JDK 11] SessionState can't be initialized due to classloader problem > - > > Key: HIVE-21237 > URL: https://issues.apache.org/jira/browse/HIVE-21237 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.1 > Environment: JDK11, Hadoop-3, Hive 3.1.1 >Reporter: Uma Maheswara Rao G >Priority: Major > > When I start Hive with JDK11 > {{2019-02-08 22:29:51,500 INFO SessionState: Hive Session ID = > cecd9c34-d61a-44d0-9e52-a0a7d6413e49 > Exception in thread "main" java.lang.ClassCastException: class > jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class > java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and > java.net.URLClassLoader are in module java.base of loader 'bootstrap') > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:410) > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:386) > at > org.apache.hadoop.hive.cli.CliSessionState.(CliSessionState.java:60) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at 
org.apache.hadoop.util.RunJar.main(RunJar.java:236)}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21237) [JDK 11] SessionState can't be initialized due to classloader problem
[ https://issues.apache.org/jira/browse/HIVE-21237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934274#comment-16934274 ] Yuming Wang commented on HIVE-21237: Please try to initialize metastore: {code:sh} bin/schematool -dbType derby -initSchema --verbose {code} > [JDK 11] SessionState can't be initialized due to classloader problem > - > > Key: HIVE-21237 > URL: https://issues.apache.org/jira/browse/HIVE-21237 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.1 > Environment: JDK11, Hadoop-3, Hive 3.1.1 >Reporter: Uma Maheswara Rao G >Priority: Major > > When I start Hive with JDK11 > {{2019-02-08 22:29:51,500 INFO SessionState: Hive Session ID = > cecd9c34-d61a-44d0-9e52-a0a7d6413e49 > Exception in thread "main" java.lang.ClassCastException: class > jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class > java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and > java.net.URLClassLoader are in module java.base of loader 'bootstrap') > at > org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:410) > at > org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:386) > at > org.apache.hadoop.hive.cli.CliSessionState.<init>(CliSessionState.java:60) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236)}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21237) [JDK 11] SessionState can't be initialized due to classloader problem
[ https://issues.apache.org/jira/browse/HIVE-21237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934213#comment-16934213 ] Yuming Wang commented on HIVE-21237: Please check the Hive log: {{/tmp/${USER}/hive.log}} > [JDK 11] SessionState can't be initialized due to classloader problem > - > > Key: HIVE-21237 > URL: https://issues.apache.org/jira/browse/HIVE-21237 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.1 > Environment: JDK11, Hadoop-3, Hive 3.1.1 >Reporter: Uma Maheswara Rao G >Priority: Major > > When I start Hive with JDK11 > {{2019-02-08 22:29:51,500 INFO SessionState: Hive Session ID = > cecd9c34-d61a-44d0-9e52-a0a7d6413e49 > Exception in thread "main" java.lang.ClassCastException: class > jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class > java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and > java.net.URLClassLoader are in module java.base of loader 'bootstrap') > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:410) > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:386) > at > org.apache.hadoop.hive.cli.CliSessionState.(CliSessionState.java:60) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236)}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21237) [JDK 11] SessionState can't be initialized due to classloader problem
[ https://issues.apache.org/jira/browse/HIVE-21237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933939#comment-16933939 ] Yuming Wang commented on HIVE-21237: Yes. Hive 3.x has another issue, HIVE-22097, that prevents it from running on JDK 11. > [JDK 11] SessionState can't be initialized due to classloader problem > - > > Key: HIVE-21237 > URL: https://issues.apache.org/jira/browse/HIVE-21237 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.1 > Environment: JDK11, Hadoop-3, Hive 3.1.1 >Reporter: Uma Maheswara Rao G >Priority: Major > > When I start Hive with JDK11 > {{2019-02-08 22:29:51,500 INFO SessionState: Hive Session ID = > cecd9c34-d61a-44d0-9e52-a0a7d6413e49 > Exception in thread "main" java.lang.ClassCastException: class > jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class > java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and > java.net.URLClassLoader are in module java.base of loader 'bootstrap') > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:410) > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:386) > at > org.apache.hadoop.hive.cli.CliSessionState.(CliSessionState.java:60) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236)}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21237) [JDK 11] SessionState can't be initialized due to classloader problem
[ https://issues.apache.org/jira/browse/HIVE-21237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933602#comment-16933602 ] Yuming Wang commented on HIVE-21237: [~dawood.m] It seems related to HIVE-21584. Could you try Hive 2.3.6? > [JDK 11] SessionState can't be initialized due to classloader problem > - > > Key: HIVE-21237 > URL: https://issues.apache.org/jira/browse/HIVE-21237 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.1 > Environment: JDK11, Hadoop-3, Hive 3.1.1 >Reporter: Uma Maheswara Rao G >Priority: Major > > When I start Hive with JDK11 > {{2019-02-08 22:29:51,500 INFO SessionState: Hive Session ID = > cecd9c34-d61a-44d0-9e52-a0a7d6413e49 > Exception in thread "main" java.lang.ClassCastException: class > jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class > java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and > java.net.URLClassLoader are in module java.base of loader 'bootstrap') > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:410) > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:386) > at > org.apache.hadoop.hive.cli.CliSessionState.(CliSessionState.java:60) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236)}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21237) [JDK 11] SessionState can't be initialized due to classloader problem
[ https://issues.apache.org/jira/browse/HIVE-21237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929902#comment-16929902 ] Yuming Wang commented on HIVE-21237: The issue should be fixed by HIVE-21584. > [JDK 11] SessionState can't be initialized due to classloader problem > - > > Key: HIVE-21237 > URL: https://issues.apache.org/jira/browse/HIVE-21237 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.1 > Environment: JDK11, Hadoop-3, Hive 3.1.1 >Reporter: Uma Maheswara Rao G >Priority: Major > > When I start Hive with JDK11 > {{2019-02-08 22:29:51,500 INFO SessionState: Hive Session ID = > cecd9c34-d61a-44d0-9e52-a0a7d6413e49 > Exception in thread "main" java.lang.ClassCastException: class > jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class > java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and > java.net.URLClassLoader are in module java.base of loader 'bootstrap') > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:410) > at > org.apache.hadoop.hive.ql.session.SessionState.(SessionState.java:386) > at > org.apache.hadoop.hive.cli.CliSessionState.(CliSessionState.java:60) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236)}} -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (HIVE-22139) Will not pad Decimal numbers with trailing zeros if select from value
[ https://issues.apache.org/jira/browse/HIVE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22139: --- Description: How to reproduce: {code:sql} hive> SELECT CAST(1 AS decimal(38, 18)); OK 1 Time taken: 0.226 seconds, Fetched: 1 row(s) hive> CREATE TABLE HIVE_22139 AS SELECT CAST(1 AS decimal(38, 18)) as c; OK Time taken: 2.278 seconds hive> SELECT * FROM HIVE_22139; OK 1.00 Time taken: 0.07 seconds, Fetched: 1 row(s) {code} was: How to reproduce: {code:sql} // code placeholder {code} > Will not pad Decimal numbers with trailing zeros if select from value > - > > Key: HIVE-22139 > URL: https://issues.apache.org/jira/browse/HIVE-22139 > Project: Hive > Issue Type: Bug > Components: SQL >Affects Versions: 3.1.1 >Reporter: Yuming Wang >Priority: Major > > How to reproduce: > {code:sql} > hive> SELECT CAST(1 AS decimal(38, 18)); > OK > 1 > Time taken: 0.226 seconds, Fetched: 1 row(s) > hive> CREATE TABLE HIVE_22139 AS SELECT CAST(1 AS decimal(38, 18)) as c; > OK > Time taken: 2.278 seconds > hive> SELECT * FROM HIVE_22139; > OK > 1.00 > Time taken: 0.07 seconds, Fetched: 1 row(s) > {code} -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (HIVE-22097) Incompatible java.util.ArrayList for java 11
[ https://issues.apache.org/jira/browse/HIVE-22097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22097: --- Attachment: JDK11.png > Incompatible java.util.ArrayList for java 11 > > > Key: HIVE-22097 > URL: https://issues.apache.org/jira/browse/HIVE-22097 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Affects Versions: 3.0.0, 3.1.1 >Reporter: Yuming Wang >Priority: Major > Attachments: JDK1.8.png, JDK11.png > > > {noformat} > export JAVA_HOME=/usr/lib/jdk-11.0.3 > export PATH=${JAVA_HOME}/bin:${PATH} > hive> create table t(id int); > Time taken: 0.035 seconds > hive> insert into t values(1); > Query ID = root_20190811155400_7c0e0494-eecb-4c54-a9fd-942ab52a0794 > Total jobs = 3 > Launching Job 1 out of 3 > Number of reduce tasks determined at compile time: 1 > In order to change the average load for a reducer (in bytes): > set hive.exec.reducers.bytes.per.reducer= > In order to limit the maximum number of reducers: > set hive.exec.reducers.max= > In order to set a constant number of reducers: > set mapreduce.job.reduces= > java.lang.RuntimeException: java.lang.NoSuchFieldException: parentOffset > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:390) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$1.create(SerializationUtilities.java:235) > at > org.apache.hive.com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.borrowKryo(SerializationUtilities.java:280) > at > org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:595) > at > org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:587) > at > org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:579) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:357) > at > 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:159) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2317) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1969) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1636) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1396) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1390) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:242) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:189) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:838) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:777) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:696) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236) > Caused by: java.lang.NoSuchFieldException: parentOffset > at java.base/java.lang.Class.getDeclaredField(Class.java:2412) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:384) > ... 
29 more > Job Submission failed with exception > 'java.lang.RuntimeException(java.lang.NoSuchFieldException: parentOffset)' > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.mr.MapRedTask. java.lang.NoSuchFieldException: > parentOffset > {noformat} > The reason is Java removed {{parentOffset}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22097) Incompatible java.util.ArrayList for java 11
[ https://issues.apache.org/jira/browse/HIVE-22097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22097: --- Description: {noformat} export JAVA_HOME=/usr/lib/jdk-11.0.3 export PATH=${JAVA_HOME}/bin:${PATH} hive> create table t(id int); Time taken: 0.035 seconds hive> insert into t values(1); Query ID = root_20190811155400_7c0e0494-eecb-4c54-a9fd-942ab52a0794 Total jobs = 3 Launching Job 1 out of 3 Number of reduce tasks determined at compile time: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer= In order to limit the maximum number of reducers: set hive.exec.reducers.max= In order to set a constant number of reducers: set mapreduce.job.reduces= java.lang.RuntimeException: java.lang.NoSuchFieldException: parentOffset at org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:390) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$1.create(SerializationUtilities.java:235) at org.apache.hive.com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.borrowKryo(SerializationUtilities.java:280) at org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:595) at org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:587) at org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:579) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:357) at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:159) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2317) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1969) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1636) at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:1396) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1390) at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:242) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:189) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:838) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:777) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:696) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.hadoop.util.RunJar.run(RunJar.java:323) at org.apache.hadoop.util.RunJar.main(RunJar.java:236) Caused by: java.lang.NoSuchFieldException: parentOffset at java.base/java.lang.Class.getDeclaredField(Class.java:2412) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:384) ... 29 more Job Submission failed with exception 'java.lang.RuntimeException(java.lang.NoSuchFieldException: parentOffset)' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. java.lang.NoSuchFieldException: parentOffset {noformat} The reason is Java removed {{parentOffset}}: !JDK1.8.png! !JDK11.png! 
was: {noformat} export JAVA_HOME=/usr/lib/jdk-11.0.3 export PATH=${JAVA_HOME}/bin:${PATH} hive> create table t(id int); Time taken: 0.035 seconds hive> insert into t values(1); Query ID = root_20190811155400_7c0e0494-eecb-4c54-a9fd-942ab52a0794 Total jobs = 3 Launching Job 1 out of 3 Number of reduce tasks determined at compile time: 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer= In order to limit the maximum number of reducers: set hive.exec.reducers.max= In order to set a constant number of reducers: set mapreduce.job.reduces= java.lang.RuntimeException: java.lang.NoSuchFieldException: parentOffset at org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:390) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$1.create(SerializationUtilities.java:235) at
[jira] [Updated] (HIVE-22097) Incompatible java.util.ArrayList for java 11
[ https://issues.apache.org/jira/browse/HIVE-22097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22097: --- Attachment: JDK1.8.png > Incompatible java.util.ArrayList for java 11 > > > Key: HIVE-22097 > URL: https://issues.apache.org/jira/browse/HIVE-22097 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Affects Versions: 3.0.0, 3.1.1 >Reporter: Yuming Wang >Priority: Major > Attachments: JDK1.8.png > > > {noformat} > export JAVA_HOME=/usr/lib/jdk-11.0.3 > export PATH=${JAVA_HOME}/bin:${PATH} > hive> create table t(id int); > Time taken: 0.035 seconds > hive> insert into t values(1); > Query ID = root_20190811155400_7c0e0494-eecb-4c54-a9fd-942ab52a0794 > Total jobs = 3 > Launching Job 1 out of 3 > Number of reduce tasks determined at compile time: 1 > In order to change the average load for a reducer (in bytes): > set hive.exec.reducers.bytes.per.reducer= > In order to limit the maximum number of reducers: > set hive.exec.reducers.max= > In order to set a constant number of reducers: > set mapreduce.job.reduces= > java.lang.RuntimeException: java.lang.NoSuchFieldException: parentOffset > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:390) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$1.create(SerializationUtilities.java:235) > at > org.apache.hive.com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.borrowKryo(SerializationUtilities.java:280) > at > org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:595) > at > org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:587) > at > org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:579) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:357) > at > 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:159) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2317) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1969) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1636) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1396) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1390) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:242) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:189) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:838) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:777) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:696) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236) > Caused by: java.lang.NoSuchFieldException: parentOffset > at java.base/java.lang.Class.getDeclaredField(Class.java:2412) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:384) > ... 
29 more > Job Submission failed with exception > 'java.lang.RuntimeException(java.lang.NoSuchFieldException: parentOffset)' > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.mr.MapRedTask. java.lang.NoSuchFieldException: > parentOffset > {noformat} > The reason is Java removed {{parentOffset}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-17909) JDK9: Tez may not use URLClassloader
[ https://issues.apache.org/jira/browse/HIVE-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-17909: --- Affects Version/s: 3.0.0 3.1.1 > JDK9: Tez may not use URLClassloader > > > Key: HIVE-17909 > URL: https://issues.apache.org/jira/browse/HIVE-17909 > Project: Hive > Issue Type: Sub-task > Components: Build Infrastructure >Affects Versions: 3.0.0, 3.1.1 >Reporter: Zoltan Haindrich >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-17909) JDK9: Tez may not use URLClassloader
[ https://issues.apache.org/jira/browse/HIVE-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-17909: --- Affects Version/s: (was: 3.1.1) (was: 3.0.0) > JDK9: Tez may not use URLClassloader > > > Key: HIVE-17909 > URL: https://issues.apache.org/jira/browse/HIVE-17909 > Project: Hive > Issue Type: Sub-task > Components: Build Infrastructure >Reporter: Zoltan Haindrich >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22097) Incompatible java.util.ArrayList for java 11
[ https://issues.apache.org/jira/browse/HIVE-22097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22097: --- Affects Version/s: 3.0.0 3.1.1 > Incompatible java.util.ArrayList for java 11 > > > Key: HIVE-22097 > URL: https://issues.apache.org/jira/browse/HIVE-22097 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Affects Versions: 3.0.0, 3.1.1 >Reporter: Yuming Wang >Priority: Major > > {noformat} > export JAVA_HOME=/usr/lib/jdk-11.0.3 > export PATH=${JAVA_HOME}/bin:${PATH} > hive> create table t(id int); > Time taken: 0.035 seconds > hive> insert into t values(1); > Query ID = root_20190811155400_7c0e0494-eecb-4c54-a9fd-942ab52a0794 > Total jobs = 3 > Launching Job 1 out of 3 > Number of reduce tasks determined at compile time: 1 > In order to change the average load for a reducer (in bytes): > set hive.exec.reducers.bytes.per.reducer= > In order to limit the maximum number of reducers: > set hive.exec.reducers.max= > In order to set a constant number of reducers: > set mapreduce.job.reduces= > java.lang.RuntimeException: java.lang.NoSuchFieldException: parentOffset > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:390) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$1.create(SerializationUtilities.java:235) > at > org.apache.hive.com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.borrowKryo(SerializationUtilities.java:280) > at > org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:595) > at > org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:587) > at > org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:579) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:357) > at > org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:159) > at 
org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2317) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1969) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1636) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1396) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1390) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:242) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:189) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:838) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:777) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:696) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.apache.hadoop.util.RunJar.run(RunJar.java:323) > at org.apache.hadoop.util.RunJar.main(RunJar.java:236) > Caused by: java.lang.NoSuchFieldException: parentOffset > at java.base/java.lang.Class.getDeclaredField(Class.java:2412) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:384) > ... 
29 more > Job Submission failed with exception > 'java.lang.RuntimeException(java.lang.NoSuchFieldException: parentOffset)' > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.mr.MapRedTask. java.lang.NoSuchFieldException: > parentOffset > {noformat} > The reason is Java removed {{parentOffset}}. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
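The {{NoSuchFieldException}} described above can be reproduced outside Hive with a small reflection probe (a standalone sketch, not Hive source; the class name {{SubListFieldProbe}} is made up for illustration). Hive's {{ArrayListSubListSerializer}} reflects on the private {{parentOffset}} field of {{java.util.ArrayList$SubList}}; the field exists on JDK 8 but was dropped when the class was restructured in JDK 9, so the same lookup fails there:

```java
import java.lang.reflect.Field;

// Standalone probe (not Hive code): check whether java.util.ArrayList$SubList
// still declares the private "parentOffset" field that Hive's Kryo
// ArrayListSubListSerializer reflects on. JDK 8 has the field; JDK 9+ does
// not, which is what surfaces as the NoSuchFieldException in the stack trace.
public class SubListFieldProbe {
    static String probe() {
        try {
            Class<?> subList = Class.forName("java.util.ArrayList$SubList");
            Field f = subList.getDeclaredField("parentOffset");
            return "parentOffset present (" + f.getType() + ") - JDK 8 behavior";
        } catch (ClassNotFoundException | NoSuchFieldException e) {
            return "parentOffset missing - JDK 9+ behavior: " + e;
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

On JDK 8 the probe reports the field; on JDK 9 and later it reports the {{NoSuchFieldException}}, matching the failure in the description.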
[jira] [Commented] (HIVE-22096) Backport HIVE-21584 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904627#comment-16904627 ] Yuming Wang commented on HIVE-22096: cc [~alangates] [~hyukjin.kwon] Hive has been built with Java 8 since 3.0 (HIVE-16281). This patch made some changes to stay compatible with Java 7, for example: {code:java} private UDFClassLoader createUDFClassLoader() { return new UDFClassLoader(newPaths.stream() .map(Utilities::urlFromPathString) .filter(Objects::nonNull) .toArray(URL[]::new), parentLoader); } {code} to {code:java} private UDFClassLoader createUDFClassLoader() { List<URL> urls = new ArrayList<>(); for (String path : newPaths) { URL url = Utilities.urlFromPathString(path); if (url != null) { urls.add(url); } } return new UDFClassLoader(urls.toArray(new URL[urls.size()]), parentLoader); } {code} > Backport HIVE-21584 to branch-2.3 > - > > Key: HIVE-22096 > URL: https://issues.apache.org/jira/browse/HIVE-22096 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-22096.branch-2.3.patch > > > Backport HIVE-21584 to make Spark support JDK 11. > https://www.mail-archive.com/dev@hive.apache.org/msg137001.html -- This message was sent by Atlassian JIRA (v7.6.14#76016)
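The Java-7-compatible pattern from the comment above can be sketched as a self-contained program. Note the names here ({{UrlArrayDemo}}, the simplified {{urlFromPathString}}) are stand-ins for illustration, not Hive's actual {{Utilities.urlFromPathString}}: the Java 8 stream pipeline becomes a plain loop that skips {{null}} entries and copies into a {{URL[]}}.

```java
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch (not Hive source) of the Java-7-style rewrite in the patch.
public class UrlArrayDemo {
    // Stand-in for Utilities.urlFromPathString: returns null for
    // unparseable inputs, mirroring what filter(Objects::nonNull) removes.
    static URL urlFromPathString(String path) {
        try {
            return new URL(path);
        } catch (MalformedURLException e) {
            return null; // skipped by the loop below
        }
    }

    // The loop replaces stream().map(...).filter(...).toArray(URL[]::new).
    static URL[] toUrlArray(List<String> newPaths) {
        List<URL> urls = new ArrayList<>();
        for (String path : newPaths) {
            URL url = urlFromPathString(path);
            if (url != null) {
                urls.add(url);
            }
        }
        return urls.toArray(new URL[urls.size()]);
    }

    public static void main(String[] args) {
        URL[] urls = toUrlArray(
            Arrays.asList("file:/tmp/a.jar", "not a url", "file:/tmp/b.jar"));
        System.out.println(urls.length); // 2 of the 3 inputs parse
    }
}
```

Both forms build the same {{URL[]}}; the loop version simply avoids {{java.util.stream}} and method references, which is what keeps branch-2.3 buildable with Java 7.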
[jira] [Updated] (HIVE-22096) Backport HIVE-21584 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22096: --- Description: Backport HIVE-21584 to make Spark support JDK 11. https://www.mail-archive.com/dev@hive.apache.org/msg137001.html > Backport HIVE-21584 to branch-2.3 > - > > Key: HIVE-22096 > URL: https://issues.apache.org/jira/browse/HIVE-22096 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-22096.branch-2.3.patch > > > Backport HIVE-21584 to make Spark support JDK 11. > https://www.mail-archive.com/dev@hive.apache.org/msg137001.html -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22096) Backport HIVE-21584 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22096: --- Attachment: HIVE-22096.branch-2.3.patch Status: Patch Available (was: Open) > Backport HIVE-21584 to branch-2.3 > - > > Key: HIVE-22096 > URL: https://issues.apache.org/jira/browse/HIVE-22096 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-22096.branch-2.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22096) Backport HIVE-21584 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22096: --- Status: Open (was: Patch Available) > Backport HIVE-21584 to branch-2.3 > - > > Key: HIVE-22096 > URL: https://issues.apache.org/jira/browse/HIVE-22096 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22096) Backport HIVE-21584 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22096: --- Attachment: HIVE-21584.branch-2.3.patch Status: Patch Available (was: Open) > Backport HIVE-21584 to branch-2.3 > - > > Key: HIVE-22096 > URL: https://issues.apache.org/jira/browse/HIVE-22096 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22096) Backport HIVE-21584 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-22096: --- Attachment: (was: HIVE-21584.branch-2.3.patch) > Backport HIVE-21584 to branch-2.3 > - > > Key: HIVE-22096 > URL: https://issues.apache.org/jira/browse/HIVE-22096 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (HIVE-22096) Backport HIVE-21584 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-22096: -- > Backport HIVE-21584 to branch-2.3 > - > > Key: HIVE-22096 > URL: https://issues.apache.org/jira/browse/HIVE-22096 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major >
[jira] [Commented] (HIVE-13004) Remove encryption shims
[ https://issues.apache.org/jira/browse/HIVE-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888470#comment-16888470 ] Yuming Wang commented on HIVE-13004: Any update? > Remove encryption shims > --- > > Key: HIVE-13004 > URL: https://issues.apache.org/jira/browse/HIVE-13004 > Project: Hive > Issue Type: Task > Components: Encryption >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan >Priority: Major > Attachments: HIVE-13004.1.patch, HIVE-13004.2.patch, HIVE-13004.patch > > > It has served its purpose. Now that we don't support hadoop-1, it's no longer > needed.
[jira] [Resolved] (HIVE-21374) Dependency conflicts on org.apache.httpcomponents:httpcore:jar, leading to invoking unexpected methods
[ https://issues.apache.org/jira/browse/HIVE-21374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang resolved HIVE-21374. Resolution: Duplicate Fix Version/s: (was: 2.3.4) > Dependency conflicts on org.apache.httpcomponents:httpcore:jar, leading to > invoking unexpected methods > -- > > Key: HIVE-21374 > URL: https://issues.apache.org/jira/browse/HIVE-21374 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.3.4 >Reporter: Hello CoCooo >Priority: Major > Attachments: validate4.4.1.png, validate4.4.png > > > Hi! In *hive-rel-release-2.3.4\service-rpc,* there are multiple versions of > *org.apache.httpcomponents:httpcore:jar*. As shown in the following > dependency tree, according to Maven's dependency management strategy, only > *org.apache.httpcomponents:httpcore:jar:4.4* can be loaded, and > *org.apache.httpcomponents:httpcore:jar:4.4.1* will be shadowed. > Your project references the method > {color:#d04437}** > {color}via the following invocation path, which is included in the shadowed > version *org.apache.httpcomponents:httpcore:jar:4.4.1*. However, this method > is missing in the actual loaded version > *org.apache.httpcomponents:httpcore:jar:4.4*. Surprisingly, it will not cause > NoSuchMethodError at runtime. 
> {color:#59afe1}*Invocation path:*{color} > {code:java} > // code placeholder > requestInvoke(org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer)> > > C:\Users\Flipped\.m2\repository\org\apache\thrift\libthrift\0.9.3\libthrift-0.9.3.jar > > C:\Users\Flipped\.m2\repository\junit\junit\4.11\junit-4.11.jar > get(long,java.util.concurrent.TimeUnit)> > C:\Users\Flipped\.m2\repository\org\apache\httpcomponents\httpcore\4.4.1\httpcore-4.4.1.jar > getPoolEntry(long,java.util.concurrent.TimeUnit)> > C:\Users\Flipped\.m2\repository\org\apache\httpcomponents\httpcore\4.4.1\httpcore-4.4.1.jar > getPoolEntry(long,java.util.concurrent.TimeUnit)> > C:\Users\Flipped\.m2\repository\org\apache\httpcomponents\httpcore\4.4.1\httpcore-4.4.1.jar > access$000(org.apache.http.pool.AbstractConnPool,java.lang.Object,java.lang.Object,long,java.util.concurrent.TimeUnit,org.apache.http.pool.PoolEntryFuture)> > > C:\Users\Flipped\.m2\repository\org\apache\httpcomponents\httpcore\4.4.1\httpcore-4.4.1.jar > getPoolEntryBlocking(java.lang.Object,java.lang.Object,long,java.util.concurrent.TimeUnit,org.apache.http.pool.PoolEntryFuture)> > > C:\Users\Flipped\.m2\repository\org\apache\httpcomponents\httpcore\4.4.1\httpcore-4.4.1.jar > validate(org.apache.http.pool.PoolEntry)>{code} > By further analyzing, I found that the caller > *org.apache.thrift.server.TThreadedSelectorServer.requestInvoke(AbstractNonblockingServer$FrameBuffer)* > would invoke the method > *{color:#d04437}AbstractConnPool.validate(PoolEntry){color}* defined in the > *superclass of org.apache.http.impl.conn.CPool (**CPool* *extends > AbstractConnPool)* with the same signature of the expected callee, due to > dynamic binding mechanism. 
> Although the actual invoked method belonging to > *{color:#d04437}AbstractConnPool{color}* has the same method name, same > parameter types and return type as the expected method defined in its > subclass {color:#d04437}*CPool*{color}, it has different control flows > and different behaviors, which may be buggy. > > +_*{color:#f691b2}Solution:{color}*_+ > Use the newer version *org.apache.httpcomponents:httpcore:jar:4.4.1* in the > parent pom file to keep the versions consistent. > > *Dependency tree* > [INFO] org.apache.hive:hive-service-rpc:jar:2.3.4 > [INFO] +- commons-codec:commons-codec:jar:1.4:compile > [INFO] +- commons-cli:commons-cli:jar:1.2:compile > [INFO] +- tomcat:jasper-compiler:jar:5.5.23:compile > [INFO] | +- javax.servlet:jsp-api:jar:2.0:compile > [INFO] | | - (javax.servlet:servlet-api:jar:2.4:compile - omitted for > duplicate) > [INFO] | - ant:ant:jar:1.6.5:compile > [INFO] +- tomcat:jasper-runtime:jar:5.5.23:compile > [INFO] | +- javax.servlet:servlet-api:jar:2.4:compile > [INFO] | - commons-el:commons-el:jar:1.0:compile > [INFO] | - commons-logging:commons-logging:jar:1.0.3:compile > [INFO] +- org.apache.thrift:libfb303:jar:0.9.3:compile > [INFO] | - (org.apache.thrift:libthrift:jar:0.9.3:compile - omitted for > duplicate) > [INFO] +- org.apache.thrift:libthrift:jar:0.9.3:compile > [INFO] | +- (org.slf4j:slf4j-api:jar:1.7.10:compile - version managed from > 1.7.12; omitted for duplicate) > [INFO] | +- org.apache.httpcomponents:httpclient:jar:4.4:compile (version > managed from 4.4.1) > [INFO] | | +- *(org.apache.httpcomponents:httpcore:jar:4.4:compile - > version managed from 4.4.1; omitted for duplicate)* >
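The dynamic-binding effect described in the HIVE-21374 report above can be sketched in a few lines of Java. This is a minimal illustration with hypothetical class names (AbstractPool and Pool stand in for httpcore's AbstractConnPool and CPool); it only shows why no NoSuchMethodError occurs when the loaded jar's subclass lacks the expected override: the call silently resolves to the inherited superclass implementation instead.

```java
// Superclass, analogous to AbstractConnPool in the loaded httpcore 4.4.
class AbstractPool {
    String validate(String entry) {
        return "base-validate:" + entry;
    }
}

// Subclass, analogous to CPool in the loaded 4.4 jar, where the override
// present in 4.4.1 is missing; validate(...) is simply inherited.
class Pool extends AbstractPool {
    // no validate(...) override here
}

public class DynamicBindingDemo {
    public static void main(String[] args) {
        AbstractPool pool = new Pool();
        // Resolves to AbstractPool.validate at runtime: no linkage error,
        // but potentially different behavior than the 4.4.1 override.
        System.out.println(pool.validate("conn")); // prints "base-validate:conn"
    }
}
```

Because overload/override resolution happens against whatever classes are actually on the classpath, the call compiles and runs, which is why the mismatch surfaces as changed behavior rather than an error.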
[jira] [Commented] (HIVE-18526) Backport HIVE-16886 to Hive 2
[ https://issues.apache.org/jira/browse/HIVE-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831667#comment-16831667 ] Yuming Wang commented on HIVE-18526: Do we need to backport this to branch-2.3? > Backport HIVE-16886 to Hive 2 > - > > Key: HIVE-18526 > URL: https://issues.apache.org/jira/browse/HIVE-18526 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 2.3.3 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Fix For: 2.4.0 > > Attachments: HIVE-18526.01-branch-2.patch, > HIVE-18526.02-branch-2.patch > > > The fix for HIVE-16886 isn't in Hive 2.
[jira] [Commented] (HIVE-21680) Backport HIVE-17644 to branch-2 and branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-21680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831663#comment-16831663 ] Yuming Wang commented on HIVE-21680: cc [~alangates] [~hyukjin.kwon] > Backport HIVE-17644 to branch-2 and branch-2.3 > -- > > Key: HIVE-21680 > URL: https://issues.apache.org/jira/browse/HIVE-21680 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21680.branch-2.3.patch, HIVE-21680.branch-2.patch > > > Backport HIVE-17644 to fix the warning in {{get statistics when not analyzed > in Hive or Spark}}: > {code:scala} > test("get statistics when not analyzed in Hive or Spark") { > val tabName = "tab1" > withTable(tabName) { > createNonPartitionedTable(tabName, analyzedByHive = false, > analyzedBySpark = false) > checkTableStats(tabName, hasSizeInBytes = true, expectedRowCounts = > None) > // ALTER TABLE SET TBLPROPERTIES invalidates some contents of Hive > specific statistics > // This is triggered by the Hive alterTable API > val describeResult = hiveClient.runSqlHive(s"DESCRIBE FORMATTED > $tabName") > val rawDataSize = extractStatsPropValues(describeResult, "rawDataSize") > val numRows = extractStatsPropValues(describeResult, "numRows") > val totalSize = extractStatsPropValues(describeResult, "totalSize") > assert(rawDataSize.isEmpty, "rawDataSize should not be shown without > table analysis") > assert(numRows.isEmpty, "numRows should not be shown without table > analysis") > assert(totalSize.isDefined && totalSize.get > 0, "totalSize is lost") > } > } > // > https://github.com/apache/spark/blob/43dcb91a4cb25aa7e1cc5967194f098029a0361e/sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala#L789-L806 > {code} > {noformat} > 06:23:46.103 WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql: Failed > to execute [SELECT "DBS"."NAME", "TBLS"."TBL_NAME", > "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", > 
"KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" > FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = > "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = > "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = > "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = > "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE > "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND > "TBLS"."TBL_NAME" = ?] with parameters [default, tab1] > javax.jdo.JDODataStoreException: Error executing SQL query "SELECT > "DBS"."NAME", "TBLS"."TBL_NAME", > "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", > "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" > FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = > "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = > "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = > "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = > "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE > "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND > "TBLS"."TBL_NAME" = ?". 
> at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) > at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267) > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1750) > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPrimaryKeys(MetaStoreDirectSql.java:1939) > at > org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8213) > at > org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8209) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2719) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeysInternal(ObjectStore.java:8221) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeys(ObjectStore.java:8199) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) > at com.sun.proxy.$Proxy24.getPrimaryKeys(Unknown Source) > at >
[jira] [Updated] (HIVE-21680) Backport HIVE-17644 to branch-2 and branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-21680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21680: --- Attachment: HIVE-21680.branch-2.3.patch > Backport HIVE-17644 to branch-2 and branch-2.3 > -- > > Key: HIVE-21680 > URL: https://issues.apache.org/jira/browse/HIVE-21680 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21680.branch-2.3.patch, HIVE-21680.branch-2.patch > > > Backport HIVE-17644 to fix the warning in {{get statistics when not analyzed > in Hive or Spark}}: > {code:scala} > test("get statistics when not analyzed in Hive or Spark") { > val tabName = "tab1" > withTable(tabName) { > createNonPartitionedTable(tabName, analyzedByHive = false, > analyzedBySpark = false) > checkTableStats(tabName, hasSizeInBytes = true, expectedRowCounts = > None) > // ALTER TABLE SET TBLPROPERTIES invalidates some contents of Hive > specific statistics > // This is triggered by the Hive alterTable API > val describeResult = hiveClient.runSqlHive(s"DESCRIBE FORMATTED > $tabName") > val rawDataSize = extractStatsPropValues(describeResult, "rawDataSize") > val numRows = extractStatsPropValues(describeResult, "numRows") > val totalSize = extractStatsPropValues(describeResult, "totalSize") > assert(rawDataSize.isEmpty, "rawDataSize should not be shown without > table analysis") > assert(numRows.isEmpty, "numRows should not be shown without table > analysis") > assert(totalSize.isDefined && totalSize.get > 0, "totalSize is lost") > } > } > // > https://github.com/apache/spark/blob/43dcb91a4cb25aa7e1cc5967194f098029a0361e/sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala#L789-L806 > {code} > {noformat} > 06:23:46.103 WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql: Failed > to execute [SELECT "DBS"."NAME", "TBLS"."TBL_NAME", > "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", > "KEY_CONSTRAINTS"."CONSTRAINT_NAME", 
"KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" > FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = > "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = > "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = > "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = > "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE > "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND > "TBLS"."TBL_NAME" = ?] with parameters [default, tab1] > javax.jdo.JDODataStoreException: Error executing SQL query "SELECT > "DBS"."NAME", "TBLS"."TBL_NAME", > "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", > "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" > FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = > "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = > "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = > "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = > "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE > "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND > "TBLS"."TBL_NAME" = ?". 
> at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) > at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267) > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1750) > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPrimaryKeys(MetaStoreDirectSql.java:1939) > at > org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8213) > at > org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8209) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2719) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeysInternal(ObjectStore.java:8221) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeys(ObjectStore.java:8199) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) > at com.sun.proxy.$Proxy24.getPrimaryKeys(Unknown Source) > at >
[jira] [Updated] (HIVE-21680) Backport HIVE-17644 to branch-2 and branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-21680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21680: --- Description: Backport HIVE-17644 to fix the warning in {{get statistics when not analyzed in Hive or Spark}}: {code:scala} test("get statistics when not analyzed in Hive or Spark") { val tabName = "tab1" withTable(tabName) { createNonPartitionedTable(tabName, analyzedByHive = false, analyzedBySpark = false) checkTableStats(tabName, hasSizeInBytes = true, expectedRowCounts = None) // ALTER TABLE SET TBLPROPERTIES invalidates some contents of Hive specific statistics // This is triggered by the Hive alterTable API val describeResult = hiveClient.runSqlHive(s"DESCRIBE FORMATTED $tabName") val rawDataSize = extractStatsPropValues(describeResult, "rawDataSize") val numRows = extractStatsPropValues(describeResult, "numRows") val totalSize = extractStatsPropValues(describeResult, "totalSize") assert(rawDataSize.isEmpty, "rawDataSize should not be shown without table analysis") assert(numRows.isEmpty, "numRows should not be shown without table analysis") assert(totalSize.isDefined && totalSize.get > 0, "totalSize is lost") } } // https://github.com/apache/spark/blob/43dcb91a4cb25aa7e1cc5967194f098029a0361e/sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala#L789-L806 {code} {noformat} 06:23:46.103 WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql: Failed to execute [SELECT "DBS"."NAME", "TBLS"."TBL_NAME", "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND 
"DBS"."NAME" = ? AND "TBLS"."TBL_NAME" = ?] with parameters [default, tab1] javax.jdo.JDODataStoreException: Error executing SQL query "SELECT "DBS"."NAME", "TBLS"."TBL_NAME", "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND "TBLS"."TBL_NAME" = ?". at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267) at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1750) at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPrimaryKeys(MetaStoreDirectSql.java:1939) at org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8213) at org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8209) at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2719) at org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeysInternal(ObjectStore.java:8221) at org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeys(ObjectStore.java:8199) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) at 
com.sun.proxy.$Proxy24.getPrimaryKeys(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_primary_keys(HiveMetaStore.java:6830) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) at
[jira] [Updated] (HIVE-21680) Backport HIVE-17644 to branch-2 and branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-21680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21680: --- Attachment: HIVE-21680.branch-2.patch Status: Patch Available (was: Open) > Backport HIVE-17644 to branch-2 and branch-2.3 > -- > > Key: HIVE-21680 > URL: https://issues.apache.org/jira/browse/HIVE-21680 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21680.branch-2.patch > > > {code:scala} > test("get statistics when not analyzed in Hive or Spark") { > val tabName = "tab1" > withTable(tabName) { > createNonPartitionedTable(tabName, analyzedByHive = false, > analyzedBySpark = false) > checkTableStats(tabName, hasSizeInBytes = true, expectedRowCounts = > None) > // ALTER TABLE SET TBLPROPERTIES invalidates some contents of Hive > specific statistics > // This is triggered by the Hive alterTable API > val describeResult = hiveClient.runSqlHive(s"DESCRIBE FORMATTED > $tabName") > val rawDataSize = extractStatsPropValues(describeResult, "rawDataSize") > val numRows = extractStatsPropValues(describeResult, "numRows") > val totalSize = extractStatsPropValues(describeResult, "totalSize") > assert(rawDataSize.isEmpty, "rawDataSize should not be shown without > table analysis") > assert(numRows.isEmpty, "numRows should not be shown without table > analysis") > assert(totalSize.isDefined && totalSize.get > 0, "totalSize is lost") > } > } > // > https://github.com/apache/spark/blob/43dcb91a4cb25aa7e1cc5967194f098029a0361e/sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala#L789-L806 > {code} > {noformat} > 06:23:46.103 WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql: Failed > to execute [SELECT "DBS"."NAME", "TBLS"."TBL_NAME", > "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", > "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" > FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = > 
"KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = > "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = > "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = > "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE > "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND > "TBLS"."TBL_NAME" = ?] with parameters [default, tab1] > javax.jdo.JDODataStoreException: Error executing SQL query "SELECT > "DBS"."NAME", "TBLS"."TBL_NAME", > "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", > "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" > FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = > "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = > "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = > "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = > "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE > "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND > "TBLS"."TBL_NAME" = ?". 
> at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) > at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267) > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1750) > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPrimaryKeys(MetaStoreDirectSql.java:1939) > at > org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8213) > at > org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8209) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2719) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeysInternal(ObjectStore.java:8221) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeys(ObjectStore.java:8199) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) > at com.sun.proxy.$Proxy24.getPrimaryKeys(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_primary_keys(HiveMetaStore.java:6830) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
[jira] [Assigned] (HIVE-21680) Backport HIVE-17644 to branch-2 and branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-21680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-21680: -- > Backport HIVE-17644 to branch-2 and branch-2.3 > -- > > Key: HIVE-21680 > URL: https://issues.apache.org/jira/browse/HIVE-21680 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > {code:scala} > test("get statistics when not analyzed in Hive or Spark") { > val tabName = "tab1" > withTable(tabName) { > createNonPartitionedTable(tabName, analyzedByHive = false, > analyzedBySpark = false) > checkTableStats(tabName, hasSizeInBytes = true, expectedRowCounts = > None) > // ALTER TABLE SET TBLPROPERTIES invalidates some contents of Hive > specific statistics > // This is triggered by the Hive alterTable API > val describeResult = hiveClient.runSqlHive(s"DESCRIBE FORMATTED > $tabName") > val rawDataSize = extractStatsPropValues(describeResult, "rawDataSize") > val numRows = extractStatsPropValues(describeResult, "numRows") > val totalSize = extractStatsPropValues(describeResult, "totalSize") > assert(rawDataSize.isEmpty, "rawDataSize should not be shown without > table analysis") > assert(numRows.isEmpty, "numRows should not be shown without table > analysis") > assert(totalSize.isDefined && totalSize.get > 0, "totalSize is lost") > } > } > // > https://github.com/apache/spark/blob/43dcb91a4cb25aa7e1cc5967194f098029a0361e/sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala#L789-L806 > {code} > {noformat} > 06:23:46.103 WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql: Failed > to execute [SELECT "DBS"."NAME", "TBLS"."TBL_NAME", > "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", > "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" > FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = > "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = > "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON 
"COLUMNS_V2"."CD_ID" = > "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = > "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE > "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND > "TBLS"."TBL_NAME" = ?] with parameters [default, tab1] > javax.jdo.JDODataStoreException: Error executing SQL query "SELECT > "DBS"."NAME", "TBLS"."TBL_NAME", > "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", > "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" > FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = > "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = > "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = > "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = > "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE > "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND > "TBLS"."TBL_NAME" = ?". > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) > at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267) > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1750) > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPrimaryKeys(MetaStoreDirectSql.java:1939) > at > org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8213) > at > org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8209) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2719) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeysInternal(ObjectStore.java:8221) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeys(ObjectStore.java:8199) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) > at com.sun.proxy.$Proxy24.getPrimaryKeys(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_primary_keys(HiveMetaStore.java:6830) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at >
[jira] [Commented] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826113#comment-16826113 ] Yuming Wang commented on HIVE-21639: [~alangates] [~hyukjin.kwon] We'd better fix HIVE-21536 because the default value of Hive 1.2's {{hive.metastore.disallow.incompatible.col.type.changes}} is false. I have tested branch-2.3 with HIVE-21639 and HIVE-21536 many times. > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21639.branch-2.3.patch, HIVE-21639.branch-2.patch > > > We hit the following exception when [upgrading Spark's built-in Hive to > 2.3.4|https://issues.apache.org/jira/browse/SPARK-23710]: > {noformat} > .. > [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' > with class loader > org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see > the next exception for details. > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown > Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown Source) > [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) > ... > {noformat} > This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to > backport part of HIVE-17561 to fix this issue for branch-2.
[jira] [Updated] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21639: --- Attachment: (was: HIVE-21638.branch-2.3.patch) > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21639.branch-2.3.patch, HIVE-21639.branch-2.patch > > > We hit the following exception when [upgrading Spark's built-in Hive to > 2.3.4|https://issues.apache.org/jira/browse/SPARK-23710]: > {noformat} > .. > [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' > with class loader > org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see > the next exception for details. > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown > Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown Source) > [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) > ... > {noformat} > This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to > backport part of HIVE-17561 to fix this issue for branch-2.
[jira] [Updated] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21639: --- Attachment: HIVE-21638.branch-2.3.patch > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21639.branch-2.3.patch, HIVE-21639.branch-2.patch > > > We hit the following exception when [upgrading Spark's built-in Hive to > 2.3.4|https://issues.apache.org/jira/browse/SPARK-23710]: > {noformat} > .. > [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' > with class loader > org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see > the next exception for details. > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown > Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) > [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) > ... > {noformat} > This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to > backport part of HIVE-17561 to fix this issue for branch-2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21639: --- Attachment: HIVE-21639.branch-2.3.patch > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21639.branch-2.3.patch, HIVE-21639.branch-2.patch > > > We hit the following exception when [upgrading Spark's built-in Hive to > 2.3.4|https://issues.apache.org/jira/browse/SPARK-23710]: > {noformat} > .. > [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' > with class loader > org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see > the next exception for details. > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown > Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) > [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) > ... > {noformat} > This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to > backport part of HIVE-17561 to fix this issue for branch-2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21639: --- Description: We hit the following exception when [upgrading Spark's built-in Hive to 2.3.4|https://issues.apache.org/jira/browse/SPARK-23710]: {noformat} .. [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see the next exception for details. [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) ... {noformat} This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix this issue for branch-2. was: We hit the following exception when upgrading Spark's built-in Hive to 2.3.4: {noformat} .. [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see the next exception for details. [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) ...
{noformat} This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix this issue for branch-2. > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21639.branch-2.patch > > > We hit the following exception when [upgrading Spark's built-in Hive to > 2.3.4|https://issues.apache.org/jira/browse/SPARK-23710]: > {noformat} > .. > [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' > with class loader > org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see > the next exception for details. > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown > Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) > [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) > ... > {noformat} > This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to > backport part of HIVE-17561 to fix this issue for branch-2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21639: --- Description: We hit the following exception when upgrading Spark's built-in Hive to 2.3.4: {noformat} .. [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see the next exception for details. [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) ... {noformat} This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix this issue for branch-2. was: We hit the following exception when upgrading Spark's built-in Hive to 2.3.4: {noformat} .. [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see the next exception for details. [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) ...
{noformat} This issue was introduced by HIVE-10632. The bug was fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix this issue for branch-2. > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21639.branch-2.patch > > > We hit the following exception when upgrading Spark's built-in Hive to 2.3.4: > {noformat} > .. > [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' > with class loader > org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see > the next exception for details. > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown > Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) > [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) > ... > {noformat} > This issue was introduced by HIVE-10632 and fixed by HIVE-17561; I'd like to > backport part of HIVE-17561 to fix this issue for branch-2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21639: --- Description: We hit the following exception when upgrading Spark's built-in Hive to 2.3.4: {noformat} .. [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see the next exception for details. [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source) [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) ... {noformat} This issue was introduced by HIVE-10632. The bug was fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix this issue for branch-2. was: The bug was fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix this issue for branch-2. > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21639.branch-2.patch > > > We hit the following exception when upgrading Spark's built-in Hive to 2.3.4: > {noformat} > .. > [info] Cause: java.sql.SQLException: Failed to start database 'metastore_db' > with class loader > org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@2439ab23, see > the next exception for details.
> [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at > org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown > Source) > [info] at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source) > [info] at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source) > ... > {noformat} > This issue was introduced by HIVE-10632. > The bug was fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix > this issue for branch-2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21639: --- Assignee: Yuming Wang Attachment: HIVE-21639.branch-2.patch Status: Patch Available (was: Open) > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21639.branch-2.patch > > > The bug was fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix > this issue for branch-2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21639) Spark test failed since HIVE-10632
[ https://issues.apache.org/jira/browse/HIVE-21639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21639: --- Description: The bug was fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix this issue for branch-2. > Spark test failed since HIVE-10632 > -- > > Key: HIVE-21639 > URL: https://issues.apache.org/jira/browse/HIVE-21639 > Project: Hive > Issue Type: Bug >Reporter: Yuming Wang >Priority: Major > > The bug was fixed by HIVE-17561; I'd like to backport part of HIVE-17561 to fix > this issue for branch-2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21588) Remove HBase dependency from hive-metastore
[ https://issues.apache.org/jira/browse/HIVE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16812884#comment-16812884 ] Yuming Wang commented on HIVE-21588: Thank you, [~vihangk1]. Looks like we don't need it. > Remove HBase dependency from hive-metastore > --- > > Key: HIVE-21588 > URL: https://issues.apache.org/jira/browse/HIVE-21588 > Project: Hive > Issue Type: Task > Components: HBase Metastore >Affects Versions: 4.0.0 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21588.01.patch, HIVE-21588.02.patch > > > HIVE-17234 has removed HBase metastore from master. But the Maven dependency has > not been removed. We should remove it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21588) Remove HBase dependency from hive-metastore
[ https://issues.apache.org/jira/browse/HIVE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21588: --- Attachment: HIVE-21588.02.patch > Remove HBase dependency from hive-metastore > --- > > Key: HIVE-21588 > URL: https://issues.apache.org/jira/browse/HIVE-21588 > Project: Hive > Issue Type: Task > Components: HBase Metastore >Affects Versions: 4.0.0 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21588.01.patch, HIVE-21588.02.patch > > > HIVE-17234 has removed HBase metastore from master. But the Maven dependency has > not been removed. We should remove it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21585) Upgrade branch-2.3 to ORC 1.3.4
[ https://issues.apache.org/jira/browse/HIVE-21585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16811712#comment-16811712 ] Yuming Wang commented on HIVE-21585: Could we upgrade branch-2.3 to 1.5.5? > Upgrade branch-2.3 to ORC 1.3.4 > --- > > Key: HIVE-21585 > URL: https://issues.apache.org/jira/browse/HIVE-21585 > Project: Hive > Issue Type: Bug >Reporter: Owen O'Malley >Assignee: Owen O'Malley >Priority: Major > > Hive's branch-2.3 currently uses ORC 1.3.3. > I'd like to upgrade it to use the bug fix release [ORC > 1.3.4|https://issues.apache.org/jira/sr/jira.issueviews:searchrequest-printable/temp/SearchRequest.html?jqlQuery=project+%3D+ORC+AND+status+%3D+Closed+AND+fixVersion+%3D+%221.3.4%22=500]. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-21589) Remove org.eclipse.jetty.orbit:javax.servlet from hive-common
[ https://issues.apache.org/jira/browse/HIVE-21589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang resolved HIVE-21589. Resolution: Duplicate Fixed by HIVE-16049 > Remove org.eclipse.jetty.orbit:javax.servlet from hive-common > - > > Key: HIVE-21589 > URL: https://issues.apache.org/jira/browse/HIVE-21589 > Project: Hive > Issue Type: Task > Components: Spark >Affects Versions: 2.3.4 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > HIVE-12783 includes org.eclipse.jetty.orbit:javax.servlet to fix the Hive on > Spark test failure. > Since Spark 2.0, we no longer need it; see SPARK-14897. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21589) Remove org.eclipse.jetty.orbit:javax.servlet from hive-common
[ https://issues.apache.org/jira/browse/HIVE-21589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-21589: -- > Remove org.eclipse.jetty.orbit:javax.servlet from hive-common > - > > Key: HIVE-21589 > URL: https://issues.apache.org/jira/browse/HIVE-21589 > Project: Hive > Issue Type: Task > Components: Spark >Affects Versions: 2.3.4 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > HIVE-12783 includes org.eclipse.jetty.orbit:javax.servlet to fix the Hive on > Spark test failure. > Since Spark 2.0, we no longer need it; see SPARK-14897. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
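The removal described in the HIVE-21589 thread above amounts to a pom.xml edit. The snippet below is a hedged sketch only — the surrounding dependency that might still pull the artifact in transitively is hypothetical (org.example:some-artifact), since the thread does not quote the actual pom:

```xml
<!-- Illustrative sketch of removing the jetty-orbit servlet artifact.
     Step 1: delete the direct declaration from the module's pom.xml. -->
<dependency>
  <groupId>org.eclipse.jetty.orbit</groupId>
  <artifactId>javax.servlet</artifactId>
</dependency>
<!-- Step 2 (if some other dependency still drags it in transitively),
     an explicit exclusion keeps it off the classpath. The enclosing
     artifact here is hypothetical. -->
<dependency>
  <groupId>org.example</groupId>
  <artifactId>some-artifact</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.eclipse.jetty.orbit</groupId>
      <artifactId>javax.servlet</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```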
[jira] [Commented] (HIVE-9452) Use HBase to store Hive metadata
[ https://issues.apache.org/jira/browse/HIVE-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16811483#comment-16811483 ] Yuming Wang commented on HIVE-9452: --- The code was removed from Hive 3.0.0: [https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-HiveMetastoreHBase] > Use HBase to store Hive metadata > > > Key: HIVE-9452 > URL: https://issues.apache.org/jira/browse/HIVE-9452 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: hbase-metastore-branch >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HBaseMetastoreApproach.pdf > > > This is an umbrella JIRA for a project to explore using HBase to store the > Hive data catalog (ie the metastore). This project has several goals: > # The current metastore implementation is slow when tables have thousands or > more partitions. With Tez and Spark engines we are pushing Hive to a point > where queries only take a few seconds to run. But planning the query can > take as long as running it. Much of this time is spent in metadata > operations. > # Due to scale limitations we have never allowed tasks to communicate > directly with the metastore. However, with the development of LLAP this > requirement will have to be relaxed. If we can relax this there are other > use cases that could benefit from this. > # Eating our own dogfood. Rather than using external systems to store our > metadata there are benefits to using other components in the Hadoop system. > The proposal is to create a new branch and work on the prototype there. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21588) Remove HBase dependency from hive-metastore
[ https://issues.apache.org/jira/browse/HIVE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16811482#comment-16811482 ] Yuming Wang commented on HIVE-21588: cc [~alangates] > Remove HBase dependency from hive-metastore > --- > > Key: HIVE-21588 > URL: https://issues.apache.org/jira/browse/HIVE-21588 > Project: Hive > Issue Type: Task > Components: HBase Metastore >Affects Versions: 4.0.0 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21588.01.patch > > > HIVE-17234 has removed HBase metastore from master. But the Maven dependency has > not been removed. We should remove it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21588) Remove HBase dependency from hive-metastore
[ https://issues.apache.org/jira/browse/HIVE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21588: --- Description: HIVE-17234 has removed HBase metastore from master. But the Maven dependency has not been removed. We should remove it. (was: HIVE-17234 has removed HBase metastore from master. But the Maven dependency has not been removed) > Remove HBase dependency from hive-metastore > --- > > Key: HIVE-21588 > URL: https://issues.apache.org/jira/browse/HIVE-21588 > Project: Hive > Issue Type: Task > Components: HBase Metastore >Affects Versions: 4.0.0 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21588.01.patch > > > HIVE-17234 has removed HBase metastore from master. But the Maven dependency has > not been removed. We should remove it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21588) Remove HBase dependency from hive-metastore
[ https://issues.apache.org/jira/browse/HIVE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21588: --- Attachment: HIVE-21588.01.patch Status: Patch Available (was: Open) > Remove HBase dependency from hive-metastore > --- > > Key: HIVE-21588 > URL: https://issues.apache.org/jira/browse/HIVE-21588 > Project: Hive > Issue Type: Task > Components: HBase Metastore >Affects Versions: 4.0.0 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21588.01.patch > > > HIVE-17234 has removed HBase metastore from master. But the Maven dependency has > not been removed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21588) Remove HBase dependency from hive-metastore
[ https://issues.apache.org/jira/browse/HIVE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-21588: -- > Remove HBase dependency from hive-metastore > --- > > Key: HIVE-21588 > URL: https://issues.apache.org/jira/browse/HIVE-21588 > Project: Hive > Issue Type: Task > Components: HBase Metastore >Affects Versions: 4.0.0 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > HIVE-17234 has removed HBase metastore from master. But the Maven dependency has > not been removed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
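For the HIVE-21588 cleanup above, one way to confirm the HBase artifacts are really gone after editing the pom is Maven's standard dependency:tree goal with a group filter. The module path below is illustrative, not taken from the thread:

```shell
# From the module whose pom.xml was edited (directory name illustrative):
cd metastore
# Print only the HBase entries in the resolved dependency tree;
# an empty result under the filter means the removal took effect.
mvn dependency:tree -Dincludes=org.apache.hbase
```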
[jira] [Commented] (HIVE-21563) Improve Table#getEmptyTable performance by disable registerAllFunctionsOnce
[ https://issues.apache.org/jira/browse/HIVE-21563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807866#comment-16807866 ] Yuming Wang commented on HIVE-21563: cc [~sershe] > Improve Table#getEmptyTable performance by disable registerAllFunctionsOnce > --- > > Key: HIVE-21563 > URL: https://issues.apache.org/jira/browse/HIVE-21563 > Project: Hive > Issue Type: Improvement >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21563.001.patch > > > We do not need registerAllFunctionsOnce when calling {{Table#getEmptyTable}}. The > stack trace: > {noformat} > at > org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDF(Registry.java:177) > at > org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDF(Registry.java:170) > at > org.apache.hadoop.hive.ql.exec.FunctionRegistry.<clinit>(FunctionRegistry.java:209) > at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247) > at > org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:231) > at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:388) > at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:332) > at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:312) > at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:288) > at > org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:913) > at > org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:877) > at > org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1479) > at > org.apache.hadoop.hive.ql.session.SessionState.getUserFromAuthenticator(SessionState.java:1150) > at org.apache.hadoop.hive.ql.metadata.Table.getEmptyTable(Table.java:180) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21563) Improve Table#getEmptyTable performance by disable registerAllFunctionsOnce
[ https://issues.apache.org/jira/browse/HIVE-21563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21563: --- Attachment: HIVE-21563.001.patch Status: Patch Available (was: Open) > Improve Table#getEmptyTable performance by disable registerAllFunctionsOnce > --- > > Key: HIVE-21563 > URL: https://issues.apache.org/jira/browse/HIVE-21563 > Project: Hive > Issue Type: Improvement >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21563.001.patch > > > We do not need registerAllFunctionsOnce when calling {{Table#getEmptyTable}}. The > stack trace: > {noformat} > at > org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDF(Registry.java:177) > at > org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDF(Registry.java:170) > at > org.apache.hadoop.hive.ql.exec.FunctionRegistry.<clinit>(FunctionRegistry.java:209) > at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247) > at > org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:231) > at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:388) > at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:332) > at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:312) > at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:288) > at > org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:913) > at > org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:877) > at > org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1479) > at > org.apache.hadoop.hive.ql.session.SessionState.getUserFromAuthenticator(SessionState.java:1150) > at org.apache.hadoop.hive.ql.metadata.Table.getEmptyTable(Table.java:180) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21563) Improve Table#getEmptyTable performance by disable registerAllFunctionsOnce
[ https://issues.apache.org/jira/browse/HIVE-21563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-21563: -- > Improve Table#getEmptyTable performance by disable registerAllFunctionsOnce > --- > > Key: HIVE-21563 > URL: https://issues.apache.org/jira/browse/HIVE-21563 > Project: Hive > Issue Type: Improvement >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > We do not need registerAllFunctionsOnce when calling {{Table#getEmptyTable}}. The > stack trace: > {noformat} > at > org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDF(Registry.java:177) > at > org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDF(Registry.java:170) > at > org.apache.hadoop.hive.ql.exec.FunctionRegistry.<clinit>(FunctionRegistry.java:209) > at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:247) > at > org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:231) > at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:388) > at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:332) > at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:312) > at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:288) > at > org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:913) > at > org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:877) > at > org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1479) > at > org.apache.hadoop.hive.ql.session.SessionState.getUserFromAuthenticator(SessionState.java:1150) > at org.apache.hadoop.hive.ql.metadata.Table.getEmptyTable(Table.java:180) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
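The stack trace in the HIVE-21563 thread above shows a cheap call, {{Table#getEmptyTable}}, transitively paying for Hive's one-time function registration. As a generic illustration only — the class and method names below are hypothetical, not Hive's actual implementation — the guard pattern behind a registerAllFunctionsOnce-style method looks like this:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a "register everything once" guard: an expensive registration
// step that only the first caller executes. The issue's point is that a
// cheap code path should not be the caller that trips this guard.
public class OnceRegistry {
    private final AtomicBoolean registered = new AtomicBoolean(false);
    private int registrations = 0;

    public void registerAllFunctionsOnce() {
        // compareAndSet ensures the expensive body runs exactly once,
        // even if several threads race to be the first caller.
        if (registered.compareAndSet(false, true)) {
            registrations++; // stand-in for the expensive UDF registration loop
        }
    }

    public int registrations() {
        return registrations;
    }

    public static void main(String[] args) {
        OnceRegistry r = new OnceRegistry();
        r.registerAllFunctionsOnce();
        r.registerAllFunctionsOnce();
        System.out.println(r.registrations()); // prints 1
    }
}
```

Whichever caller arrives first pays the full cost; the patch attached above aims to keep {{Table#getEmptyTable}} from being that caller.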
[jira] [Resolved] (HIVE-21551) Remove tomcat:jasper-* from hive-service-rpc
[ https://issues.apache.org/jira/browse/HIVE-21551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang resolved HIVE-21551. Resolution: Duplicate > Remove tomcat:jasper-* from hive-service-rpc > > > Key: HIVE-21551 > URL: https://issues.apache.org/jira/browse/HIVE-21551 > Project: Hive > Issue Type: Improvement >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > {{hive-service}} added these dependencies. {{hive-service-rpc}} does not need > them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-21552) Remove tomcat:jasper-* from hive-service-rpc
[ https://issues.apache.org/jira/browse/HIVE-21552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang resolved HIVE-21552. Resolution: Duplicate > Remove tomcat:jasper-* from hive-service-rpc > > > Key: HIVE-21552 > URL: https://issues.apache.org/jira/browse/HIVE-21552 > Project: Hive > Issue Type: Improvement >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > {{hive-service}} added these dependencies. {{hive-service-rpc}} does not need > them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21552) Remove tomcat:jasper-* from hive-service-rpc
[ https://issues.apache.org/jira/browse/HIVE-21552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-21552: -- > Remove tomcat:jasper-* from hive-service-rpc > > > Key: HIVE-21552 > URL: https://issues.apache.org/jira/browse/HIVE-21552 > Project: Hive > Issue Type: Improvement >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > {{hive-service}} added these dependencies. {{hive-service-rpc}} does not need > them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21551) Remove tomcat:jasper-* from hive-service-rpc
[ https://issues.apache.org/jira/browse/HIVE-21551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang reassigned HIVE-21551: -- > Remove tomcat:jasper-* from hive-service-rpc > > > Key: HIVE-21551 > URL: https://issues.apache.org/jira/browse/HIVE-21551 > Project: Hive > Issue Type: Improvement >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > > {{hive-service}} added these dependencies. {{hive-service-rpc}} does not need > them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21536) Backport HIVE-17764 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21536: --- Attachment: HIVE-21536-branch-2.3.patch Status: Patch Available (was: Open) > Backport HIVE-17764 to branch-2.3 > - > > Key: HIVE-21536 > URL: https://issues.apache.org/jira/browse/HIVE-21536 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.4 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > Attachments: HIVE-21536-branch-2.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21536) Backport HIVE-17764 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21536: --- Attachment: (was: HIVE-21536.01.patch) > Backport HIVE-17764 to branch-2.3 > - > > Key: HIVE-21536 > URL: https://issues.apache.org/jira/browse/HIVE-21536 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.4 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21536) Backport HIVE-17764 to branch-2.3
[ https://issues.apache.org/jira/browse/HIVE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuming Wang updated HIVE-21536: --- Status: Open (was: Patch Available) > Backport HIVE-17764 to branch-2.3 > - > > Key: HIVE-21536 > URL: https://issues.apache.org/jira/browse/HIVE-21536 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.4 >Reporter: Yuming Wang >Assignee: Yuming Wang >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)