[ https://issues.apache.org/jira/browse/HIVE-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18035012#comment-18035012 ]

Butao Zhang commented on HIVE-29213:
------------------------------------

Hi [~InvisibleProgrammer] 

>> My question is: do we have some boundaries about how many releases Beeline 
>> should support backward? 

Good question. Regarding client compatibility, since the birth of Hive there has never been a clear policy on how backward compatibility for clients (Beeline & the JDBC driver) should be guaranteed. In practice, that guarantee largely depends on contributors understanding their code changes and consciously avoiding compatibility breaks, so it is difficult to ensure 100% backward compatibility for clients. Just like with MySQL clients, there may be compatibility issues between major client versions. What we can do now is minimize backward-compatibility problems as much as possible to give users a better experience.

 

>> what is your opinion, does it make sense to detach Beeline on the repository 
>> level from Hive?

Decoupling the Beeline client from the Hive repository and developing it as a 
multi-version component sounds like a promising direction. However, given our 
current development capacity in the Hive community, I don't believe we have 
sufficient resources to maintain a separate Beeline client repository.

Additionally, HIVE-24348 introduced a great feature: a Java-based, lightweight, 
and standalone Beeline client that does not depend on the Hadoop & Hive tar 
package. If you run `mvn clean package -DskipTests -Pdist`, you can find it at 
this path: `hive/packaging/target/apache-hive-beeline-4.2.0-SNAPSHOT.tar.gz`. 
However, this standalone Beeline client has never been officially released. I 
often use this lightweight client for testing, and we could consider releasing 
it.

But currently, there seem to be some issues with this client on the master 
branch. Take a look at the following exception:
{code:java}
./hive/packaging/target/apache-hive-beeline-4.2.0-SNAPSHOT/bin/beeline -u "jdbc:hive2://127.0.0.1:10004/default hive"
HADOOP_HOME not set, executing beeline using JAVA
Exception in thread "main" java.util.ServiceConfigurationError: java.net.spi.InetAddressResolverProvider: Provider org.xbill.DNS.spi.DnsjavaInetAddressResolverProvider not found
        at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:593)
        at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.nextProviderClass(ServiceLoader.java:1219)
        at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNextService(ServiceLoader.java:1228)
        at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNext(ServiceLoader.java:1273)
        at java.base/java.util.ServiceLoader$2.hasNext(ServiceLoader.java:1309)
        at java.base/java.util.ServiceLoader$3.hasNext(ServiceLoader.java:1393)
        at java.base/java.util.ServiceLoader.findFirst(ServiceLoader.java:1812)
        at java.base/java.net.InetAddress.loadResolver(InetAddress.java:508)
        at java.base/java.net.InetAddress.resolver(InetAddress.java:488)
        at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1826)
        at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:1139)
        at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1818)
        at java.base/java.net.InetAddress.getLocalHost(InetAddress.java:1931)
        at org.apache.logging.log4j.core.util.NetUtils.getHostname(NetUtils.java:71)
        at org.apache.logging.log4j.core.util.NetUtils.getLocalHostname(NetUtils.java:56)
        at org.apache.logging.log4j.core.LoggerContext.lambda$setConfiguration$2(LoggerContext.java:667)
        at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1708)
        at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:667)
        at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:762)
        at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:784)
        at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:300)
        at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:257)
        at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:46)
        at org.apache.logging.log4j.LogManager.getContext(LogManager.java:118)
        at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:58)
        at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
        at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:32)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:363)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
        at org.apache.hadoop.util.ShutdownHookManager.<clinit>(ShutdownHookManager.java:70)
        at org.apache.hive.common.util.ShutdownHookManager.<clinit>(ShutdownHookManager.java:38)
        at org.apache.hive.beeline.BeeLine.addBeelineShutdownHook(BeeLine.java:1454)
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1121)
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1101)
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:567)
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:561)
        at org.apache.hive.beeline.BeeLine.main(BeeLine.java:544)
{code}
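
For reference, that ServiceConfigurationError is standard java.util.ServiceLoader behaviour on JDK 18+: java.net.spi.InetAddressResolverProvider is the JDK's pluggable DNS resolver SPI, and the message means some jar in the standalone tarball carries dnsjava's META-INF/services registration for that SPI without the matching org.xbill.DNS classes. My guess is a shading/packaging artifact; I have not confirmed which jar. Below is a small self-contained sketch (class name and the fake provider name are mine, purely for illustration) that reproduces the same failure mode with a dummy service:
{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ServiceConfigurationError;
import java.util.ServiceLoader;
import java.util.function.Supplier;

// Illustration only: a META-INF/services entry naming a class that is not on
// the classpath makes ServiceLoader fail exactly like the beeline run above.
public class MissingProviderDemo {
    public static void main(String[] args) throws Exception {
        Path root = Files.createTempDirectory("svc-demo");
        Path services = root.resolve("META-INF/services");
        Files.createDirectories(services);
        // Register a provider class that does not exist (stand-in for
        // org.xbill.DNS.spi.DnsjavaInetAddressResolverProvider missing from the tarball).
        Files.writeString(services.resolve(Supplier.class.getName()), "com.example.DoesNotExist\n");

        try (URLClassLoader cl = new URLClassLoader(new URL[]{root.toUri().toURL()})) {
            // Same lookup pattern the JDK uses for InetAddressResolverProvider.
            ServiceLoader.load(Supplier.class, cl).findFirst();
        } catch (ServiceConfigurationError e) {
            // Prints: java.util.function.Supplier: Provider com.example.DoesNotExist not found
            System.out.println(e.getMessage());
        }
    }
}
{code}
If that reading is correct, either bundling the dnsjava classes or stripping the stale service registration from the packaged jars should let the standalone Beeline start again.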

> HS2 report 'Failed to get primary keys' if using old beeline client
> -------------------------------------------------------------------
>
>                 Key: HIVE-29213
>                 URL: https://issues.apache.org/jira/browse/HIVE-29213
>             Project: Hive
>          Issue Type: Bug
>          Components: Beeline
>    Affects Versions: 4.1.0
>            Reporter: Butao Zhang
>            Priority: Critical
>
> HIVE-19996 fixed a performance issue related to beeline. However, this fix 
> causes a warning exception "Failed to get primary keys" to appear in the HS2 
> logs when using a beeline client that does not include this patch (such as 
> Hive 4.1.0) to submit SQL queries to Hive 4.2.0.
>  
> Here is a simple test procedure:
>  # Use the Hive 4.1.0 beeline client to connect to the latest version of Hive 
> 4.2.0 (which includes HIVE-19996).
> // Create a test table
> create table testdb.test12(id int);
> {color:#de350b}*// Query this table, and you will notice the following 
> exception: Failed to get primary keys ..*{color}
> select * from testdb.test12;
> {code:java}
> 2025-09-18T16:22:34,121 ERROR [HiveServer2-Handler-Pool: Thread-166] thrift.ThriftCLIService: Failed to get primary keys [request: TGetPrimaryKeysReq(sessionHandle:TSessionHandle(sessionId:THandleIdentifier(guid:56 22 37 10 00 2D 4B BB B6 FE 42 49 E2 25 75 1F, secret:C7 E6 BF 96 D1 82 4E F5 82 36 AC A7 A1 3E 7A A6)), catalogName:, tableName:test12)]
> org.apache.hive.service.cli.HiveSQLException: org.apache.thrift.protocol.TProtocolException: Required field 'db_name' is unset! Struct:PrimaryKeysRequest(db_name:null, tbl_name:test12, catName:hive)
>         at org.apache.hive.service.cli.operation.GetPrimaryKeysOperation.runInternal(GetPrimaryKeysOperation.java:120) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.operation.Operation.run(Operation.java:286) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionImpl.getPrimaryKeys(HiveSessionImpl.java:998) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
>         at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
>         at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/java.security.AccessController.doPrivileged(AccessController.java:714) ~[?:?]
>         at java.base/javax.security.auth.Subject.doAs(Subject.java:525) ~[?:?]
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953) ~[hadoop-common-3.4.1.jar:?]
>         at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at jdk.proxy2/jdk.proxy2.$Proxy41.getPrimaryKeys(Unknown Source) ~[?:?]
>         at org.apache.hive.service.cli.CLIService.getPrimaryKeys(CLIService.java:416) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.thrift.ThriftCLIService.GetPrimaryKeys(ThriftCLIService.java:919) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.rpc.thrift.TCLIService$Processor$GetPrimaryKeys.getResult(TCLIService.java:1870) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.rpc.thrift.TCLIService$Processor$GetPrimaryKeys.getResult(TCLIService.java:1850) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:38) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:250) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
>         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
>         at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
> Caused by: org.apache.thrift.protocol.TProtocolException: Required field 'db_name' is unset! Struct:PrimaryKeysRequest(db_name:null, tbl_name:test12, catName:hive)
>         at org.apache.hadoop.hive.metastore.api.PrimaryKeysRequest.validate(PrimaryKeysRequest.java:591) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_primary_keys_args.validate(ThriftHiveMetastore.java) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_primary_keys_args$get_primary_keys_argsStandardScheme.write(ThriftHiveMetastore.java) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_primary_keys_args$get_primary_keys_argsStandardScheme.write(ThriftHiveMetastore.java) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_primary_keys_args.write(ThriftHiveMetastore.java) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:71) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.send_get_primary_keys(ThriftHiveMetastore.java:4926) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_primary_keys(ThriftHiveMetastore.java:4918) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.ThriftHiveMetaStoreClient.getPrimaryKeys(ThriftHiveMetaStoreClient.java:2393) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.MetaStoreClientWrapper.getPrimaryKeys(MetaStoreClientWrapper.java:1081) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
> {code}
> *{color:#de350b}// Run show tables, and you will then see new exceptions: client.ThriftHiveMetaStoreClient: Got error flushing the cache / org.apache.thrift.TApplicationException: Unrecognized type -128{color}*
> show tables;
> {code:java}
> 2025-09-18T16:22:39,771  INFO [56223710-002d-4bbb-b6fe-4249e225751f HiveServer2-Handler-Pool: Thread-166] ql.Driver: Compiling command(queryId=hive_20250918162239_69ae0c6e-309f-4b87-a2c4-8093ae4700e8): show tables
> 2025-09-18T16:22:39,771  WARN [56223710-002d-4bbb-b6fe-4249e225751f HiveServer2-Handler-Pool: Thread-166] client.ThriftHiveMetaStoreClient: Got error flushing the cache
> org.apache.thrift.TApplicationException: Unrecognized type -128
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_flushCache(ThriftHiveMetastore.java:7493) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.flushCache(ThriftHiveMetastore.java:7481) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.ThriftHiveMetaStoreClient.flushCache(ThriftHiveMetaStoreClient.java:2541) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.MetaStoreClientWrapper.flushCache(MetaStoreClientWrapper.java:1043) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.MetaStoreClientWrapper.flushCache(MetaStoreClientWrapper.java:1043) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.MetaStoreClientWrapper.flushCache(MetaStoreClientWrapper.java:1043) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
>         at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
>         at org.apache.hadoop.hive.metastore.client.SynchronizedMetaStoreClient$SynchronizedHandler.invoke(SynchronizedMetaStoreClient.java:69) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at jdk.proxy2/jdk.proxy2.$Proxy32.flushCache(Unknown Source) ~[?:?]
>         at org.apache.hadoop.hive.metastore.client.MetaStoreClientWrapper.flushCache(MetaStoreClientWrapper.java:1043) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
>         at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:232) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at jdk.proxy2/jdk.proxy2.$Proxy32.flushCache(Unknown Source) ~[?:?]
>         at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:198) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:109) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:498) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:450) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:414) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:408) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:205) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:268) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.operation.Operation.run(Operation.java:286) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:558) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:543) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
>         at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
>         at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/java.security.AccessController.doPrivileged(AccessController.java:714) ~[?:?]
>         at java.base/javax.security.auth.Subject.doAs(Subject.java:525) ~[?:?]
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953) ~[hadoop-common-3.4.1.jar:?]
>         at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at jdk.proxy2/jdk.proxy2.$Proxy41.executeStatementAsync(Unknown Source) ~[?:?]
>         at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:652) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1670) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1650) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:38) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:250) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
>         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
>         at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
> 2025-09-18T16:22:39,773  WARN [56223710-002d-4bbb-b6fe-4249e225751f HiveServer2-Handler-Pool: Thread-166] metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 1s. getDatabase
> org.apache.thrift.transport.TTransportException: Socket is closed by peer.
>         at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:184) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.transport.TTransport.readAll(TTransport.java:109) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:464) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:362) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:245) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_database_req(ThriftHiveMetastore.java:1497) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_database_req(ThriftHiveMetastore.java:1484) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.ThriftHiveMetaStoreClient.getDatabase(ThriftHiveMetaStoreClient.java:1979) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.MetaStoreClientWrapper.getDatabase(MetaStoreClientWrapper.java:211) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.getDatabase(SessionHiveMetaStoreClient.java:2281) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.client.MetaStoreClientWrapper.getDatabase(MetaStoreClientWrapper.java:211) ~[hive-exec-4.2.0-SNAPSHOT.jar:4.2.0-SNAPSHOT]
>         at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
> {code}
>  
>  
> {color:#172b4d}HIVE-19996 may have broken the backward compatibility of 
> beeline. Accessing Hive 4.2 through the Hive 4.1 beeline client should 
> theoretically not present compatibility issues. We need to ensure backward 
> compatibility as much as possible to provide users with a smooth 
> experience.{color}
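
A note on the mechanism, judging from the quoted stack traces: db_name is a required field of the metastore's PrimaryKeysRequest, so when HS2 builds that request from a GetPrimaryKeys call in which the old (pre-HIVE-19996) Beeline left the schema name unset, the Thrift-generated validate() rejects the struct before it is even sent to the metastore (PrimaryKeysRequest.validate inside sendBase in the trace). The following is a minimal sketch of just that validation step; it assumes the Thrift-generated metastore API classes (e.g. from hive-exec or hive-standalone-metastore) are on the classpath, and the class name is mine:
{code:java}
import org.apache.hadoop.hive.metastore.api.PrimaryKeysRequest;
import org.apache.thrift.TException;

// Sketch only; reproduces the validation error outside HS2.
public class PrimaryKeysValidationSketch {
    public static void main(String[] args) {
        // Mirror the struct printed in the HS2 log:
        // PrimaryKeysRequest(db_name:null, tbl_name:test12, catName:hive)
        PrimaryKeysRequest req = new PrimaryKeysRequest();
        req.setTbl_name("test12");
        req.setCatName("hive");
        try {
            // Thrift-generated structs check required fields before writing to the
            // wire (sendBase -> write -> validate in the quoted stack trace).
            req.validate();
        } catch (TException e) {
            // Expected: Required field 'db_name' is unset! Struct:PrimaryKeysRequest(...)
            System.out.println(e.getMessage());
        }
    }
}
{code}
If we want to keep pre-4.2 clients working, the fallback presumably has to live on the server side (for example, defaulting db_name when an old client leaves the schema name empty), since the already-released 4.1.0 Beeline cannot be patched retroactively.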


