Hi,

Just a small point: what you are doing is DDL, not a query. You are altering
the table and adding a new column.

First, can you confirm that your table really has 600,000 partitions? If so,
what is the size of the table and what is the partition column?
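
For a quick check from beeline, the sketch below should show the partition
column and, if statistics have been gathered, the table size; schema.table
here is just a placeholder for your table:

  -- lists every partition; with ~600k of them this alone may take a while
  SHOW PARTITIONS schema.table;

  -- the partition column(s) appear under "# Partition Information";
  -- totalSize / numPartitions show under Table Parameters only if stats exist
  DESCRIBE FORMATTED schema.table;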

The process seems to be timing out on the ALTER TABLE. With that many
partitions, the chance of a metastore client timeout is high. You can check
this parameter

hive.metastore.client.socket.timeout

in hive-site.xml. In my case it is set to 600s,

  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>600s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec,
      ms/msec, us/usec, ns/nsec), which is sec if not specified.
      MetaStore Client socket timeout in seconds
    </description>
  </property>

and set it at the session level before running the ALTER TABLE command:

set hive.metastore.client.socket.timeout=600;   -- or a larger value
No rows affected (0.002 seconds)
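
After setting it, a quick way to double-check and retry, as a sketch (the
ALTER TABLE is the one from your example, schema.table is your table):

  -- confirm the session now sees the new value
  set hive.metastore.client.socket.timeout;

  -- then retry the DDL
  ALTER TABLE schema.table ADD COLUMNS (new_column STRING);

If the session-level SET is not picked up, put the larger value in
hive-site.xml instead and restart the affected services.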

HTH


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>



Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Mon, 19 Apr 2021 at 15:27, Vinícius Matheus Olivieri <
olivierivi...@gmail.com> wrote:

> Hey guys!
>
> I am using Hive 3.1.2 on a recently updated EMR 6.0.0 cluster hosted on
> AWS, and I am experiencing some problems when trying to execute a simple
> query in Hive.
>
> The query in question is the following:
>
> ALTER TABLE schema.table ADD COLUMNS (new_column STRING);
>
> The table that we are executing the query on has approximately 600k
> partitions.
>
> The version of Hive was recently updated to 3.1.2 as part of the package
> included in EMR 6.0.0
> <https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-6x.html>.
>
> The curious thing is that when we were using Hive 2.3.6, the query worked
> without any trouble. So I searched for whether the version update changed
> something in the execution of ALTER TABLE, but I didn't find anything
> relevant that could be the root cause of the problem.
>
> Could you guys help me see the light at the end of the tunnel?
>
>
> The log shown on the server side is the following:
>
> 2021-04-14T14:34:45,176 ERROR [HiveServer2-Background-Pool:
> Thread-221871([])]: exec.DDLTask (:()) - Failed
>
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table.
> java.net.SocketTimeoutException: Read timed out
>
> at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:721)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:698)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3966)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:455)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224)
> ~[hive-service-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
> ~[hive-service-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316)
> ~[hive-service-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at java.security.AccessController.doPrivileged(Native Method)
> ~[?:1.8.0_242]
>
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_242]
>
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> ~[hadoop-common-3.2.1-amzn-0.jar:?]
>
> at
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:330)
> ~[hive-service-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> ~[?:1.8.0_242]
>
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_242]
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> ~[?:1.8.0_242]
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ~[?:1.8.0_242]
>
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
>
> Caused by: org.apache.thrift.transport.TTransportException:
> java.net.SocketTimeoutException: Read timed out
>
> at
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table_with_environment_context(ThriftHiveMetastore.java:2270)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table_with_environment_context(ThriftHiveMetastore.java:2254)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table_with_environmentContext(HiveMetaStoreClient.java:405)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table_with_environmentContext(SessionHiveMetaStoreClient.java:376)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[?:1.8.0_242]
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ~[?:1.8.0_242]
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[?:1.8.0_242]
>
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_242]
>
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at com.sun.proxy.$Proxy35.alter_table_with_environmentContext(Unknown
> Source) ~[?:?]
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[?:1.8.0_242]
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ~[?:1.8.0_242]
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[?:1.8.0_242]
>
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_242]
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at com.sun.proxy.$Proxy35.alter_table_with_environmentContext(Unknown
> Source) ~[?:?]
>
> at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:717)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> ... 23 more
>
> Caused by: java.net.SocketTimeoutException: Read timed out
>
> at java.net.SocketInputStream.socketRead0(Native Method) ~[?:1.8.0_242]
>
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> ~[?:1.8.0_242]
>
> at java.net.SocketInputStream.read(SocketInputStream.java:171)
> ~[?:1.8.0_242]
>
> at java.net.SocketInputStream.read(SocketInputStream.java:141)
> ~[?:1.8.0_242]
>
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> ~[?:1.8.0_242]
>
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
> ~[?:1.8.0_242]
>
> at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
> ~[?:1.8.0_242]
>
> at
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table_with_environment_context(ThriftHiveMetastore.java:2270)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table_with_environment_context(ThriftHiveMetastore.java:2254)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table_with_environmentContext(HiveMetaStoreClient.java:405)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table_with_environmentContext(SessionHiveMetaStoreClient.java:376)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[?:1.8.0_242]
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ~[?:1.8.0_242]
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[?:1.8.0_242]
>
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_242]
>
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at com.sun.proxy.$Proxy35.alter_table_with_environmentContext(Unknown
> Source) ~[?:?]
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[?:1.8.0_242]
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ~[?:1.8.0_242]
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[?:1.8.0_242]
>
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_242]
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> at com.sun.proxy.$Proxy35.alter_table_with_environmentContext(Unknown
> Source) ~[?:?]
>
> at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:717)
> ~[hive-exec-3.1.2-amzn-0.jar:3.1.2-amzn-0]
>
> ... 23 more
>
