[
https://issues.apache.org/jira/browse/HIVE-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13635676#comment-13635676
]
Thiruvel Thirumoolan commented on HIVE-3620:
--------------------------------------------
[~sho.shimauchi] Did you have any special parameters for DataNucleus to get
this working? I tried disabling the DataNucleus cache and also setting up
connection pooling, but that does not seem to help. I will also post a snapshot
of the memory dump I have. BTW, I tried dropping a table with 45k partitions
with the batch size configured to both 100 and 1000.
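For reference, the knobs I experimented with look roughly like this (property
names are from the standard Hive/DataNucleus configuration; the values here are
illustrative, not a recommendation):

```sql
-- Disable the DataNucleus level-2 cache
SET datanucleus.cache.level2.type=none;
-- Pool connections to the metastore DB (BoneCP or DBCP, depending on version)
SET datanucleus.connectionPoolingType=BONECP;
-- Cap how many objects the metastore fetches per round trip
SET hive.metastore.batch.retrieve.max=100;
```

Note that with a remote metastore these have to go into the metastore server's
hive-site.xml; setting them from the CLI session only affects an embedded
metastore.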
> Drop table using hive CLI throws error when the total number of partition in
> the table is around 50K.
> -----------------------------------------------------------------------------------------------------
>
> Key: HIVE-3620
> URL: https://issues.apache.org/jira/browse/HIVE-3620
> Project: Hive
> Issue Type: Bug
> Reporter: Arup Malakar
>
> hive> drop table load_test_table_20000_0;
>
> FAILED: Error in metadata: org.apache.thrift.transport.TTransportException:
> java.net.SocketTimeoutException: Read timedout
>
>
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> The DB used is Oracle and hive had only one table:
> select COUNT(*) from PARTITIONS;
> 54839
> I can try to play around with the parameter
> hive.metastore.client.socket.timeout if that is what is being used. But it is
> 200 seconds as of now, and 200 seconds for a drop table call seems high
> already.
> Thanks,
> Arup