[ https://issues.apache.org/jira/browse/HIVE-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13627283#comment-13627283 ]
Thiruvel Thirumoolan commented on HIVE-3620:
--------------------------------------------

I have had this problem in the past (in my case about 0.2 million partitions, while stress-testing dynamic partitions). The metastore crashed badly, though maybe mine was a rare case. The workaround I used was to drop one hierarchy of partitions at a time: there were many partition keys, and I would drop on the topmost key instead of dropping the whole table (a sketch of this is at the end of this message). Maybe it's worthwhile to revisit HIVE-3214 and see if there is anything we could do at the DataNucleus end.

> Drop table using hive CLI throws error when the total number of partitions in
> the table is around 50K
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-3620
>                 URL: https://issues.apache.org/jira/browse/HIVE-3620
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arup Malakar
>
> hive> drop table load_test_table_20000_0;
> FAILED: Error in metadata: org.apache.thrift.transport.TTransportException:
> java.net.SocketTimeoutException: Read timed out
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
>
> The DB used is Oracle, and Hive had only one table:
>
> select COUNT(*) from PARTITIONS;
> 54839
>
> I can try and play around with the parameter
> hive.metastore.client.socket.timeout, if that is what is being used. But it is
> 200 seconds as of now, and 200 seconds for a drop table call already seems high.
>
> Thanks,
> Arup
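For illustration, a minimal sketch of the partition-hierarchy workaround described above. The table layout here is an assumption (partition keys dt, hr, region are hypothetical); a partial partition spec on the topmost key drops every nested partition under it, so the metastore deletes partitions in smaller batches than a single DROP TABLE:

    -- Hypothetical layout: table partitioned by (dt STRING, hr STRING, region STRING).
    -- A partial partition spec naming only the topmost key drops all nested
    -- (hr, region) partitions beneath it, one top-level value at a time.
    ALTER TABLE load_test_table_20000_0 DROP PARTITION (dt='2012-10-01');
    ALTER TABLE load_test_table_20000_0 DROP PARTITION (dt='2012-10-02');
    -- repeat for the remaining top-level values, then drop the (now nearly empty) table
    DROP TABLE load_test_table_20000_0;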
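On the timeout knob mentioned in the description: hive.metastore.client.socket.timeout is the client-side read timeout for metastore Thrift calls, in seconds. A sketch of raising it for a single session via the CLI (the 600-second value is only an illustration, not a recommendation):

    # Assumed invocation: override the metastore client read timeout (seconds) at CLI startup
    hive --hiveconf hive.metastore.client.socket.timeout=600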