[jira] [Created] (HIVE-16048) Hive UDF doesn't get the right evaluate method

2017-02-27 Thread Liao, Xiaoge (JIRA)
Liao, Xiaoge created HIVE-16048:
---

 Summary: Hive UDF doesn't get the right evaluate method
 Key: HIVE-16048
 URL: https://issues.apache.org/jira/browse/HIVE-16048
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 1.1.1
Reporter: Liao, Xiaoge


Hive UDF resolution does not pick the correct evaluate method when one of the
evaluate overloads takes a variable-length (varargs) parameter.
For example:
import org.apache.hadoop.hive.ql.exec.UDF;
import java.text.ParseException; // assumed; the original snippet does not show its import

public class TestUdf extends UDF {
    // Fixed-arity overload.
    public String evaluate(String a, String b) throws ParseException {
        return a + ":" + b;
    }
    // Varargs overload: ambiguous with the one above for a two-String call.
    public String evaluate(String a, String... b) throws ParseException {
        return b[0] + ":" + a;
    }
}

As a result, the UDF may invoke the wrong overload and return an incorrect result.
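A minimal plain-Java sketch (no Hive dependencies; class and method names hypothetical) of why the two overloads are ambiguous for a reflective resolver: the compiler always prefers the fixed-arity method for a two-String call, but a resolver that scans the method list, as Hive's reflection-based UDF method resolution does, can land on the varargs overload instead and flip the result.

```java
import java.lang.reflect.Method;

// Sketch only: two evaluate overloads that differ solely in varargs, plus a
// naive reflective dispatch of the kind a UDF method resolver performs.
public class OverloadSketch {
    public static String evaluate(String a, String b) { return a + ":" + b; }
    public static String evaluate(String a, String... b) { return b[0] + ":" + a; }

    // Invoke whichever evaluate overload the reflective scan selects.
    public static String reflectiveCall(boolean pickVarargs) throws Exception {
        for (Method m : OverloadSketch.class.getDeclaredMethods()) {
            if (!m.getName().equals("evaluate")) continue;
            if (m.isVarArgs() != pickVarargs) continue;
            return pickVarargs
                    ? (String) m.invoke(null, "x", new String[]{"y"})
                    : (String) m.invoke(null, "x", "y");
        }
        throw new IllegalStateException("no evaluate method found");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(evaluate("x", "y"));    // compile time: fixed arity wins, prints x:y
        System.out.println(reflectiveCall(false)); // x:y
        System.out.println(reflectiveCall(true));  // varargs overload flips the result: y:x
    }
}
```

The fix direction suggested by this sketch is for the resolver to prefer an exact fixed-arity match before falling back to a varargs candidate, mirroring the Java Language Specification's phased overload resolution.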



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HIVE-12748) Shuffle the metastore URIs so requests can be balanced across metastores

2015-12-25 Thread Liao, Xiaoge (JIRA)
Liao, Xiaoge created HIVE-12748:
---

 Summary: Shuffle the metastore URIs so requests can be balanced across metastores
 Key: HIVE-12748
 URL: https://issues.apache.org/jira/browse/HIVE-12748
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Reporter: Liao, Xiaoge


Currently, HiveMetaStoreClient connects to the first metastore URI by default
when multiple metastore URIs are configured. Because every client sends its
requests to that first metastore, it becomes the bottleneck for processing
client requests.

I added logic to shuffle the metastore URIs so that requests are balanced
across all of the metastores.
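A minimal sketch of the proposed behavior (class and method names hypothetical, not the actual patch): split the configured URI list, shuffle it once per client, and walk it in the shuffled order so initial connections spread across all metastores.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: randomize the order of the configured metastore URIs
// so each client starts with a different metastore instead of always the first.
public class MetastoreUriShuffler {
    public static List<String> shuffledUris(String uriProperty) {
        // Same comma-separated format as hive.metastore.uris.
        List<String> uris = new ArrayList<>(Arrays.asList(uriProperty.split(",")));
        Collections.shuffle(uris); // clients still fall back down the list on failure
        return uris;
    }

    public static void main(String[] args) {
        String conf = "thrift://ms1:9083,thrift://ms2:9083,thrift://ms3:9083";
        // Same set of URIs every time, random starting point per client.
        System.out.println(shuffledUris(conf));
    }
}
```

Because each client shuffles independently, the load spreads statistically without any coordination between clients.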





[jira] [Created] (HIVE-9854) OutOfMemoryError while reading an ORC file table

2015-03-04 Thread Liao, Xiaoge (JIRA)
Liao, Xiaoge created HIVE-9854:
--

 Summary: OutOfMemoryError while reading an ORC file table
 Key: HIVE-9854
 URL: https://issues.apache.org/jira/browse/HIVE-9854
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1
Reporter: Liao, Xiaoge



Log:
Diagnostic Messages for this Task:
Error: java.io.IOException: java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at 
org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:294)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:241)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:365)
at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:591)
at 
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:166)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:407)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:160)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:155)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:280)
... 11 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at 
org.apache.hadoop.hive.ql.io.orc.DynamicByteArray.grow(DynamicByteArray.java:64)
at 
org.apache.hadoop.hive.ql.io.orc.DynamicByteArray.readAll(DynamicByteArray.java:142)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.startStripe(RecordReaderImpl.java:1547)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringTreeReader.startStripe(RecordReaderImpl.java:1337)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.startStripe(RecordReaderImpl.java:1825)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:2537)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:2950)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:2992)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:284)
at 
org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:480)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createReaderFromFile(OrcInputFormat.java:214)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:146)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:997)
at 
org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
... 16 more


FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 105   Cumulative CPU: 656.39 sec   HDFS Read: 4040094761 
HDFS Write: 139 FAIL
Total MapReduce CPU Time Spent: 10 minutes 56 seconds 390 msec





[jira] [Commented] (HIVE-6131) New columns after table alter result in null values despite data

2015-02-01 Thread Liao, Xiaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14300839#comment-14300839
 ] 

Liao, Xiaoge commented on HIVE-6131:


How was this bug fixed?

> New columns after table alter result in null values despite data
> 
>
> Key: HIVE-6131
> URL: https://issues.apache.org/jira/browse/HIVE-6131
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0, 0.12.0, 0.13.0
>Reporter: James Vaughan
>Priority: Minor
> Attachments: HIVE-6131.1.patch
>
>
> Hi folks,
> I found and verified a bug on our CDH 4.0.3 install of Hive when adding 
> columns to tables with Partitions using 'REPLACE COLUMNS'.  I dug through the 
> Jira a little bit and didn't see anything for it so hopefully this isn't just 
> noise on the radar.
> Basically, when you alter a table with partitions and then reupload data to 
> that partition, it doesn't seem to recognize the extra data that actually 
> exists in HDFS- as in, returns NULL values on the new column despite having 
> the data and recognizing the new column in the metadata.
> Here's some steps to reproduce using a basic table:
> 1.  Run this hive command:  CREATE TABLE jvaughan_test (col1 string) 
> partitioned by (day string);
> 2.  Create a simple file on the system with a couple of entries, something 
> like "hi" and "hi2" separated by newlines.
> 3.  Run this hive command, pointing it at the file:  LOAD DATA LOCAL INPATH 
> '' OVERWRITE INTO TABLE jvaughan_test PARTITION (day = '2014-01-02');
> 4.  Confirm the data with:  SELECT * FROM jvaughan_test WHERE day = 
> '2014-01-02';
> 5.  Alter the column definitions:  ALTER TABLE jvaughan_test REPLACE COLUMNS 
> (col1 string, col2 string);
> 6.  Edit your file and add a second column using the default separator 
> (ctrl+v, then ctrl+a in Vim) and add two more entries, such as "hi3" on the 
> first row and "hi4" on the second
> 7.  Run step 3 again
> 8.  Check the data again like in step 4
> For me, this is the results that get returned:
> hive> select * from jvaughan_test where day = '2014-01-01';
> OK
> hiNULL2014-01-02
> hi2   NULL2014-01-02
> This is despite the fact that there is data in the file stored by the 
> partition in HDFS.
> Let me know if you need any other information.  The only workaround for me 
> currently is to drop partitions for any I'm replacing data in and THEN 
> reupload the new data file.
> Thanks,
> -James





[jira] [Commented] (HIVE-9465) Table alias is ineffective when loading dynamic partitions

2015-01-28 Thread Liao, Xiaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14296220#comment-14296220
 ] 

Liao, Xiaoge commented on HIVE-9465:


[~xuefuz] Thanks a lot for the reply. But I think Hive SQL should support using
the table alias when loading dynamic partitions.

> Table alias is ineffective when loading dynamic partitions
> 
>
> Key: HIVE-9465
> URL: https://issues.apache.org/jira/browse/HIVE-9465
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Physical Optimizer
>Affects Versions: 0.13.1
>Reporter: Liao, Xiaoge
>
> sql:
> drop table schema_test_xgliao;
> create table schema_test_xgliao( a string) PARTITIONED  by (p String);
> set hive.exec.dynamic.partition=true;
> set hive.exec.dynamic.partition.mode=nonstrict;
> insert OVERWRITE table schema_test_xgliao
> PARTITION (p)
> select a as p, b from schema_test1_xgliao ;
> It will use "b", the last select column, as the partition value, ignoring the alias.





[jira] [Updated] (HIVE-9465) Table alias is ineffective when loading dynamic partitions

2015-01-27 Thread Liao, Xiaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liao, Xiaoge updated HIVE-9465:
---
Component/s: Physical Optimizer

> Table alias is ineffective when loading dynamic partitions
> 
>
> Key: HIVE-9465
> URL: https://issues.apache.org/jira/browse/HIVE-9465
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Physical Optimizer
>Affects Versions: 0.13.1
>Reporter: Liao, Xiaoge
>
> sql:
> drop table schema_test_xgliao;
> create table schema_test_xgliao( a string) PARTITIONED  by (p String);
> set hive.exec.dynamic.partition=true;
> set hive.exec.dynamic.partition.mode=nonstrict;
> insert OVERWRITE table schema_test_xgliao
> PARTITION (p)
> select a as p, b from schema_test1_xgliao ;
> It will use "b", the last select column, as the partition value, ignoring the alias.





[jira] [Created] (HIVE-9466) Dynamic partition returns an incorrect result

2015-01-26 Thread Liao, Xiaoge (JIRA)
Liao, Xiaoge created HIVE-9466:
--

 Summary: Dynamic partition returns an incorrect result
 Key: HIVE-9466
 URL: https://issues.apache.org/jira/browse/HIVE-9466
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.13.1
Reporter: Liao, Xiaoge


The expected value is "987654 aaa 123", but the actual result is "987654NULL123".

1.
drop table schema_test_xgliao;
create table schema_test_xgliao( a string) PARTITIONED  by (p String);

drop table schema_test1_xgliao;
create table schema_test1_xgliao(a string,b string);

insert OVERWRITE table schema_test1_xgliao
select orderid  as p, num from tmp_ubtdb.orderfix limit 10;
select * from schema_test1_xgliao;


2.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert OVERWRITE table schema_test_xgliao
PARTITION (p)
select b,a from schema_test1_xgliao;

select * from schema_test_xgliao t where t.p='123';

alter table schema_test_xgliao add columns(b string) ;

3.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert OVERWRITE table schema_test_xgliao
PARTITION (p)
select b as x, 'aaa' as y, a as z from schema_test1_xgliao;

select * from schema_test_xgliao t where t.p='123';





[jira] [Created] (HIVE-9465) Table alias is ineffective when loading dynamic partitions

2015-01-26 Thread Liao, Xiaoge (JIRA)
Liao, Xiaoge created HIVE-9465:
--

 Summary: Table alias is ineffective when loading dynamic partitions
 Key: HIVE-9465
 URL: https://issues.apache.org/jira/browse/HIVE-9465
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.13.1
Reporter: Liao, Xiaoge


sql:
drop table schema_test_xgliao;
create table schema_test_xgliao( a string) PARTITIONED  by (p String);

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert OVERWRITE table schema_test_xgliao
PARTITION (p)
select a as p, b from schema_test1_xgliao ;
It will use "b", the last select column, as the partition value, ignoring the alias.
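What the reporter observes matches positional binding: Hive fills dynamic partition columns from the trailing columns of the SELECT list, so the alias "p" on column a has no effect. A minimal sketch of that positional mapping (class and method names hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of positional dynamic-partition binding: the last
// partitionCols.size() output columns supply the partition values, so any
// alias on an earlier SELECT column is ignored.
public class DynamicPartitionBinding {
    public static Map<String, String> bindPartitions(List<String> partitionCols,
                                                     List<String> row) {
        Map<String, String> bound = new LinkedHashMap<>();
        int offset = row.size() - partitionCols.size(); // trailing columns only
        for (int i = 0; i < partitionCols.size(); i++) {
            bound.put(partitionCols.get(i), row.get(offset + i));
        }
        return bound;
    }

    public static void main(String[] args) {
        // SELECT a AS p, b ... produces the row [aValue, bValue]; the single
        // dynamic partition column p is bound to the LAST column, bValue.
        System.out.println(bindPartitions(List.of("p"), List.of("aValue", "bValue")));
        // prints {p=bValue}
    }
}
```

Under this binding, the workaround is to order the SELECT list so the intended partition value comes last, rather than relying on an alias.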





[jira] [Updated] (HIVE-9463) Table view doesn't have authorization for select by another user

2015-01-25 Thread Liao, Xiaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liao, Xiaoge updated HIVE-9463:
---
Description: 
I upgraded from 0.10.0 to 0.13.1, and we hit the problem below:
In 0.10.0, when user A creates a view, user B has the select privilege to read
its data. In 0.13.1, user B no longer has that right.

Command:
user A:
hive> create view table_view as select * from xx;
user B:
hive> select * from table_view;
Authorization failed:No privilege 'Select' found for inputs { database:default, 
table:table_view}.Use SHOW GRANT to get more details.

When I grant select on the underlying table, the view still does not have the
select privilege.


  was:
i upgrade from 0.10.0 to 0.13.1, but we meet a problem as below:
For 0.10.0, when user A create table view, user B have select privilege to read 
data.
But for 0.13.1, user B can't have rights.

Command:
user A:
hive> create view table_view as select * from xx;
user B:
hive> select * from table_view;
Authorization failed:No privilege 'Select' found for inputs { database:default, 
table:table_view}. Use SHOW GRANT to get more details.

when i grant select on the underlying table, the table view still don't have 
select privilege.



> Table view doesn't have authorization for select by another user
> ---
>
> Key: HIVE-9463
> URL: https://issues.apache.org/jira/browse/HIVE-9463
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.13.1
>Reporter: Liao, Xiaoge
>
> I upgraded from 0.10.0 to 0.13.1, and we hit the problem below:
> In 0.10.0, when user A creates a view, user B has the select privilege to
> read its data. In 0.13.1, user B no longer has that right.
> Command:
> user A:
> hive> create view table_view as select * from xx;
> user B:
> hive> select * from table_view;
> Authorization failed:No privilege 'Select' found for inputs { 
> database:default, table:table_view}.Use SHOW GRANT to get more details.
> When I grant select on the underlying table, the view still does not have
> the select privilege.





[jira] [Updated] (HIVE-9463) Table view doesn't have authorization for select by another user

2015-01-25 Thread Liao, Xiaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liao, Xiaoge updated HIVE-9463:
---
Description: 
I upgraded from 0.10.0 to 0.13.1, and we hit the problem below:
In 0.10.0, when user A creates a view, user B has the select privilege to read
its data. In 0.13.1, user B no longer has that right.

Command:
user A:
hive> create view table_view as select * from xx;
user B:
hive> select * from table_view;
Authorization failed:No privilege 'Select' found for inputs { database:default, 
table:table_view}. Use SHOW GRANT to get more details.

When I grant select on the underlying table, the view still does not have the
select privilege.


  was:
i upgrade from 0.10.0 to 0.13.1, but we meet a problem as below:
For 0.10.0, when user A create table view, user B have select privilege to read 
data.
But for 0.13.1, user B can't have rights.

Command:
user A:
hive> create view table_view as select * from xx;
user B:
hive> select * from table_view;
Authorization failed:No privilege 'Select' found for inputs { database:default, 
table:table_view}. Use SHOW GRANT to get more details.


> Table view doesn't have authorization for select by another user
> ---
>
> Key: HIVE-9463
> URL: https://issues.apache.org/jira/browse/HIVE-9463
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.13.1
>Reporter: Liao, Xiaoge
>
> I upgraded from 0.10.0 to 0.13.1, and we hit the problem below:
> In 0.10.0, when user A creates a view, user B has the select privilege to
> read its data. In 0.13.1, user B no longer has that right.
> Command:
> user A:
> hive> create view table_view as select * from xx;
> user B:
> hive> select * from table_view;
> Authorization failed:No privilege 'Select' found for inputs { 
> database:default, table:table_view}. Use SHOW GRANT to get more details.
> When I grant select on the underlying table, the view still does not have
> the select privilege.





[jira] [Created] (HIVE-9463) Table view doesn't have authorization for select by another user

2015-01-25 Thread Liao, Xiaoge (JIRA)
Liao, Xiaoge created HIVE-9463:
--

 Summary: Table view doesn't have authorization for select by another user
 Key: HIVE-9463
 URL: https://issues.apache.org/jira/browse/HIVE-9463
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.13.1
Reporter: Liao, Xiaoge


I upgraded from 0.10.0 to 0.13.1, and we hit the problem below:
In 0.10.0, when user A creates a view, user B has the select privilege to read
its data. In 0.13.1, user B no longer has that right.

Command:
user A:
hive> create view table_view as select * from xx;
user B:
hive> select * from table_view;
Authorization failed:No privilege 'Select' found for inputs { database:default, 
table:table_view}. Use SHOW GRANT to get more details.





[jira] [Updated] (HIVE-8519) Hive metastore lock wait timeout

2014-10-20 Thread Liao, Xiaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liao, Xiaoge updated HIVE-8519:
---
Description: 
We get many exceptions like the one below when dropping a table partition,
which makes Hive queries very slow. For example, it can take 250s just to
execute "use db_test;".

Log:
2014-10-17 04:04:46,873 ERROR Datastore.Persist (Log4JLogger.java:error(115)) - 
Update of object 
"org.apache.hadoop.hive.metastore.model.MStorageDescriptor@13c9c4b3" using 
statement "UPDATE `SDS` SET `CD_ID`=? WHERE `SD_ID`=?" failed : 
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4096)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4028)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2490)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2651)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2734)
at 
com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
at 
com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458)
at 
com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375)
at 
com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359)
at 
org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at 
org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at 
org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:399)
at 
org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:439)
at 
org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:374)
at 
org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
at 
org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
at 
org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5012)
at org.datanucleus.FlushOrdered.execute(FlushOrdered.java:106)
at 
org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4019)
at 
org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
at org.datanucleus.store.query.Query.prepareDatastore(Query.java:1575)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1760)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243)
at 
org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:2185)
at 
org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2131)
at 
org.apache.hadoop.hive.metastore.ObjectStore.preDropStorageDescriptor(ObjectStore.java:2162)
at 
org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionCommon(ObjectStore.java:1361)
at 
org.apache.hadoop.hive.metastore.ObjectStore.dropPartition(ObjectStore.java:1301)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy4.dropPartition(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition_common(HiveMetaStore.java:1865)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition(HiveMetaStore.java:1911)
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at $Proxy5.drop_partition(Unknown Source)

  was:
We got a lot of exception as below when doing a drop table partition:

Log:
2014-10-17 04:04:46,873 ERROR Datastore.Persist (Log4JLogger.java:error(115)) - 
Update of object 
"org.apache.hadoop.hive.metastore.model.MStorageDescriptor@13c9c4b3" using 
statement "UPDATE `SDS` SET `CD_ID`=? WHERE `SD_ID`=?" failed : 
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4096)
at com.mysql.jdbc.MysqlIO.chec

[jira] [Updated] (HIVE-8519) Hive metastore lock wait timeout

2014-10-20 Thread Liao, Xiaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liao, Xiaoge updated HIVE-8519:
---
Description: 
We get many exceptions like the one below when dropping a table partition:

Log:
2014-10-17 04:04:46,873 ERROR Datastore.Persist (Log4JLogger.java:error(115)) - 
Update of object 
"org.apache.hadoop.hive.metastore.model.MStorageDescriptor@13c9c4b3" using 
statement "UPDATE `SDS` SET `CD_ID`=? WHERE `SD_ID`=?" failed : 
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4096)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4028)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2490)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2651)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2734)
at 
com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
at 
com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458)
at 
com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375)
at 
com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359)
at 
org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at 
org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at 
org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:399)
at 
org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:439)
at 
org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:374)
at 
org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
at 
org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
at 
org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5012)
at org.datanucleus.FlushOrdered.execute(FlushOrdered.java:106)
at 
org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4019)
at 
org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
at org.datanucleus.store.query.Query.prepareDatastore(Query.java:1575)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1760)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243)
at 
org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:2185)
at 
org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2131)
at 
org.apache.hadoop.hive.metastore.ObjectStore.preDropStorageDescriptor(ObjectStore.java:2162)
at 
org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionCommon(ObjectStore.java:1361)
at 
org.apache.hadoop.hive.metastore.ObjectStore.dropPartition(ObjectStore.java:1301)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy4.dropPartition(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition_common(HiveMetaStore.java:1865)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition(HiveMetaStore.java:1911)
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at $Proxy5.drop_partition(Unknown Source)

  was:
We got a lot of exception as below when doing a drop table partition:

Log:
2014-10-17 04:04:39,300 INFO  metastore.HiveMetaStore 
(HiveMetaStore.java:logInfo(447)) - 655: source:/*.*.*.* get_table : db=* tbl=*
2014-10-17 04:04:43,180 INFO  metastore.HiveMetaStore 
(HiveMetaStore.java:logInfo(447)) - 622: source:/*.*.*.* get_table : db=* tbl=*
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4096)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4028)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2734)
at 
com.mysql.jdbc.PreparedStatemen

[jira] [Updated] (HIVE-8519) Hive metastore lock wait timeout

2014-10-20 Thread Liao, Xiaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liao, Xiaoge updated HIVE-8519:
---
Summary: Hive metastore lock wait timeout  (was: Hive lock wait timeout)

> Hive metastore lock wait timeout
> 
>
> Key: HIVE-8519
> URL: https://issues.apache.org/jira/browse/HIVE-8519
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.10.0
>Reporter: Liao, Xiaoge
>
> We get many exceptions like the one below when dropping a table partition:
> Log:
> 2014-10-17 04:04:39,300 INFO  metastore.HiveMetaStore 
> (HiveMetaStore.java:logInfo(447)) - 655: source:/*.*.*.* get_table : db=* 
> tbl=*
> 2014-10-17 04:04:43,180 INFO  metastore.HiveMetaStore 
> (HiveMetaStore.java:logInfo(447)) - 622: source:/*.*.*.* get_table : db=* 
> tbl=*
> at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1074)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4096)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4028)
> at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2734)
> at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
> at 
> com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458)
> at 
> com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375)
> at 
> com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359)
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
> at 
> org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:399)
> at 
> org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:439)
> at 
> org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:374)
> at 
> org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
> at 
> org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
> at 
> org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5012)
> at org.datanucleus.FlushOrdered.execute(FlushOrdered.java:106)
> at 
> org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4019)
> at 
> org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
> at org.datanucleus.store.query.Query.prepareDatastore(Query.java:1575)
> at org.datanucleus.store.query.Query.executeQuery(Query.java:1760)
> at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
> at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:2185)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2131)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.preDropStorageDescriptor(ObjectStore.java:2162)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionCommon(ObjectStore.java:1361)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.dropPartition(ObjectStore.java:1301)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
> at $Proxy4.dropPartition(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition_common(HiveMetaStore.java:1865)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition(HiveMetaStore.java:1911)
> at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
> at $Proxy5.drop_partition(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_partition.getResult(ThriftHiveMetastore.java:6240)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_partition.getResult(ThriftHiveMetastore.java:6224)
> at org.apache.

[jira] [Updated] (HIVE-8519) Hive lock wait timeout

2014-10-20 Thread Liao, Xiaoge (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liao, Xiaoge updated HIVE-8519:
---
Description: 
We got a lot of exception as below when doing a drop table partition:

Log:
2014-10-17 04:04:39,300 INFO  metastore.HiveMetaStore (HiveMetaStore.java:logInfo(447)) - 655: source:/*.*.*.* get_table : db=* tbl=*
2014-10-17 04:04:43,180 INFO  metastore.HiveMetaStore (HiveMetaStore.java:logInfo(447)) - 622: source:/*.*.*.* get_table : db=* tbl=*
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4096)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4028)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2734)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:399)
at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:439)
at org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:374)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
at org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5012)
at org.datanucleus.FlushOrdered.execute(FlushOrdered.java:106)
at org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4019)
at org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
at org.datanucleus.store.query.Query.prepareDatastore(Query.java:1575)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1760)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243)
at org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:2185)
at org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2131)
at org.apache.hadoop.hive.metastore.ObjectStore.preDropStorageDescriptor(ObjectStore.java:2162)
at org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionCommon(ObjectStore.java:1361)
at org.apache.hadoop.hive.metastore.ObjectStore.dropPartition(ObjectStore.java:1301)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy4.dropPartition(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition_common(HiveMetaStore.java:1865)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition(HiveMetaStore.java:1911)
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at $Proxy5.drop_partition(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_partition.getResult(ThriftHiveMetastore.java:6240)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_partition.getResult(ThriftHiveMetastore.java:6224)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:115)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:112)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(Ha

[jira] [Created] (HIVE-8519) Hive lock wait timeout

2014-10-20 Thread Liao, Xiaoge (JIRA)
Liao, Xiaoge created HIVE-8519:
--

 Summary: Hive lock wait timeout
 Key: HIVE-8519
 URL: https://issues.apache.org/jira/browse/HIVE-8519
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Liao, Xiaoge


We got a lot of exception as below when doing a drop table partition:
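For context, the failing operation is an ordinary Hive drop-partition DDL; the sketch below is a hypothetical example of that shape (the actual database, table, and partition names are not given in this report). Statements like this drive the `HMSHandler.drop_partition` / `ObjectStore.dropPartitionCommon` path seen in the trace:

```sql
-- Hypothetical names for illustration only.
ALTER TABLE my_db.my_table DROP IF EXISTS PARTITION (dt='2014-10-17');
```

On the metastore side, each such statement becomes row deletes/updates in the backing MySQL database, which is where the lock wait timeout surfaces.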

Log:
2014-10-17 04:04:39,300 INFO  metastore.HiveMetaStore (HiveMetaStore.java:logInfo(447)) - 655: source:/10.8.116.64 get_table : db=source_fltdb tbl=flt_sharerawpolicysub
2014-10-17 04:04:43,180 INFO  metastore.HiveMetaStore (HiveMetaStore.java:logInfo(447)) - 622: source:/10.8.77.119 get_table : db=dw_pubdb tbl=factpromocodedbcoupon
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1074)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4096)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4028)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2734)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:399)
at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:439)
at org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:374)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
at org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5012)
at org.datanucleus.FlushOrdered.execute(FlushOrdered.java:106)
at org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4019)
at org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
at org.datanucleus.store.query.Query.prepareDatastore(Query.java:1575)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1760)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243)
at org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:2185)
at org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2131)
at org.apache.hadoop.hive.metastore.ObjectStore.preDropStorageDescriptor(ObjectStore.java:2162)
at org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionCommon(ObjectStore.java:1361)
at org.apache.hadoop.hive.metastore.ObjectStore.dropPartition(ObjectStore.java:1301)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy4.dropPartition(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition_common(HiveMetaStore.java:1865)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition(HiveMetaStore.java:1911)
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at $Proxy5.drop_partition(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_partition.getResult(ThriftHiveMetastore.java:6240)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_partition.getResult(ThriftHiveMetastore.java:6224)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:115)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:112)
at java.security.AccessController.doPrivileged(Native Method)