[jira] [Created] (HIVE-7018) Table and Partition tables have column LINK_TARGET_ID in Mysql scripts but not others

2014-05-06 Thread Brock Noland (JIRA)
Brock Noland created HIVE-7018:
--

 Summary: Table and Partition tables have column LINK_TARGET_ID in 
Mysql scripts but not others
 Key: HIVE-7018
 URL: https://issues.apache.org/jira/browse/HIVE-7018
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland


It appears that at least the Postgres and Oracle scripts do not have the
LINK_TARGET_ID column, while the MySQL scripts do.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one

2014-05-06 Thread Bing Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990480#comment-13990480
 ] 

Bing Li commented on HIVE-6990:
---

Hi, [~sershe]
The failures in build #88 are not related to this patch.

If we don't set javax.jdo.mapping.Schema in hive-site.xml, the value of the
schema is empty, and I can't get the table schema info from the database
either.

Do you know of a good way to get this info?

Thank you!

 Direct SQL fails when the explicit schema setting is different from the 
 default one
 ---

 Key: HIVE-6990
 URL: https://issues.apache.org/jira/browse/HIVE-6990
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
 Environment: hive + derby
Reporter: Bing Li
Assignee: Bing Li
 Fix For: 0.14.0

 Attachments: HIVE-6990.1.patch, HIVE-6990.2.patch


 I got the following ERROR in hive.log
 2014-04-23 17:30:23,331 ERROR metastore.ObjectStore 
 (ObjectStore.java:handleDirectSqlError(1756)) - Direct SQL failed, falling 
 back to ORM
 javax.jdo.JDODataStoreException: Error executing SQL query select 
 PARTITIONS.PART_ID from PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = 
 TBLS.TBL_ID   inner join DBS on TBLS.DB_ID = DBS.DB_ID inner join 
 PARTITION_KEY_VALS as FILTER0 on FILTER0.PART_ID = PARTITIONS.PART_ID and 
 FILTER0.INTEGER_IDX = 0 where TBLS.TBL_NAME = ? and DBS.NAME = ? and 
 ((FILTER0.PART_KEY_VAL = ?)).
 at 
 org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
 at 
 org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:321)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:181)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilter(MetaStoreDirectSql.java:98)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:1833)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:1806)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
 at com.sun.proxy.$Proxy11.getPartitionsByFilter(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:3310)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
 at com.sun.proxy.$Proxy12.get_partitions_by_filter(Unknown Source)
 Reproduce steps:
 1. set the following properties in hive-site.xml
  <property>
    <name>javax.jdo.mapping.Schema</name>
    <value>HIVE</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>user1</value>
  </property>
 2. execute hive queries
 hive> create table mytbl ( key int, value string);
 hive> load data local inpath 'examples/files/kv1.txt' overwrite into table 
 mytbl;
 hive> select * from mytbl;
 hive> create view myview partitioned on (value) as select key, value from 
 mytbl where key=98;
 hive> alter view myview add partition (value='val_98') partition 
 (value='val_xyz');
 hive> alter view myview drop partition (value='val_xyz');





[jira] [Updated] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one

2014-05-06 Thread Bing Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Li updated HIVE-6990:
--

Attachment: HIVE-6990.2.patch

patch based on the latest trunk



[jira] [Created] (HIVE-7019) Hive cannot build against Hadoop branch-2 after YARN-1553

2014-05-06 Thread Fengdong Yu (JIRA)
Fengdong Yu created HIVE-7019:
-

 Summary: Hive cannot build against Hadoop branch-2 after YARN-1553
 Key: HIVE-7019
 URL: https://issues.apache.org/jira/browse/HIVE-7019
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.13.0
Reporter: Fengdong Yu


Hive cannot build against Hadoop branch-2 after YARN-1553. I'll upload a patch 
later.





[jira] [Updated] (HIVE-7019) Hive cannot build against Hadoop branch-2 after YARN-1553

2014-05-06 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HIVE-7019:
--

Attachment: HIVE-7019.patch



[jira] [Updated] (HIVE-7019) Hive cannot build against Hadoop branch-2 after YARN-1553

2014-05-06 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HIVE-7019:
--

Status: Patch Available  (was: Open)



[jira] [Commented] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one

2014-05-06 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990492#comment-13990492
 ] 

Hive QA commented on HIVE-6990:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12643520/HIVE-6990.2.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/129/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/129/console

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-common ---
[INFO] Compiling 39 source files to 
/data/hive-ptest/working/apache-svn-trunk-source/common/target/classes
[WARNING] Note: 
/data/hive-ptest/working/apache-svn-trunk-source/common/src/java/org/apache/hadoop/hive/common/FileUtils.java
 uses or overrides a deprecated API.
[WARNING] Note: Recompile with -Xlint:deprecation for details.
[WARNING] Note: 
/data/hive-ptest/working/apache-svn-trunk-source/common/src/java/org/apache/hadoop/hive/common/ObjectPair.java
 uses unchecked or unsafe operations.
[WARNING] Note: Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 4 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-common ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/common/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/common/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/common/target/tmp/conf
 [copy] Copying 5 files to 
/data/hive-ptest/working/apache-svn-trunk-source/common/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-common ---
[INFO] Compiling 13 source files to 
/data/hive-ptest/working/apache-svn-trunk-source/common/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-common ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-common ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-svn-trunk-source/common/target/hive-common-0.14.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hive-common ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-common ---
[INFO] Installing 
/data/hive-ptest/working/apache-svn-trunk-source/common/target/hive-common-0.14.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/hive-common/0.14.0-SNAPSHOT/hive-common-0.14.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-svn-trunk-source/common/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/hive-common/0.14.0-SNAPSHOT/hive-common-0.14.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive Serde 0.14.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-serde ---
[INFO] Deleting /data/hive-ptest/working/apache-svn-trunk-source/serde 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-serde 
---
[INFO] Source directory: 
/data/hive-ptest/working/apache-svn-trunk-source/serde/src/gen/protobuf/gen-java
 added.
[INFO] Source directory: 
/data/hive-ptest/working/apache-svn-trunk-source/serde/src/gen/thrift/gen-javabean
 added.
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-serde ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hive-serde ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-svn-trunk-source/serde/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-serde ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-serde ---
[INFO] Compiling 351 source files to 
/data/hive-ptest/working/apache-svn-trunk-source/serde/target/classes
[WARNING] Note: Some input files use or override a deprecated API.
[WARNING] Note: Recompile with -Xlint:deprecation for details.
[WARNING] Note: Some input files use unchecked or unsafe operations.
[WARNING] Note: 

[jira] [Commented] (HIVE-6204) The result of show grant / show role should be tabular format

2014-05-06 Thread J. Freiknecht (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990498#comment-13990498
 ] 

J. Freiknecht commented on HIVE-6204:
-

A description of the table would be useful. I wonder, in particular, what the 
boolean value means. Does it mean that role1 has NO Create privilege?

 The result of show grant / show role should be tabular format
 -

 Key: HIVE-6204
 URL: https://issues.apache.org/jira/browse/HIVE-6204
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6204.1.patch.txt, HIVE-6204.2.patch.txt, 
 HIVE-6204.3.patch.txt


 {noformat}
 hive> show grant role role1 on all;
 OK
 database  default
 table src
 principalName role1
 principalType ROLE
 privilege Create
 grantTime Wed Dec 18 14:17:56 KST 2013
 grantor   navis
 database  default
 table srcpart
 principalName role1
 principalType ROLE
 privilege Update
 grantTime Wed Dec 18 14:18:28 KST 2013
 grantor   navis
 {noformat}
 This should be something like below, especially for JDBC clients.
 {noformat}
 hive> show grant role role1 on all;
 OK
 default   src role1   ROLECreate  false   
 1387343876000   navis
 default   srcpart role1   ROLEUpdate  false   
 1387343908000   navis
 {noformat}
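As a side note on reading the proposed output: the human-readable grantTime ("Wed Dec 18 14:17:56 KST 2013") becomes epoch milliseconds (1387343876000) in the tabular form. A small sketch of that conversion, assuming the KST timestamp above means the Asia/Seoul zone:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class GrantTimeDemo {
    public static void main(String[] args) {
        // "Wed Dec 18 14:17:56 KST 2013" from the current output,
        // interpreted in Asia/Seoul (KST, UTC+9).
        ZonedDateTime grantTime =
            ZonedDateTime.of(2013, 12, 18, 14, 17, 56, 0, ZoneId.of("Asia/Seoul"));
        long epochMillis = grantTime.toInstant().toEpochMilli();
        // Matches the value in the proposed tabular output.
        System.out.println(epochMillis); // 1387343876000
        // And back again, for human-readable display.
        System.out.println(Instant.ofEpochMilli(epochMillis)
            .atZone(ZoneId.of("Asia/Seoul"))
            .format(DateTimeFormatter.ofPattern("EEE MMM dd HH:mm:ss zzz yyyy",
                                                Locale.ENGLISH)));
    }
}
```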





[jira] [Created] (HIVE-7020) NPE when there is no plan file.

2014-05-06 Thread Fengdong Yu (JIRA)
Fengdong Yu created HIVE-7020:
-

 Summary: NPE when there is no plan file.
 Key: HIVE-7020
 URL: https://issues.apache.org/jira/browse/HIVE-7020
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.13.0
Reporter: Fengdong Yu


Hive throws NPE when there is no plan file.

Exception message:
{code}
2014-05-06 18:03:17,749 INFO [main] org.apache.hadoop.hive.ql.exec.Utilities: 
No plan file found: 
file:/tmp/test/hive_2014-05-06_18-02-58_539_232619201891510265-1/-mr-10001/8cf1c965-b173-4482-a016-4a51a74b9324/map.xml
2014-05-06 18:03:17,750 WARN [main] org.apache.hadoop.mapred.YarnChild: 
Exception running child : java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:437)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:430)
at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:587)
at 
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:168)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:409)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
{code}

I looked through the code in ql/exec/Utilities.java:
{code}
private static BaseWork getBaseWork(Configuration conf, String name) {
  ...
  } catch (FileNotFoundException fnf) {
    // happens, e.g. no reduce work
    LOG.info("No plan file found: " + path);
    return null;
  }
}
{code}

This code is called from HiveInputFormat.java:
{code}
  protected void init(JobConf job) {
mrwork = Utilities.getMapWork(job);
pathToPartitionInfo = mrwork.getPathToPartitionInfo();
  }
{code}

mrwork is null here, which causes the NPE.
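A minimal hardening would be to check the return value and fail with a descriptive IOException instead of letting the NPE surface later. The sketch below uses stand-in classes rather than the real Hive/Hadoop types (MapWork's contents, the plan lookup, and the error message are all stand-ins):

```java
import java.io.IOException;
import java.util.Collections;
import java.util.Map;

// Stand-in for org.apache.hadoop.hive.ql.plan.MapWork.
class MapWork {
    Map<String, String> getPathToPartitionInfo() { return Collections.emptyMap(); }
}

class PlanLookup {
    // Mirrors the contract of Utilities.getMapWork(job):
    // may return null when no plan file exists.
    static MapWork getMapWork(boolean planFileExists) {
        return planFileExists ? new MapWork() : null;
    }
}

public class InitSketch {
    Map<String, String> pathToPartitionInfo;

    // Hypothetical hardened version of HiveInputFormat.init(JobConf).
    void init(boolean planFileExists) throws IOException {
        MapWork mrwork = PlanLookup.getMapWork(planFileExists);
        if (mrwork == null) {
            // Fail fast with a clear message instead of an NPE two frames later.
            throw new IOException("No map plan found for this job");
        }
        pathToPartitionInfo = mrwork.getPathToPartitionInfo();
    }
}
```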







Re: Is there a bug in the hiveserver2 metastore that causes it to hold huge objects

2014-05-06 Thread Meng QingPing
Weird, the picture was attached in the sent mail.

Anyway, sending it again.


2014-05-05 12:17 GMT+08:00 Chandra Reddy chandu...@gmail.com:

 seems you have missed attachment.
 -Chandra


 On Sun, May 4, 2014 at 6:57 PM, Meng QingPing mqingp...@gmail.com wrote:

  I run hiveserver2 with the metastore in MySQL. The hiveserver2 hit an OOM, and
  the heap dump shows huge objects held by org.datanucleus.api.jdo.
  JDOPersistenceManagerFactory, as attached. It seems org.datanucleus.api.jdo.
  JDOPersistenceManager is not released. Hive version is 0.13.
 
  Thanks,
  Jack
 



 --
 Thanks,
 -Chandra.




-- 
Thanks,
Qingping


[jira] [Updated] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one

2014-05-06 Thread Bing Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Li updated HIVE-6990:
--

Attachment: HIVE-6990.3.patch



[jira] [Commented] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one

2014-05-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990679#comment-13990679
 ] 

Sergey Shelukhin commented on HIVE-6990:


Can you elaborate on your question? I'm not sure I understand. If the schema is 
empty, then the table names are the defaults and no prefix is needed in the 
queries, right?
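To illustrate the prefix point with a hypothetical helper (this is not the actual MetaStoreDirectSql code): with an empty schema the generated direct SQL can use bare table names, while an explicit schema such as the HIVE value from the reproduction steps would require qualifying every table reference.

```java
public class SchemaPrefixSketch {
    // Hypothetical helper: qualify a metastore table name with the
    // configured javax.jdo.mapping.Schema value, if any.
    static String qualify(String schema, String table) {
        return (schema == null || schema.isEmpty()) ? table : schema + "." + table;
    }

    public static void main(String[] args) {
        // Default setup: no schema configured, bare names work.
        System.out.println("select " + qualify("", "PARTITIONS")
            + ".PART_ID from " + qualify("", "PARTITIONS"));
        // Explicit schema, as in the reproduction steps (Schema=HIVE):
        // every table reference must become HIVE.PARTITIONS etc.
        System.out.println("select " + qualify("HIVE", "PARTITIONS")
            + ".PART_ID from " + qualify("HIVE", "PARTITIONS"));
    }
}
```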



Re: Is there a bug in the hiveserver2 metastore that causes it to hold huge objects

2014-05-06 Thread Sergey Shelukhin
I don't think the dev list supports attachments. Can you post it to some image
sharing service?


On Tue, May 6, 2014 at 3:20 AM, Meng QingPing mqingp...@gmail.com wrote:


 Weird, the picture was attached in the sent mail.

 Anyway, sending it again.


 2014-05-05 12:17 GMT+08:00 Chandra Reddy chandu...@gmail.com:

 seems you have missed attachment.
 -Chandra


 On Sun, May 4, 2014 at 6:57 PM, Meng QingPing mqingp...@gmail.com
 wrote:

  I run hiveserver2 with the metastore in MySQL. The hiveserver2 hit an OOM, and
  the heap dump shows huge objects held by org.datanucleus.api.jdo.
  JDOPersistenceManagerFactory, as attached. It seems org.datanucleus.api.jdo.
  JDOPersistenceManager is not released. Hive version is 0.13.
 
  Thanks,
  Jack
 



 --
 Thanks,
 -Chandra.




 --
 Thanks,
 Qingping




[jira] [Commented] (HIVE-7009) HIVE_USER_INSTALL_DIR could not be set to non-HDFS filesystem

2014-05-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990734#comment-13990734
 ] 

Sergey Shelukhin commented on HIVE-7009:


Sounds reasonable to me. I wonder what the rationale for the HDFS check was in 
the first place, and whether a less strict check could be added instead. 
[~hagleitn], can you comment?
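One possible shape for a less strict check, sketched here as a pure function over the URI scheme (hypothetical: the real code tests the FileSystem instance, and the set of rejected schemes below is only an assumption): reject only filesystems that cannot serve as a shared install directory, instead of requiring HDFS specifically.

```java
import java.net.URI;
import java.util.Set;

public class InstallDirCheck {
    // Hypothetical relaxed check: instead of requiring HDFS, only reject
    // schemes that cannot serve as a shared, job-visible install dir.
    private static final Set<String> UNUSABLE = Set.of("file");

    static boolean isUsableInstallDir(String uriStr) {
        String scheme = URI.create(uriStr).getScheme();
        // A null scheme (relative path) is also rejected.
        return scheme != null && !UNUSABLE.contains(scheme);
    }

    public static void main(String[] args) {
        System.out.println(isUsableInstallDir("hdfs://nn:8020/user"));  // true
        System.out.println(isUsableInstallDir(
            "wasb://container@account.blob.core.windows.net/user"));    // true
        System.out.println(isUsableInstallDir("file:///tmp/user"));     // false
    }
}
```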

 HIVE_USER_INSTALL_DIR could not be set to non-HDFS filesystem
 --

 Key: HIVE-7009
 URL: https://issues.apache.org/jira/browse/HIVE-7009
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.13.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HIVE-7009.patch


 In {{hive/ql/exec/tez/DagUtils.java}}, we enforce that the user path obtained 
 from {{HIVE_USER_INSTALL_DIR}} is on HDFS. This makes it impossible to run 
 Hive+Tez jobs on a non-HDFS filesystem, e.g. WASB. The relevant code is as follows:
 {noformat}
   public Path getDefaultDestDir(Configuration conf) throws LoginException, 
 IOException {
 UserGroupInformation ugi = 
 ShimLoader.getHadoopShims().getUGIForConf(conf);
 String userName = ShimLoader.getHadoopShims().getShortUserName(ugi);
 String userPathStr = HiveConf.getVar(conf, 
 HiveConf.ConfVars.HIVE_USER_INSTALL_DIR);
 Path userPath = new Path(userPathStr);
 FileSystem fs = userPath.getFileSystem(conf);
 if (!(fs instanceof DistributedFileSystem)) {
   throw new IOException(ErrorMsg.INVALID_HDFS_URI.format(userPathStr));
 }
 {noformat}
 Exception when running jobs with defaultFs configured to WASB:
 {noformat}
 2014-05-01 00:21:39,847 ERROR exec.Task (TezTask.java:execute(192)) - Failed 
 to execute tez graph.
 java.io.IOException: 
 wasb://hdi31-chuan...@clhdistorage.blob.core.windows.net/user is not a hdfs 
 uri
   at 
 org.apache.hadoop.hive.ql.exec.tez.DagUtils.getDefaultDestDir(DagUtils.java:662)
   at 
 org.apache.hadoop.hive.ql.exec.tez.DagUtils.getHiveJarDirectory(DagUtils.java:759)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createJarLocalResource(TezSessionState.java:321)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:159)
   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:154)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1504)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1271)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1089)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:912)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {noformat}
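For illustration only (hypothetical names, not Hive's actual DagUtils code): a scheme-based comparison against the configured default FS would accept a WASB user directory where the instanceof DistributedFileSystem check above rejects it. A minimal, self-contained sketch:

```java
import java.net.URI;

public class DestDirCheck {
    // True when the user path is on the same filesystem scheme as the
    // configured default FS (hdfs, wasb, ...). Names are illustrative.
    static boolean matchesDefaultFs(String userPathStr, String defaultFsUri) {
        String pathScheme = URI.create(userPathStr).getScheme();
        String defaultScheme = URI.create(defaultFsUri).getScheme();
        return pathScheme != null && pathScheme.equalsIgnoreCase(defaultScheme);
    }

    public static void main(String[] args) {
        // A wasb:// user dir is consistent with a wasb:// default FS...
        System.out.println(matchesDefaultFs(
            "wasb://container@account.blob.core.windows.net/user",
            "wasb://container@account.blob.core.windows.net/"));
        // ...but not with an hdfs:// default FS.
        System.out.println(matchesDefaultFs(
            "wasb://container@account.blob.core.windows.net/user",
            "hdfs://namenode:8020/"));
    }
}
```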



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6430) MapJoin hash table has large memory overhead

2014-05-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990758#comment-13990758
 ] 

Sergey Shelukhin commented on HIVE-6430:


Will remove on commit. [~hagleitn] can you take a look? [~t3rmin4t0r] signed 
off on RB but he's not formally a committer

 MapJoin hash table has large memory overhead
 

 Key: HIVE-6430
 URL: https://issues.apache.org/jira/browse/HIVE-6430
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-6430.01.patch, HIVE-6430.02.patch, 
 HIVE-6430.03.patch, HIVE-6430.04.patch, HIVE-6430.05.patch, 
 HIVE-6430.06.patch, HIVE-6430.07.patch, HIVE-6430.08.patch, 
 HIVE-6430.09.patch, HIVE-6430.10.patch, HIVE-6430.11.patch, 
 HIVE-6430.12.patch, HIVE-6430.12.patch, HIVE-6430.patch


 Right now, in some queries, I see that storing e.g. 4 ints (2 for key and 2 
 for row) can take several hundred bytes, which is ridiculous. I am reducing 
 the size of MJKey and MJRowContainer in other jiras, but in general we don't 
 need to have java hash table there.  We can either use primitive-friendly 
 hashtable like the one from HPPC (Apache-licenced), or some variation, to map 
 primitive keys to single row storage structure without an object per row 
 (similar to vectorization).





[GitHub] hive pull request: Branch 0.13

2014-05-06 Thread lakshmi83
GitHub user lakshmi83 opened a pull request:

https://github.com/apache/hive/pull/14

Branch 0.13



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/hive branch-0.13

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/14.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14


commit 21bc97ca4d7702a2c78cb06ce1e461ccbc247be5
Author: Harish Butani rhbut...@apache.org
Date:   2014-03-05T01:09:50Z

Branching for 0.13 releases

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1574276 
13f79535-47bb-0310-9956-ffa450edef68

commit c49ac879180e399cdc83e613ccc6c9e67f8c799c
Author: Harish Butani rhbut...@apache.org
Date:   2014-03-05T01:31:23Z

Preparing for release 0.13.0

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1574279 
13f79535-47bb-0310-9956-ffa450edef68

commit 69436bea1ecebba8be164b4c814da8c7a9b436ea
Author: Vikram Dixit K vik...@apache.org
Date:   2014-03-05T19:55:24Z

HIVE-6325: Enable using multiple concurrent sessions in tez (Vikram Dixit, 
reviewed by Gunther Hagleitner)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1574640 
13f79535-47bb-0310-9956-ffa450edef68

commit 376aed5024a9f25d7fedc6950e5b49e80810eae2
Author: Ashutosh Chauhan hashut...@apache.org
Date:   2014-03-05T20:02:50Z

HIVE-6548 : Missing owner name and type fields in schema script for DBS 
table (Ashutosh Chauhan via Thejas Nair)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1574643 
13f79535-47bb-0310-9956-ffa450edef68

commit 362add70a7e2e7729750530d894c7bd3335c3b1a
Author: Sergey Shelukhin ser...@apache.org
Date:   2014-03-07T18:22:23Z

HIVE-6537 NullPointerException when loading hashtable for MapJoin directly 
(Sergey Shelukhin and Navis, reviewed by Gunther Hagleitner)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1575353 
13f79535-47bb-0310-9956-ffa450edef68

commit dfe9cfdf82e1b43b47df5fb8457e2d8066bd407b
Author: Ashutosh Chauhan hashut...@apache.org
Date:   2014-03-07T19:34:07Z

HIVE-6555 : Fix metastore version in mysql script(Ashutosh Chauhan via 
Prasad Mujumdar)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1575380 
13f79535-47bb-0310-9956-ffa450edef68

commit a3c4d93772d311df394b8e546ad0b828eb148c09
Author: Ashutosh Chauhan hashut...@apache.org
Date:   2014-03-07T19:37:14Z

HIVE-6417 : sql std auth - new users in admin role config should get added 
(Ashutosh Chauhan via Thejas Nair)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1575383 
13f79535-47bb-0310-9956-ffa450edef68

commit c8265e092bf54667aaa07c9066930d3600ccc50f
Author: Gunther Hagleitner gunt...@apache.org
Date:   2014-03-07T20:36:11Z

HIVE-6566: Incorrect union-all plan with map-joins on Tez (Gunther 
Hagleitner, reviewed by Sergey Shelukhin)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1575393 
13f79535-47bb-0310-9956-ffa450edef68

commit 5e1a28770883d96ba939af66f616d4dca3158e56
Author: Harish Butani rhbut...@apache.org
Date:   2014-03-08T17:23:33Z

HIVE-6403 uncorrelated subquery is failing with auto.convert.join=true 
(Navis via Harish Butani)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1575574 
13f79535-47bb-0310-9956-ffa450edef68

commit 6a1de74e1dfbc42a280efc128e035bc49a87168d
Author: Thejas Madhavan Nair the...@apache.org
Date:   2014-03-08T18:14:35Z

HIVE-5901 : Query cancel should stop running MR tasks (Navis via Thejas 
Nair)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1575583 
13f79535-47bb-0310-9956-ffa450edef68

commit 6c75c83b883aaca5bfd7ff7718e4a278aae43e7b
Author: Jitendra Nath Pandey jiten...@apache.org
Date:   2014-03-08T23:31:13Z

HIVE-6508 : Mismatched aggregation results between vector and non-vector 
mode with decimal field (Remus Rusanu via jitendra)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1575633 
13f79535-47bb-0310-9956-ffa450edef68

commit fbfe781436e3909ba7b5b33600d847cfc3d79cfd
Author: Ashutosh Chauhan hashut...@apache.org
Date:   2014-03-09T01:40:21Z

HIVE-6573 : Oracle metastore doesnt come up when 
hive.cluster.delegation.token.store.class is set to DBTokenStore (Ashutosh 
Chauhan via Thejas Nair)

git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.13@1575644 
13f79535-47bb-0310-9956-ffa450edef68

commit f15a74c5e1499967d98e211fa18d340937312f4a
Author: Jitendra Nath Pandey jiten...@apache.org
Date:   2014-03-09T17:02:53Z

HIVE-6511: Casting from decimal to tinyint,smallint, int and bigint 
generates different result when vectorization is 

[jira] [Updated] (HIVE-6994) parquet-hive createArray strips null elements

2014-05-06 Thread Justin Coffey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Coffey updated HIVE-6994:


Attachment: HIVE-6994-1.patch

updated patch after rebasing against the trunk.  it applies for me :)

 parquet-hive createArray strips null elements
 -

 Key: HIVE-6994
 URL: https://issues.apache.org/jira/browse/HIVE-6994
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0, 0.14.0
Reporter: Justin Coffey
Assignee: Justin Coffey
 Fix For: 0.14.0

 Attachments: HIVE-6994-1.patch, HIVE-6994.patch


 The createArray method in ParquetHiveSerDe strips null values from resultant 
 ArrayWritables.
 tracked here as well: https://github.com/Parquet/parquet-mr/issues/377
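The bug pattern can be sketched without any Parquet or Hive dependencies (illustrative code, not the actual createArray implementation): dropping nulls while copying changes both the contents and the length of the resulting array.

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayNulls {
    // Buggy variant: silently drops null elements, as reported.
    static <T> List<T> stripNulls(List<T> src) {
        List<T> out = new ArrayList<>();
        for (T e : src) {
            if (e != null) out.add(e);
        }
        return out;
    }

    // Fixed variant: copies the elements verbatim, nulls included.
    static <T> List<T> keepNulls(List<T> src) {
        return new ArrayList<>(src);
    }

    public static void main(String[] args) {
        List<String> src = java.util.Arrays.asList("a", null, "c");
        System.out.println(stripNulls(src)); // [a, c]
        System.out.println(keepNulls(src));  // [a, null, c]
    }
}
```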





[jira] [Commented] (HIVE-7019) Hive cannot build against Hadoop branch-2 after YARN-1553

2014-05-06 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990965#comment-13990965
 ] 

Hive QA commented on HIVE-7019:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12643528/HIVE-7019.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/132/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/132/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-132/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target packaging/target 
hbase-handler/target testutils/target jdbc/target metastore/target 
itests/target itests/hcatalog-unit/target itests/test-serde/target 
itests/qtest/target itests/hive-minikdc/target itests/hive-unit/target 
itests/custom-serde/target itests/util/target hcatalog/target 
hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target 
hwi/target common/target common/src/gen service/target contrib/target 
serde/target beeline/target odbc/target cli/target 
ql/dependency-reduced-pom.xml ql/target
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1592833.

At revision 1592833.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12643528

 Hive cannot build against Hadoop branch-2 after YARN-1553
 -

 Key: HIVE-7019
 URL: https://issues.apache.org/jira/browse/HIVE-7019
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.13.0
Reporter: Fengdong Yu
 Attachments: HIVE-7019.patch


 Hive cannot build against Hadoop branch-2 after YARN-1553; I'll upload a patch 
 later.





[jira] [Commented] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one

2014-05-06 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990963#comment-13990963
 ] 

Hive QA commented on HIVE-6990:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12643577/HIVE-6990.3.patch

{color:red}ERROR:{color} -1 due to 212 failed/errored test(s), 5428 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_add_part_exist
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_merge_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_clusterby_sortby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_format_loc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_protect_mode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_rename_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_rename_partition_authorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_serde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_analyze_table_null_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_archive_excludeHadoop20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_parts
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_groupby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnstats_partlvl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_combine2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_combine3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_concatenate_inherit_table_location
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_or_replace_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_compact2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_query3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_query4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_query5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_describe_formatted_view_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_describe_formatted_view_partitioned_json
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_partitions_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_partitions_filter2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_partitions_filter3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_table2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_table_removes_partition_dirs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_escape2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_02_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_04_all_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_04_evolved_parts
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_05_some_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_06_one_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_07_all_part_over_nonoverlap
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_08_nonpart_rename
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_09_part_spec_nonoverlap
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_15_external_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_16_part_external
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_17_part_managed
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_18_part_external
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_19_00_part_external_location
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_19_part_external_location

[jira] [Commented] (HIVE-7017) Insertion into Parquet tables fails under Tez

2014-05-06 Thread Craig Condit (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990997#comment-13990997
 ] 

Craig Condit commented on HIVE-7017:


It's not obvious what the proper thing to do in this case is. The existing ID 
could be parsed and reformatted, or Tez could be modified to generate 
TaskAttemptID-compatible identifiers. I have created 
https://issues.apache.org/jira/browse/TEZ-1104 to track the issue on that end.

 Insertion into Parquet tables fails under Tez
 -

 Key: HIVE-7017
 URL: https://issues.apache.org/jira/browse/HIVE-7017
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.13.0
 Environment: Hive 0.13.0, CentOS 6
Reporter: Craig Condit

 It seems Parquet tables cannot be written to in Tez mode. CREATE TABLE foo 
 STORED AS PARQUET SELECT ... queries fail with:
 {noformat}
   java.lang.IllegalArgumentException: TaskAttemptId string : 
 task1396892688715_80817_m_76_3 is not properly formed
   at 
 org.apache.hadoop.mapreduce.TaskAttemptID.forName(TaskAttemptID.java:201)
   at 
 org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.&lt;init&gt;(ParquetRecordWriterWrapper.java:49)
 {noformat}
 The same queries work fine after setting hive.execution.engine=mr.





[jira] [Commented] (HIVE-7017) Insertion into Parquet tables fails under Tez

2014-05-06 Thread Craig Condit (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991025#comment-13991025
 ] 

Craig Condit commented on HIVE-7017:


I mistakenly assumed that code came from TEZ, when it in fact exists in Hive...

https://github.com/apache/hive/blob/022ee59b8cb9161996310861d4fbf59801d4b9fe/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezProcessor.java#L103

Should probably be:

{noformat}
StringBuilder taskAttemptIdBuilder = new StringBuilder("attempt_");
{noformat}

instead of:

{noformat}
StringBuilder taskAttemptIdBuilder = new StringBuilder("task");
{noformat}
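The difference is easy to see against the shape TaskAttemptID.forName expects, roughly attempt_&lt;timestamp&gt;_&lt;job&gt;_&lt;m|r&gt;_&lt;task&gt;_&lt;attempt&gt;. The regex below is an illustrative approximation, not Hadoop's actual parser:

```java
import java.util.regex.Pattern;

public class AttemptIdShape {
    // Rough shape of a MapReduce task attempt id string (illustrative).
    static final Pattern ATTEMPT_ID =
        Pattern.compile("attempt_\\d+_\\d+_[mr]_\\d+_\\d+");

    static boolean looksValid(String id) {
        return ATTEMPT_ID.matcher(id).matches();
    }

    public static void main(String[] args) {
        // The id built with the "task" prefix fails to parse...
        System.out.println(looksValid("task1396892688715_80817_m_76_3"));
        // ...while the same digits with an "attempt_" prefix are well formed.
        System.out.println(looksValid("attempt_1396892688715_80817_m_76_3"));
    }
}
```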




 Insertion into Parquet tables fails under Tez
 -

 Key: HIVE-7017
 URL: https://issues.apache.org/jira/browse/HIVE-7017
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.13.0
 Environment: Hive 0.13.0, CentOS 6
Reporter: Craig Condit

 It seems Parquet tables cannot be written to in Tez mode. CREATE TABLE foo 
 STORED AS PARQUET SELECT ... queries fail with:
 {noformat}
   java.lang.IllegalArgumentException: TaskAttemptId string : 
 task1396892688715_80817_m_76_3 is not properly formed
   at 
 org.apache.hadoop.mapreduce.TaskAttemptID.forName(TaskAttemptID.java:201)
   at 
 org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.&lt;init&gt;(ParquetRecordWriterWrapper.java:49)
 {noformat}
 The same queries work fine after setting hive.execution.engine=mr.





[jira] [Created] (HIVE-7021) HiveServer2 memory leak on failed queries

2014-05-06 Thread Naveen Gangam (JIRA)
Naveen Gangam created HIVE-7021:
---

 Summary: HiveServer2 memory leak on failed queries
 Key: HIVE-7021
 URL: https://issues.apache.org/jira/browse/HIVE-7021
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
Reporter: Naveen Gangam


The number of the following objects keeps increasing if a query causes an 
exception:
org.apache.hive.service.cli.HandleIdentifier
org.apache.hive.service.cli.OperationHandle
org.apache.hive.service.cli.log.LinkedStringBuffer
org.apache.hive.service.cli.log.OperationLog





[jira] [Assigned] (HIVE-7021) HiveServer2 memory leak on failed queries

2014-05-06 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam reassigned HIVE-7021:
---

Assignee: Naveen Gangam

 HiveServer2 memory leak on failed queries
 -

 Key: HIVE-7021
 URL: https://issues.apache.org/jira/browse/HIVE-7021
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam

 The number of the following objects keeps increasing if a query causes an 
 exception:
 org.apache.hive.service.cli.HandleIdentifier
 org.apache.hive.service.cli.OperationHandle
 org.apache.hive.service.cli.log.LinkedStringBuffer
 org.apache.hive.service.cli.log.OperationLog
 The leak can be observed using a JDBCClient that runs something like this
   connection = DriverManager.getConnection("jdbc:hive2://" + hostname + 
 ":1/default", "", "");
   statement   = connection.createStatement();
   statement.execute("CREATE TEMPORARY FUNCTION 
 dummy_function AS 'dummy.class.name'");
 The above SQL will fail if HS2 cannot load dummy.class.name class. Each 
 iteration of such query will result in +1 increase in instance count for the 
 classes mentioned above.
 This will eventually cause OOM in the HS2 service.
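The pattern behind such a leak can be sketched with plain JDK collections (hypothetical names, not HS2's actual handle bookkeeping): a handle registered before execution is only deregistered on the success path, so every failed statement leaves one behind.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class HandleLeak {
    static final Map<UUID, String> openHandles = new HashMap<>();

    // cleanUpOnFailure=false mimics the leak: the handle registered
    // before execution survives the exception.
    static void execute(String sql, boolean cleanUpOnFailure) {
        UUID handle = UUID.randomUUID();
        openHandles.put(handle, sql);
        try {
            throw new RuntimeException("cannot load dummy.class.name");
        } catch (RuntimeException e) {
            if (cleanUpOnFailure) {
                openHandles.remove(handle); // always deregister to avoid OOM
            }
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            execute("CREATE TEMPORARY FUNCTION ...", false);
        }
        System.out.println(openHandles.size()); // grows by one per failure
    }
}
```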





[jira] [Updated] (HIVE-7021) HiveServer2 memory leak on failed queries

2014-05-06 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-7021:


Description: 
The number of the following objects keeps increasing if a query causes an 
exception:
org.apache.hive.service.cli.HandleIdentifier
org.apache.hive.service.cli.OperationHandle
org.apache.hive.service.cli.log.LinkedStringBuffer
org.apache.hive.service.cli.log.OperationLog

The leak can be observed using a JDBCClient that runs something like this
  connection = DriverManager.getConnection("jdbc:hive2://" + hostname + 
":1/default", "", "");
  statement   = connection.createStatement();
  statement.execute("CREATE TEMPORARY FUNCTION 
dummy_function AS 'dummy.class.name'");

The above SQL will fail if HS2 cannot load dummy.class.name class. Each 
iteration of such query will result in +1 increase in instance count for the 
classes mentioned above.

This will eventually cause OOM in the HS2 service.


  was:
The number of the following objects keeps increasing if a query causes an 
exception:
org.apache.hive.service.cli.HandleIdentifier
org.apache.hive.service.cli.OperationHandle
org.apache.hive.service.cli.log.LinkedStringBuffer
org.apache.hive.service.cli.log.OperationLog


 HiveServer2 memory leak on failed queries
 -

 Key: HIVE-7021
 URL: https://issues.apache.org/jira/browse/HIVE-7021
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
Reporter: Naveen Gangam

 The number of the following objects keeps increasing if a query causes an 
 exception:
 org.apache.hive.service.cli.HandleIdentifier
 org.apache.hive.service.cli.OperationHandle
 org.apache.hive.service.cli.log.LinkedStringBuffer
 org.apache.hive.service.cli.log.OperationLog
 The leak can be observed using a JDBCClient that runs something like this
   connection = DriverManager.getConnection("jdbc:hive2://" + hostname + 
 ":1/default", "", "");
   statement   = connection.createStatement();
   statement.execute("CREATE TEMPORARY FUNCTION 
 dummy_function AS 'dummy.class.name'");
 The above SQL will fail if HS2 cannot load dummy.class.name class. Each 
 iteration of such query will result in +1 increase in instance count for the 
 classes mentioned above.
 This will eventually cause OOM in the HS2 service.





[jira] [Commented] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one

2014-05-06 Thread Li Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991102#comment-13991102
 ] 

Li Zhang commented on HIVE-6990:


Would the patch cause a problem if the table name is in the format 
"schema1.tblname" within a query? I'd think it will incorrectly return "table 
not found" because "schema1" will be ignored?

 Direct SQL fails when the explicit schema setting is different from the 
 default one
 ---

 Key: HIVE-6990
 URL: https://issues.apache.org/jira/browse/HIVE-6990
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
 Environment: hive + derby
Reporter: Bing Li
Assignee: Bing Li
 Fix For: 0.14.0

 Attachments: HIVE-6990.1.patch, HIVE-6990.2.patch, HIVE-6990.3.patch


 I got the following ERROR in hive.log
 2014-04-23 17:30:23,331 ERROR metastore.ObjectStore 
 (ObjectStore.java:handleDirectSqlError(1756)) - Direct SQL failed, falling 
 back to ORM
 javax.jdo.JDODataStoreException: Error executing SQL query select 
 PARTITIONS.PART_ID from PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = 
 TBLS.TBL_ID   inner join DBS on TBLS.DB_ID = DBS.DB_ID inner join 
 PARTITION_KEY_VALS as FILTER0 on FILTER0.PART_ID = PARTITIONS.PART_ID and 
 FILTER0.INTEGER_IDX = 0 where TBLS.TBL_NAME = ? and DBS.NAME = ? and 
 ((FILTER0.PART_KEY_VAL = ?)).
 at 
 org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
 at 
 org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:321)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:181)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilter(MetaStoreDirectSql.java:98)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:1833)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:1806)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
 at com.sun.proxy.$Proxy11.getPartitionsByFilter(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:3310)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
 at com.sun.proxy.$Proxy12.get_partitions_by_filter(Unknown Source)
 Reproduce steps:
 1. set the following properties in hive-site.xml
  &lt;property&gt;
   &lt;name&gt;javax.jdo.mapping.Schema&lt;/name&gt;
   &lt;value&gt;HIVE&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
   &lt;name&gt;javax.jdo.option.ConnectionUserName&lt;/name&gt;
   &lt;value&gt;user1&lt;/value&gt;
  &lt;/property&gt;
 2. execute hive queries
 hive&gt; create table mytbl ( key int, value string);
 hive&gt; load data local inpath 'examples/files/kv1.txt' overwrite into table 
 mytbl;
 hive&gt; select * from mytbl;
 hive&gt; create view myview partitioned on (value) as select key, value from 
 mytbl where key=98;
 hive&gt; alter view myview add partition (value='val_98') partition 
 (value='val_xyz');
 hive&gt; alter view myview drop partition (value='val_xyz');
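Illustrative only (a hypothetical helper, not MetaStoreDirectSql's actual code): the hand-written direct SQL uses unqualified table names, which resolve against the connection's default schema rather than the one configured via javax.jdo.mapping.Schema, hence the fallback to ORM above.

```java
public class SchemaQualify {
    // Prefixes a table name with the configured schema, if any.
    static String qualify(String table, String schema) {
        return (schema == null || schema.isEmpty()) ? table : schema + "." + table;
    }

    public static void main(String[] args) {
        // Unqualified: resolves against the connection's default schema.
        System.out.println("select PART_ID from " + qualify("PARTITIONS", ""));
        // Qualified: targets the explicitly configured HIVE schema.
        System.out.println("select PART_ID from " + qualify("PARTITIONS", "HIVE"));
    }
}
```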





[jira] [Updated] (HIVE-6367) Implement Decimal in ParquetSerde

2014-05-06 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-6367:
--

Attachment: dec.parq

 Implement Decimal in ParquetSerde
 -

 Key: HIVE-6367
 URL: https://issues.apache.org/jira/browse/HIVE-6367
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Brock Noland
Assignee: Xuefu Zhang
  Labels: Parquet
 Attachments: dec.parq


 Some code in the Parquet Serde deals with decimal and other does not. For 
 example in ETypeConverter we convert Decimal to double (which is invalid) 
 whereas in DataWritableWriter and other locations we throw an exception if 
 decimal is used.
 This JIRA is to implement decimal support.





[jira] [Created] (HIVE-7022) Replace BinaryWritable with BytesWritable in Parquet serde

2014-05-06 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-7022:
-

 Summary: Replace BinaryWritable with BytesWritable in Parquet serde
 Key: HIVE-7022
 URL: https://issues.apache.org/jira/browse/HIVE-7022
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang


Currently ParquetHiveSerde uses BinaryWritable to enclose bytes read from 
Parquet data. However, the existing Hadoop class BytesWritable already does 
that, and BinaryWritable offers no advantage. On the other hand, BinaryWritable 
has a confusing getString() method which, if misused, can cause unexpected 
results. The proposal here is to replace it with Hadoop's BytesWritable.
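Why a getString() accessor on a binary wrapper is risky can be shown with plain JDK code (a sketch, unrelated to Hive's actual classes): round-tripping arbitrary bytes through a UTF-8 String silently replaces invalid sequences.

```java
import java.nio.charset.StandardCharsets;

public class BinaryVsString {
    // True iff decoding to a String and re-encoding keeps the byte length.
    static boolean roundTripPreservesLength(byte[] raw) {
        String asString = new String(raw, StandardCharsets.UTF_8);
        return asString.getBytes(StandardCharsets.UTF_8).length == raw.length;
    }

    public static void main(String[] args) {
        byte[] notUtf8 = {(byte) 0xFF, (byte) 0xFE, 0x41}; // invalid UTF-8
        // Invalid bytes become U+FFFD replacement chars, so the length changes.
        System.out.println(roundTripPreservesLength(notUtf8));
        byte[] ascii = {0x61, 0x62, 0x63}; // "abc" is valid UTF-8
        System.out.println(roundTripPreservesLength(ascii));
    }
}
```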

The issue was identified in HIVE-6367, serving as a follow-up JIRA. 





Re: Review Request 11925: Hive-3159 Update AvroSerde to determine schema of new tables

2014-05-06 Thread Mohammad Islam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11925/
---

(Updated May 6, 2014, 10:06 p.m.)


Review request for hive, Ashutosh Chauhan and Jakob Homan.


Changes
---

New patch that addressed Carl's review comments.

This patch addressed the following missing functions.
1. Create an AVRO table using the HIVE schema (w/o specifying an Avro schema).
2. Copy AVRO table structure and data from an existing non-AVRO table using CTAS.
3. Copy AVRO table structure and data from an existing AVRO table using CTAS.

Note: We can close the dependent JIRA HIVE-5803; it is no longer required, since 
another JIRA has already taken care of it.


Bugs: HIVE-3159
https://issues.apache.org/jira/browse/HIVE-3159


Repository: hive-git


Description
---

Problem:
Hive doesn't support creating an Avro-based table using the HQL create table 
command. It currently requires specifying an Avro schema literal or schema file 
name.
In many cases this is very inconvenient for the user.
Some of the unsupported use cases:
1. Create table ... Avro-SERDE etc. as SELECT ... from NON-AVRO FILE
2. Create table ... Avro-SERDE etc. as SELECT from AVRO TABLE
3. Create table without specifying an Avro schema.


Diffs (updated)
-

  ql/src/test/queries/clientpositive/avro_create_as_select.q PRE-CREATION 
  ql/src/test/queries/clientpositive/avro_nested_complex.q PRE-CREATION 
  ql/src/test/queries/clientpositive/avro_nullable_fields.q f90ceb9 
  ql/src/test/queries/clientpositive/avro_without_schema.q PRE-CREATION 
  ql/src/test/results/clientpositive/avro_create_as_select.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/avro_nested_complex.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/avro_nullable_fields.q.out 77a6a2e 
  ql/src/test/results/clientpositive/avro_without_schema.q.out PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java 9d58d13 
  serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
PRE-CREATION 
  serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroSerdeUtils.java 
67d5570 
  serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/11925/diff/


Testing
---

Wrote a new java Test class for a new Java class. Added a new test case into 
existing java test class. In addition, there are 4 .q file for testing multiple 
use-cases.


Thanks,

Mohammad Islam



[jira] [Updated] (HIVE-3159) Update AvroSerde to determine schema of new tables

2014-05-06 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HIVE-3159:


Affects Version/s: 0.12.0
   Status: Patch Available  (was: Open)

 Update AvroSerde to determine schema of new tables
 --

 Key: HIVE-3159
 URL: https://issues.apache.org/jira/browse/HIVE-3159
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.12.0
Reporter: Jakob Homan
Assignee: Mohammad Kamrul Islam
 Attachments: HIVE-3159.4.patch, HIVE-3159.5.patch, HIVE-3159.6.patch, 
 HIVE-3159.7.patch, HIVE-3159.9.patch, HIVE-3159v1.patch


 Currently when writing tables to Avro one must manually provide an Avro 
 schema that matches what is being delivered by Hive. It'd be better to have 
 the serde infer this schema by converting the table's TypeInfo into an 
 appropriate AvroSchema.
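The TypeInfo-to-schema conversion can be sketched as follows. This is illustrative 
Python only, not Hive's actual Java TypeInfoToSchema: it covers a handful of 
primitive types and wraps each field in a union with "null", since Hive columns 
are nullable.

```python
import json

# Illustrative mapping from Hive primitive type names to Avro types.
# NOT the actual Hive implementation; complex types (struct, map, array,
# union) are omitted for brevity.
HIVE_TO_AVRO = {
    "string": "string",
    "int": "int",
    "bigint": "long",
    "float": "float",
    "double": "double",
    "boolean": "boolean",
    "binary": "bytes",
}

def column_to_avro_field(name, hive_type):
    """Build an Avro record field; Hive columns are nullable, so wrap
    the inferred type in a union with "null"."""
    avro_type = HIVE_TO_AVRO[hive_type]
    return {"name": name, "type": ["null", avro_type], "default": None}

def table_to_avro_schema(table_name, columns):
    """columns: list of (name, hive_type) pairs."""
    return {
        "type": "record",
        "name": table_name,
        "fields": [column_to_avro_field(n, t) for n, t in columns],
    }

schema = table_to_avro_schema("events", [("id", "bigint"), ("msg", "string")])
print(json.dumps(schema))
```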



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-3159) Update AvroSerde to determine schema of new tables

2014-05-06 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HIVE-3159:


Attachment: HIVE-3159.9.patch

New patch that addresses [~cwsteinbach]'s review comments.

This patch addresses the following missing functionality:
1. Create an Avro table from a Hive schema (i.e., without specifying an Avro schema).
2. Copy an Avro table's structure and data from an existing non-Avro table using CTAS.
3. Copy an Avro table's structure and data from an existing Avro table using CTAS.

Note: the dependent JIRA HIVE-5803 is no longer required and can be closed; another 
JIRA has already taken care of it.



 Update AvroSerde to determine schema of new tables
 --

 Key: HIVE-3159
 URL: https://issues.apache.org/jira/browse/HIVE-3159
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.12.0
Reporter: Jakob Homan
Assignee: Mohammad Kamrul Islam
 Attachments: HIVE-3159.4.patch, HIVE-3159.5.patch, HIVE-3159.6.patch, 
 HIVE-3159.7.patch, HIVE-3159.9.patch, HIVE-3159v1.patch


 Currently when writing tables to Avro one must manually provide an Avro 
 schema that matches what is being delivered by Hive. It'd be better to have 
 the serde infer this schema by converting the table's TypeInfo into an 
 appropriate AvroSchema.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2014-05-06 Thread Swarnim Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swarnim Kulkarni updated HIVE-6411:
---

Attachment: HIVE-6411.10.patch.txt

Updated patch that addresses [~xuefuz]'s review comments, rebased onto the current 
master.

 Support more generic way of using composite key for HBaseHandler
 

 Key: HIVE-6411
 URL: https://issues.apache.org/jira/browse/HIVE-6411
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6411.1.patch.txt, HIVE-6411.10.patch.txt, 
 HIVE-6411.2.patch.txt, HIVE-6411.3.patch.txt, HIVE-6411.4.patch.txt, 
 HIVE-6411.5.patch.txt, HIVE-6411.6.patch.txt, HIVE-6411.7.patch.txt, 
 HIVE-6411.8.patch.txt, HIVE-6411.9.patch.txt


 HIVE-2599 introduced using a custom object for the row key, but it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper object and ObjectInspector (OI), we can replace the 
 internal key and keyOI with those.
 The initial implementation is based on a factory interface.
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws SerDeException;
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 21138: Support more generic way of using composite key for HBaseHandler

2014-05-06 Thread Swarnim Kulkarni

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21138/
---

Review request for hive.


Repository: hive-git


Description
---

HIVE-2599 introduced using a custom object for the row key, but it forces key 
objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
If the user provides a proper object and ObjectInspector (OI), we can replace the 
internal key and keyOI with those.

The initial implementation is based on a factory interface.
{code}
public interface HBaseKeyFactory {
  void init(SerDeParameters parameters, Properties properties) throws SerDeException;
  ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
  LazyObjectBase createObject(ObjectInspector inspector) throws SerDeException;
}
{code}
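To make the delegation concrete, here is a language-agnostic sketch of the factory 
idea (Python for brevity). The class and method names below are invented for 
illustration; the real API is the Java interface above, and the real factories work 
on Hive's ObjectInspector machinery rather than plain dicts.

```python
class DefaultKeyFactory:
    """Treats the row key as a single opaque string."""
    def create_object_inspector(self, type_info):
        return {"kind": "primitive", "type": type_info}

    def create_object(self, inspector):
        return {"inspector": inspector, "value": None}

class CompositeKeyFactory:
    """Splits the row key into struct fields on a separator."""
    def __init__(self, separator="$"):
        self.separator = separator

    def create_object_inspector(self, type_info):
        return {"kind": "struct", "fields": type_info.split(",")}

    def create_object(self, inspector):
        return {"inspector": inspector, "value": None}

def deserialize_key(factory, type_info, raw_key):
    """The serde delegates key handling to whichever factory is configured,
    instead of hard-coding an HBaseCompositeKey subclass."""
    inspector = factory.create_object_inspector(type_info)
    obj = factory.create_object(inspector)
    if inspector["kind"] == "struct":
        obj["value"] = raw_key.split(factory.separator)
    else:
        obj["value"] = raw_key
    return obj

simple = deserialize_key(DefaultKeyFactory(), "string", "row1")
composite = deserialize_key(CompositeKeyFactory(), "country,city", "US$NYC")
```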


Diffs
-

  hbase-handler/pom.xml 132af43 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/AbstractHBaseKeyFactory.java
 PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/ColumnMappings.java 
PRE-CREATION 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/CompositeHBaseKeyFactory.java
 PRE-CREATION 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/DefaultHBaseKeyFactory.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
5008f15 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
PRE-CREATION 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseRowSerializer.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java 
b64590d 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
4fe1b1b 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
 142bfd8 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java fc40195 
  
hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java 
13c344b 
  hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
PRE-CREATION 
  hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
PRE-CREATION 
  hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestLazyHBaseObject.java 
7c4fc9f 
  hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
  hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION 
  hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
  hbase-handler/src/test/results/positive/hbase_custom_key2.q.out PRE-CREATION 
  itests/util/pom.xml e9720df 
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 113227d 
  ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
d39ee2e 
  ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 5f1329c 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 4921966 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java 293b74e 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
 2a7fdf9 
  
ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java 
9f35575 
  ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java e50026b 
  ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java ecb82d7 
  ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java c0a8269 
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java 
5f32f2d 
  serde/src/java/org/apache/hadoop/hive/serde2/BaseStructObjectInspector.java 
PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/NullStructSerDe.java dba5e33 
  serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java 
1fd6853 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java 10f4c05 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java 3334dff 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java 
82c1263 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java 8a1ea46 
  
serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazySimpleStructObjectInspector.java
 8a5386a 
  serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryObject.java 
598683f 
  serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryStruct.java 
caf3517 
  
serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ColumnarStructObjectInspector.java
 7d0d91c 
  
serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/DelegatedStructObjectInspector.java
 5e1a369 
  

[jira] [Commented] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2014-05-06 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991332#comment-13991332
 ] 

Swarnim Kulkarni commented on HIVE-6411:


Unfortunately, since I couldn't update the old RB entry, I created a new one [1]. 
[~xuefuz], I believe the new patch addresses all of the concerns you had; please 
let me know if I missed something.

[1] https://reviews.apache.org/r/21138/

 Support more generic way of using composite key for HBaseHandler
 

 Key: HIVE-6411
 URL: https://issues.apache.org/jira/browse/HIVE-6411
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6411.1.patch.txt, HIVE-6411.10.patch.txt, 
 HIVE-6411.2.patch.txt, HIVE-6411.3.patch.txt, HIVE-6411.4.patch.txt, 
 HIVE-6411.5.patch.txt, HIVE-6411.6.patch.txt, HIVE-6411.7.patch.txt, 
 HIVE-6411.8.patch.txt, HIVE-6411.9.patch.txt


 HIVE-2599 introduced using a custom object for the row key, but it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper object and ObjectInspector (OI), we can replace the 
 internal key and keyOI with those.
 The initial implementation is based on a factory interface.
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws SerDeException;
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6204) The result of show grant / show role should be tabular format

2014-05-06 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991353#comment-13991353
 ] 

Lefty Leverenz commented on HIVE-6204:
--

Agreed, each section needs a description of the output columns and an example.

Why doesn't the table have column headings?

 The result of show grant / show role should be tabular format
 -

 Key: HIVE-6204
 URL: https://issues.apache.org/jira/browse/HIVE-6204
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6204.1.patch.txt, HIVE-6204.2.patch.txt, 
 HIVE-6204.3.patch.txt


 {noformat}
 hive> show grant role role1 on all;
 OK
 database  default
 table src
 principalName role1
 principalType ROLE
 privilege Create
 grantTime Wed Dec 18 14:17:56 KST 2013
 grantor   navis
 database  default
 table srcpart
 principalName role1
 principalType ROLE
 privilege Update
 grantTime Wed Dec 18 14:18:28 KST 2013
 grantor   navis
 {noformat}
 This should be something like below, especially for JDBC clients.
 {noformat}
 hive> show grant role role1 on all;
 OK
 default  src      role1  ROLE  Create  false  1387343876000  navis
 default  srcpart  role1  ROLE  Update  false  1387343908000  navis
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6204) The result of show grant / show role should be tabular format

2014-05-06 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991370#comment-13991370
 ] 

Thejas M Nair commented on HIVE-6204:
-

bq. Why doesn't the table have column headings?
Table headings show up when you use beeline. In the case of the Hive CLI, you can 
run set hive.cli.print.header=true; to get headers.


 The result of show grant / show role should be tabular format
 -

 Key: HIVE-6204
 URL: https://issues.apache.org/jira/browse/HIVE-6204
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6204.1.patch.txt, HIVE-6204.2.patch.txt, 
 HIVE-6204.3.patch.txt


 {noformat}
 hive> show grant role role1 on all;
 OK
 database  default
 table src
 principalName role1
 principalType ROLE
 privilege Create
 grantTime Wed Dec 18 14:17:56 KST 2013
 grantor   navis
 database  default
 table srcpart
 principalName role1
 principalType ROLE
 privilege Update
 grantTime Wed Dec 18 14:18:28 KST 2013
 grantor   navis
 {noformat}
 This should be something like below, especially for JDBC clients.
 {noformat}
 hive> show grant role role1 on all;
 OK
 default  src      role1  ROLE  Create  false  1387343876000  navis
 default  srcpart  role1  ROLE  Update  false  1387343908000  navis
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HIVE-6204) The result of show grant / show role should be tabular format

2014-05-06 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991370#comment-13991370
 ] 

Thejas M Nair edited comment on HIVE-6204 at 5/6/14 11:58 PM:
--

bq. Why doesn't the table have column headings?
Table headings show up when you use beeline. In the case of the Hive CLI, you can 
run set hive.cli.print.header=true; to get headers.



was (Author: thejas):
bq. Why doesn't the table have column headings?
Table headings show up when you use beeling. In case of hive cli, you can do 
set hive.cli.print.header=true; to get headers.


 The result of show grant / show role should be tabular format
 -

 Key: HIVE-6204
 URL: https://issues.apache.org/jira/browse/HIVE-6204
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6204.1.patch.txt, HIVE-6204.2.patch.txt, 
 HIVE-6204.3.patch.txt


 {noformat}
 hive> show grant role role1 on all;
 OK
 database  default
 table src
 principalName role1
 principalType ROLE
 privilege Create
 grantTime Wed Dec 18 14:17:56 KST 2013
 grantor   navis
 database  default
 table srcpart
 principalName role1
 principalType ROLE
 privilege Update
 grantTime Wed Dec 18 14:18:28 KST 2013
 grantor   navis
 {noformat}
 This should be something like below, especially for JDBC clients.
 {noformat}
 hive> show grant role role1 on all;
 OK
 default  src      role1  ROLE  Create  false  1387343876000  navis
 default  srcpart  role1  ROLE  Update  false  1387343908000  navis
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-2777) ability to add and drop partitions atomically

2014-05-06 Thread Xinyu Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyu Wang updated HIVE-2777:
-

Attachment: (was: hive-2777.patch)

 ability to add and drop partitions atomically
 -

 Key: HIVE-2777
 URL: https://issues.apache.org/jira/browse/HIVE-2777
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Aniket Mokashi
Assignee: Aniket Mokashi
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2777.D2271.1.patch


 Hive should have the ability to atomically add and drop partitions. This way 
 admins can change partitions atomically without breaking running jobs, and it 
 allows an admin to merge several partitions into one.
 Essentially, we would like to have an API: add_drop_partitions(String db, 
 String tbl_name, List<Partition> addParts, List<List<String>> dropParts, 
 boolean deleteData);
 This JIRA covers the changes required for the metastore and Thrift.
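The all-or-nothing behavior being requested can be modeled as: validate every add 
and every drop first, then mutate, so that a failure leaves the metadata untouched. 
A minimal sketch follows (illustrative Python, not the metastore implementation; 
the class and partition names are invented):

```python
class PartitionStore:
    """Toy stand-in for a table's partition metadata."""
    def __init__(self, partitions=None):
        self.partitions = set(partitions or [])

    def add_drop_partitions(self, add_parts, drop_parts):
        """Validate everything before applying anything, so a failure
        leaves the store unchanged (the 'atomic' property requested)."""
        for p in drop_parts:
            if p not in self.partitions:
                raise ValueError("cannot drop missing partition: " + p)
        for p in add_parts:
            if p in self.partitions:
                raise ValueError("partition already exists: " + p)
        # All checks passed: apply both changes together.
        self.partitions -= set(drop_parts)
        self.partitions |= set(add_parts)

# Merge two daily partitions into one monthly partition in a single call.
store = PartitionStore(["dt=2014-05-01", "dt=2014-05-02"])
store.add_drop_partitions(["dt=2014-05"], ["dt=2014-05-01", "dt=2014-05-02"])
```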



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-2777) ability to add and drop partitions atomically

2014-05-06 Thread Xinyu Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyu Wang updated HIVE-2777:
-

Attachment: hive-2777.patch

Sorry about the previous patch; I have rebased it and it seems fine now. Could 
someone please review?

 ability to add and drop partitions atomically
 -

 Key: HIVE-2777
 URL: https://issues.apache.org/jira/browse/HIVE-2777
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Aniket Mokashi
Assignee: Aniket Mokashi
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2777.D2271.1.patch, 
 hive-2777.patch


 Hive should have the ability to atomically add and drop partitions. This way 
 admins can change partitions atomically without breaking running jobs, and it 
 allows an admin to merge several partitions into one.
 Essentially, we would like to have an API: add_drop_partitions(String db, 
 String tbl_name, List<Partition> addParts, List<List<String>> dropParts, 
 boolean deleteData);
 This JIRA covers the changes required for the metastore and Thrift.



--
This message was sent by Atlassian JIRA
(v6.2#6252)