[jira] [Commented] (HIVE-6252) sql std auth - support 'with admin option' in revoke role metastore api

2014-07-22 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069893#comment-14069893
 ] 

Lefty Leverenz commented on HIVE-6252:
--

This is documented in the SQL Standard Auth doc and in the default 
Authorization doc (thank you, [~jdere]).  But I'm leaving the TODOC14 label on 
this jira because in the default Auth doc the wiki version note uses future 
tense, which should be updated when 0.14.0 is released.  (And the same goes for 
HIVE-7404.)

* [SQL Standard Based Hive Authorization -- Revoke Role | 
https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization#SQLStandardBasedHiveAuthorization-RevokeRole]
* [Authorization -- Grant/Revoke Roles | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Authorization#LanguageManualAuthorization-Grant/RevokeRoles]

 sql std auth - support 'with admin option' in revoke role metastore api
 ---

 Key: HIVE-6252
 URL: https://issues.apache.org/jira/browse/HIVE-6252
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Jason Dere
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-6252.1.patch, HIVE-6252.2.patch, HIVE-6252.3.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The metastore API for revoking role privileges does not accept 'with admin 
 option', though the syntax supports it. The SQL syntax also supports grantor 
 specification in the REVOKE ROLE statement.
 It should be similar to the grant_role API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Structure of Pre-commit builds changed

2014-07-22 Thread Brock Noland
Hi,

I changed the configuration of the Jenkins builds a bit.

http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/

PreCommit-HIVE-Build - simply forwards requests to either the TRUNK or
SPARK builds below.

PreCommit-HIVE-TRUNK-Build - executes a precommit build on the
hardware for TRUNK
PreCommit-HIVE-SPARK-Build - executes a precommit build on the
hardware for the SPARK branch

This is to allow parallel testing of both the TRUNK and SPARK branches.

Brock


[jira] [Commented] (HIVE-5923) SQL std auth - parser changes

2014-07-22 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069903#comment-14069903
 ] 

Lefty Leverenz commented on HIVE-5923:
--

[~thejas], do the changes in the release note also apply to default Hive 
authorization?  (The wiki still shows ROLE and TABLE keywords.) 

Also, while checking the Authorization doc for these changes I noticed a flaw 
in the syntax for GRANT priv and REVOKE priv:  they both say ON object_type 
where object_type can be TABLE or DATABASE, but the name of the table or 
database isn't in the syntax.  Have I misunderstood, or does this need to be 
fixed?

{code}
GRANT
priv_type [(column_list)]
  [, priv_type [(column_list)]] ...
[ON object_type]
TO principal_specification [, principal_specification] ...
[WITH GRANT OPTION]

REVOKE [GRANT OPTION FOR]
priv_type [(column_list)]
  [, priv_type [(column_list)]] ...
[ON object_type priv_level]
FROM principal_specification [, principal_specification] ...

object_type:
TABLE
  | DATABASE
{code}

Of course, if this jira's changes don't apply to the default authorization then 
this comment belongs somewhere else.

* [Authorization -- Grant/Revoke Privileges | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Authorization#LanguageManualAuthorization-Grant/RevokePrivileges]

 SQL std auth - parser changes
 -

 Key: HIVE-5923
 URL: https://issues.apache.org/jira/browse/HIVE-5923
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-5923.1.patch, HIVE-5923.2.patch, HIVE-5923.3.patch, 
 HIVE-5923.4.patch

   Original Estimate: 96h
  Time Spent: 168h
  Remaining Estimate: 0h

 There are new access control statements proposed in the functional spec in 
 HIVE-5837. It also proposes some small changes to the existing query syntax 
 (mostly extensions and some optional keywords).
 The syntax supported should depend on the current authorization mode.





[jira] [Updated] (HIVE-7436) Load Spark configuration into Hive driver

2014-07-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-7436:


Description: 
Load Spark configuration into the Hive driver. There are three ways to set up 
Spark configuration:
#  Java property.
#  Configure properties in the Spark configuration file (spark-defaults.conf).
#  Hive configuration file (hive-site.xml).

Configuration later in this list has higher priority, and overwrites earlier 
configuration with the same property name.

Please refer to [http://spark.apache.org/docs/latest/configuration.html] for 
all configurable properties of Spark. You can configure Spark in Hive in the 
following ways:
# Configure through the Spark configuration file.
#* Create spark-defaults.conf, and place it in the /etc/spark/conf 
configuration directory. Configure properties in spark-defaults.conf in Java 
properties format.
#* Create the $SPARK_CONF_DIR environment variable and set it to the location 
of spark-defaults.conf.
export SPARK_CONF_DIR=/etc/spark/conf
#* Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable.
export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
# Configure through the Hive configuration file.
#* Edit hive-site.xml in the Hive conf directory, configuring the properties 
from spark-defaults.conf in XML format.

Hive driver default spark properties:
||name||default value||description||
|spark.master|local|Spark master url.|
|spark.app.name|Hive on Spark|Default Spark application name.|

NO PRECOMMIT TESTS. This is for spark-branch only.
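As a concrete illustration of the hive-site.xml route, the two default properties from the table above could be set as follows (values shown are the table defaults; the snippet is illustrative, not required):

```xml
<!-- Illustrative hive-site.xml fragment: Spark properties expressed in XML format. -->
<property>
  <name>spark.master</name>
  <value>local</value>
  <description>Spark master URL.</description>
</property>
<property>
  <name>spark.app.name</name>
  <value>Hive on Spark</value>
  <description>Default Spark application name.</description>
</property>
```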

  was:
Load Spark configuration into the Hive driver. There are three ways to set up 
Spark configuration:
#  Configure properties in the Spark configuration file (spark-defaults.conf).
#  Java property.
#  System environment.
Spark supports configuration through the system environment only for 
compatibility with previous scripts; we won't support it in Hive on Spark. 
Hive on Spark loads defaults from Java properties, then loads properties from 
the configuration file, overriding existing properties.

configuration steps:
# Create spark-defaults.conf, and place it in the /etc/spark/conf configuration 
directory.
Please refer to [http://spark.apache.org/docs/latest/configuration.html] 
for configuration of spark-defaults.conf.
# Create the $SPARK_CONF_DIR environment variable and set it to the location of 
spark-defaults.conf.
export SPARK_CONF_DIR=/etc/spark/conf
# Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable.
export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH

NO PRECOMMIT TESTS. This is for spark-branch only.


 Load Spark configuration into Hive driver
 -

 Key: HIVE-7436
 URL: https://issues.apache.org/jira/browse/HIVE-7436
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Attachments: HIVE-7436-Spark.1.patch, HIVE-7436-Spark.2.patch


 load Spark configuration into Hive driver, there are 3 ways to setup spark 
 configurations:
 #  Java property.
 #  Configure properties in spark configuration file(spark-defaults.conf).
 #  Hive configuration file(hive-site.xml).
 The below configuration has more priority, and would overwrite previous 
 configuration with the same property name.
 Please refer to [http://spark.apache.org/docs/latest/configuration.html] for 
 all configurable properties of spark, and you can configure spark 
 configuration in Hive through following ways:
 # Configure through spark configuration file.
 #* Create spark-defaults.conf, and place it in the /etc/spark/conf 
 configuration directory. configure properties in spark-defaults.conf in java 
 properties format.
 #* Create the $SPARK_CONF_DIR environment variable and set it to the location 
 of spark-defaults.conf.
 export SPARK_CONF_DIR=/etc/spark/conf
 #* Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable.
 export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
 # Configure through hive configuration file.
 #* edit hive-site.xml in hive conf directory, configure properties in 
 spark-defaults.conf in xml format.
 Hive driver default spark properties:
 ||name||default value||description||
 |spark.master|local|Spark master url.|
 |spark.app.name|Hive on Spark|Default Spark application name.|
 NO PRECOMMIT TESTS. This is for spark-branch only.





[jira] [Updated] (HIVE-7172) Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()

2014-07-22 Thread DJ Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DJ Choi updated HIVE-7172:
--

Attachment: HIVE-7172.patch

 Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()
 -

 Key: HIVE-7172
 URL: https://issues.apache.org/jira/browse/HIVE-7172
 Project: Hive
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: HIVE-7172.patch


 {code}
   ResultSet res = stmt.executeQuery(versionQuery);
   if (!res.next()) {
  throw new HiveMetaException("Didn't find version data in metastore");
   }
   String currentSchemaVersion = res.getString(1);
   metastoreConn.close();
 {code}
 When HiveMetaException is thrown, metastoreConn.close() would be skipped.
 stmt is not closed upon return from the method.
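As a hedged sketch (not the actual HiveSchemaTool code), the standard fix is try-with-resources, which closes both handles even on the exception path. `TrackingResource` below is a hypothetical stand-in for the JDBC Connection/Statement (`metastoreConn`, `stmt`) so the behavior can be demonstrated without a database:

```java
// Sketch only: demonstrates why try-with-resources fixes the leak described
// above. TrackingResource stands in for the JDBC resources used by
// HiveSchemaTool; the real fix would wrap metastoreConn and stmt instead.
public class LeakFixSketch {
    static class TrackingResource implements AutoCloseable {
        boolean closed = false;

        @Override
        public void close() {
            closed = true;
        }
    }

    /**
     * Throws mid-block, as when version data is missing; returns whether
     * both resources were closed anyway.
     */
    static boolean resourcesClosedDespiteException() {
        TrackingResource conn = new TrackingResource();
        TrackingResource stmt = new TrackingResource();
        try (TrackingResource c = conn; TrackingResource s = stmt) {
            // Simulates the HiveMetaException path: close() still runs on
            // both resources before the exception propagates.
            throw new IllegalStateException("Didn't find version data in metastore");
        } catch (IllegalStateException expected) {
            // The exception still reaches the caller.
        }
        return conn.closed && stmt.closed;
    }

    public static void main(String[] args) {
        System.out.println(resourcesClosedDespiteException()); // prints "true"
    }
}
```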





[jira] [Updated] (HIVE-7172) Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()

2014-07-22 Thread DJ Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DJ Choi updated HIVE-7172:
--

Attachment: (was: HIVE-7172.patch)

 Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()
 -

 Key: HIVE-7172
 URL: https://issues.apache.org/jira/browse/HIVE-7172
 Project: Hive
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: HIVE-7172.patch


 {code}
   ResultSet res = stmt.executeQuery(versionQuery);
   if (!res.next()) {
  throw new HiveMetaException("Didn't find version data in metastore");
   }
   String currentSchemaVersion = res.getString(1);
   metastoreConn.close();
 {code}
 When HiveMetaException is thrown, metastoreConn.close() would be skipped.
 stmt is not closed upon return from the method.





[jira] [Updated] (HIVE-7172) Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()

2014-07-22 Thread DJ Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DJ Choi updated HIVE-7172:
--

Attachment: (was: HIVE-7172.patch)

 Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()
 -

 Key: HIVE-7172
 URL: https://issues.apache.org/jira/browse/HIVE-7172
 Project: Hive
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor

 {code}
   ResultSet res = stmt.executeQuery(versionQuery);
   if (!res.next()) {
  throw new HiveMetaException("Didn't find version data in metastore");
   }
   String currentSchemaVersion = res.getString(1);
   metastoreConn.close();
 {code}
 When HiveMetaException is thrown, metastoreConn.close() would be skipped.
 stmt is not closed upon return from the method.





[jira] [Commented] (HIVE-7457) Minor HCatalog Pig Adapter test clean up

2014-07-22 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069932#comment-14069932
 ] 

David Chen commented on HIVE-7457:
--

The two test failures that may be relevant are TestHCatLoader and 
TestOrcHCatLoader. Both failures appear to be caused by the following:

{code}
java.lang.AssertionError: rowNum=0 colNum=0 Reference data=true actual=1; 
types=(class java.lang.Boolean,class java.lang.Integer)
{code}

It looks like the cause is that after the test loads the data via Pig, the 
boolean field is being returned as an Integer rather than a Boolean. This does 
look like a bug but it does not appear to be caused by this patch.

 Minor HCatalog Pig Adapter test clean up
 

 Key: HIVE-7457
 URL: https://issues.apache.org/jira/browse/HIVE-7457
 Project: Hive
  Issue Type: Sub-task
Reporter: David Chen
Assignee: David Chen
Priority: Minor
 Attachments: HIVE-7457.1.patch, HIVE-7457.2.patch


 Minor cleanup to the HCatalog Pig Adapter tests in preparation for HIVE-7420:
  * Run through Hive Eclipse formatter.
  * Convert JUnit 3-style tests to follow JUnit 4 conventions.





[jira] [Updated] (HIVE-7420) Parameterize tests for HCatalog Pig interfaces for testing against all storage formats

2014-07-22 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-7420:
-

Attachment: HIVE-7420.1.patch

 Parameterize tests for HCatalog Pig interfaces for testing against all 
 storage formats
 --

 Key: HIVE-7420
 URL: https://issues.apache.org/jira/browse/HIVE-7420
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: David Chen
Assignee: David Chen
 Attachments: HIVE-7420.1.patch


 Currently, HCatalog tests only test against RCFile with a few testing against 
 ORC. The tests should be covering other Hive storage formats as well.
 HIVE-7286 turns HCatMapReduceTest into a test fixture that can be run with 
 all Hive storage formats and with that patch, all test suites built on 
 HCatMapReduceTest are running and passing against Sequence File, Text, and 
 ORC in addition to RCFile.
 Similar changes should be made to make the tests for HCatLoader and 
 HCatStorer generic so that they can be run against all Hive storage formats.





[jira] [Commented] (HIVE-7436) Load Spark configuration into Hive driver

2014-07-22 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069931#comment-14069931
 ] 

Chengxiang Li commented on HIVE-7436:
-

[~xuefuz] About the Hive configurations (Spark-related) which can only be set 
in hive-site.xml, I think those are configurations that can be described as:
# not Spark configuration.
# introduced by the Hive on Spark feature.

Currently I think we have not introduced any extra configuration yet, but 
maybe we will in the future, like Hive on Tez introduced several 
configurations for Tez session pool management:
{noformat}
HIVE_SERVER2_TEZ_DEFAULT_QUEUES("hive.server2.tez.default.queues", ""),

HIVE_SERVER2_TEZ_SESSIONS_PER_DEFAULT_QUEUE("hive.server2.tez.sessions.per.default.queue", 1),

HIVE_SERVER2_TEZ_INITIALIZE_DEFAULT_SESSIONS("hive.server2.tez.initialize.default.sessions")
{noformat}
Hive on Spark may need configurations like the following as well.
{noformat}
HIVE_SERVER2_SPARK_DEFAULT_QUEUES("hive.server2.spark.default.queues", ""),

HIVE_SERVER2_SPARK_SESSIONS_PER_DEFAULT_QUEUE("hive.server2.spark.sessions.per.default.queue", 1),

HIVE_SERVER2_SPARK_INITIALIZE_DEFAULT_SESSIONS("hive.server2.spark.initialize.default.sessions")
{noformat}

Besides, I updated the description for Spark configuration loading.

 Load Spark configuration into Hive driver
 -

 Key: HIVE-7436
 URL: https://issues.apache.org/jira/browse/HIVE-7436
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Attachments: HIVE-7436-Spark.1.patch, HIVE-7436-Spark.2.patch


 load Spark configuration into Hive driver, there are 3 ways to setup spark 
 configurations:
 #  Java property.
 #  Configure properties in spark configuration file(spark-defaults.conf).
 #  Hive configuration file(hive-site.xml).
 The below configuration has more priority, and would overwrite previous 
 configuration with the same property name.
 Please refer to [http://spark.apache.org/docs/latest/configuration.html] for 
 all configurable properties of spark, and you can configure spark 
 configuration in Hive through following ways:
 # Configure through spark configuration file.
 #* Create spark-defaults.conf, and place it in the /etc/spark/conf 
 configuration directory. configure properties in spark-defaults.conf in java 
 properties format.
 #* Create the $SPARK_CONF_DIR environment variable and set it to the location 
 of spark-defaults.conf.
 export SPARK_CONF_DIR=/etc/spark/conf
 #* Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable.
 export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
 # Configure through hive configuration file.
 #* edit hive-site.xml in hive conf directory, configure properties in 
 spark-defaults.conf in xml format.
 Hive driver default spark properties:
 ||name||default value||description||
 |spark.master|local|Spark master url.|
 |spark.app.name|Hive on Spark|Default Spark application name.|
 NO PRECOMMIT TESTS. This is for spark-branch only.





Review Request 23797: HIVE-7420: Parameterize tests for HCatalog Pig interfaces for testing against all storage formats

2014-07-22 Thread David Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23797/
---

Review request for hive.


Bugs: HIVE-7420
https://issues.apache.org/jira/browse/HIVE-7420


Repository: hive-git


Description
---

HIVE-7420: Parameterize tests for HCatalog Pig interfaces for testing against 
all storage formats


Depends on: HIVE-7457: Minor HCatalog Pig Adapter test cleanup.


Diffs
-

  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/HCatStorerWrapper.java
 b06e9b4c35b0c10ed64ea0f7766be2b77fa5bb71 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/MockLoader.java
 c87b95a00af03d2531eb8bbdda4e307c3aac1fe2 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/MyPigStorage.java
 d056910cf166d4e22200b8431e235b862e9b3e69 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/StorageFormats.java
 PRE-CREATION 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestE2EScenarios.java
 a4b55c8463b3563f1e602ae2d0809dd318bcfa7f 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoader.java
 82fc8a9391667138780be8796931793661f61ebb 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderComplexSchema.java
 eadbf20afc525dd9f33e9e7fb2a5d5cb89907d7e 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderStorer.java
 716258458fc27aadaa03918164dba0b49738ed40 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatStorer.java
 fcfc6428e7db80b8bfe0ce10e37d7b0ee6e58e20 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatStorerMulti.java
 76080f7635548ed9af114c823180d8da9ea8f6c2 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatStorerWrapper.java
 7f0bca763eb07db3822c6d6028357e81278803c9 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestOrcHCatLoader.java
 82eb0d72b4f885184c094113f775415c06bdce98 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestOrcHCatLoaderComplexSchema.java
 05387711289279cab743f51aee791069609b904a 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestOrcHCatPigStorer.java
 a9b452101c15fb7a3f0d8d0339f7d0ad97383441 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestOrcHCatStorer.java
 1084092828a9ac5e37f5b50b9c6bbd03f70b48fd 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestPigHCatUtil.java
 a8ce61aaad42b03e4de346530d0724f3d69776b9 

Diff: https://reviews.apache.org/r/23797/diff/


Testing
---


Thanks,

David Chen



[jira] [Updated] (HIVE-7172) Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()

2014-07-22 Thread DJ Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DJ Choi updated HIVE-7172:
--

Attachment: HIVE-7172.patch

 Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()
 -

 Key: HIVE-7172
 URL: https://issues.apache.org/jira/browse/HIVE-7172
 Project: Hive
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: HIVE-7172.patch


 {code}
   ResultSet res = stmt.executeQuery(versionQuery);
   if (!res.next()) {
  throw new HiveMetaException("Didn't find version data in metastore");
   }
   String currentSchemaVersion = res.getString(1);
   metastoreConn.close();
 {code}
 When HiveMetaException is thrown, metastoreConn.close() would be skipped.
 stmt is not closed upon return from the method.





[jira] [Updated] (HIVE-7397) Set the default threshold for fetch task conversion to 1Gb

2014-07-22 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-7397:
--

Attachment: HIVE-7397.5.patch

Cannot reproduce the tez test failures on my machine.

Re-uploading for a run before commit.

 Set the default threshold for fetch task conversion to 1Gb
 --

 Key: HIVE-7397
 URL: https://issues.apache.org/jira/browse/HIVE-7397
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 0.13.1
Reporter: Gopal V
Assignee: Gopal V
  Labels: Performance
 Fix For: 0.14.0

 Attachments: HIVE-7397.1.patch, HIVE-7397.2.patch, HIVE-7397.3.patch, 
 HIVE-7397.4.patch.txt, HIVE-7397.5.patch


 Currently, modifying the value of hive.fetch.task.conversion to "more" 
 results in a dangerous setting where small-scale queries function, but 
 large-scale queries crash.
 This occurs because the default threshold of -1 means "apply this 
 optimization even for a petabyte table".
 I am testing a variety of queries with the setting "more" (to make it the 
 default option as suggested by HIVE-887) and changing the default threshold 
 for this feature to a reasonable 1Gb.
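In user terms, the proposed defaults would amount to the following session settings. Note the threshold property name, hive.fetch.task.conversion.threshold, is an assumption based on this description, and 1073741824 bytes = 1Gb:

```
-- Hypothetical Hive session equivalent of the proposed defaults.
set hive.fetch.task.conversion=more;
set hive.fetch.task.conversion.threshold=1073741824;
```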





[jira] [Commented] (HIVE-7420) Parameterize tests for HCatalog Pig interfaces for testing against all storage formats

2014-07-22 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069937#comment-14069937
 ] 

David Chen commented on HIVE-7420:
--

RB: https://reviews.apache.org/r/23797/

Note: the patch contains the patch for HIVE-7457 due to the dependency on that 
ticket.

 Parameterize tests for HCatalog Pig interfaces for testing against all 
 storage formats
 --

 Key: HIVE-7420
 URL: https://issues.apache.org/jira/browse/HIVE-7420
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: David Chen
Assignee: David Chen
 Attachments: HIVE-7420.1.patch


 Currently, HCatalog tests only test against RCFile with a few testing against 
 ORC. The tests should be covering other Hive storage formats as well.
 HIVE-7286 turns HCatMapReduceTest into a test fixture that can be run with 
 all Hive storage formats and with that patch, all test suites built on 
 HCatMapReduceTest are running and passing against Sequence File, Text, and 
 ORC in addition to RCFile.
 Similar changes should be made to make the tests for HCatLoader and 
 HCatStorer generic so that they can be run against all Hive storage formats.





[jira] [Updated] (HIVE-7420) Parameterize tests for HCatalog Pig interfaces for testing against all storage formats

2014-07-22 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-7420:
-

Status: Patch Available  (was: Open)

 Parameterize tests for HCatalog Pig interfaces for testing against all 
 storage formats
 --

 Key: HIVE-7420
 URL: https://issues.apache.org/jira/browse/HIVE-7420
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: David Chen
Assignee: David Chen
 Attachments: HIVE-7420.1.patch


 Currently, HCatalog tests only test against RCFile with a few testing against 
 ORC. The tests should be covering other Hive storage formats as well.
 HIVE-7286 turns HCatMapReduceTest into a test fixture that can be run with 
 all Hive storage formats and with that patch, all test suites built on 
 HCatMapReduceTest are running and passing against Sequence File, Text, and 
 ORC in addition to RCFile.
 Similar changes should be made to make the tests for HCatLoader and 
 HCatStorer generic so that they can be run against all Hive storage formats.





[jira] [Commented] (HIVE-7466) Rollback HIVE-7409 by violating bylaw

2014-07-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069940#comment-14069940
 ] 

Hive QA commented on HIVE-7466:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12657006/HIVE-7466.1.patch.txt

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5736 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_8
org.apache.hive.jdbc.TestJdbcDriver2.testParentReferences
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12657006

 Rollback HIVE-7409 by violating bylaw
 -

 Key: HIVE-7466
 URL: https://issues.apache.org/jira/browse/HIVE-7466
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-7466.1.patch.txt


 https://issues.apache.org/jira/browse/HIVE-7409?focusedCommentId=14069585&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14069585
 Sorry.





[jira] [Commented] (HIVE-7468) UDF translation needs to use Hive UDF name

2014-07-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069944#comment-14069944
 ] 

Hive QA commented on HIVE-7468:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12657037/HIVE-7468.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'conf/hive-default.xml.template'
Reverted 'common/src/java/org/apache/hive/common/util/AnnotationUtils.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/optimizer/StatsOptimizer.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/plan/GroupByDesc.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionInfo.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/WindowFunctionInfo.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFEvaluator.java'
++ svn status --no-ignore
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target packaging/target 
hbase-handler/target testutils/target jdbc/target metastore/target 
itests/target itests/hcatalog-unit/target itests/test-serde/target 
itests/qtest/target itests/hive-unit-hadoop2/target itests/hive-minikdc/target 
itests/hive-unit/target itests/custom-serde/target itests/util/target 
hcatalog/target hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target 
hwi/target common/target common/src/gen contrib/target service/target 
serde/target beeline/target odbc/target cli/target 
ql/dependency-reduced-pom.xml ql/target
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1612492.

At revision 1612492.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12657037

 UDF translation needs to use Hive UDF name
 --

 Key: HIVE-7468
 URL: https://issues.apache.org/jira/browse/HIVE-7468
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: 

Re: Review Request 23738: HIVE-5160: HS2 should support .hiverc

2014-07-22 Thread Dong Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23738/
---

(Updated July 22, 2014, 8:24 a.m.)


Review request for hive.


Changes
---

An updated patch (HIVE-5160.1.patch) based on review comments


Repository: hive-git


Description
---

HIVE-5160: HS2 should support .hiverc


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/common/cli/HiveFileProcessor.java 
PRE-CREATION 
  common/src/java/org/apache/hadoop/hive/common/cli/IHiveFileProcessor.java 
PRE-CREATION 
  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 593c566 
  conf/hive-default.xml.template 653f5cc 
  service/src/java/org/apache/hive/service/cli/session/HiveSessionBase.java 
a5c8e9b 
  service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
7a3286d 
  
service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
 e79b129 
  service/src/java/org/apache/hive/service/cli/session/SessionManager.java 
6650c05 
  
service/src/test/org/apache/hive/service/cli/session/TestSessionGlobalInitFile.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/23738/diff/


Testing
---

UT passed.


Thanks,

Dong Chen



[jira] [Commented] (HIVE-964) handle skewed keys for a join in a separate job

2014-07-22 Thread wangmeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14069997#comment-14069997
 ] 

wangmeng commented on HIVE-964:
---

If the two join tables have the same big skew key on one value (for example,
in select * from table A join B on A.id = B.id, both table A and B have a
lot of rows with id = 1; in this case a map join will OOM), how is this case
fixed? Will it fall back to a common join?

 handle skewed keys for a join in a separate job
 ---

 Key: HIVE-964
 URL: https://issues.apache.org/jira/browse/HIVE-964
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: He Yongqiang
 Fix For: 0.6.0

 Attachments: hive-964-2009-12-17.txt, hive-964-2009-12-28-2.patch, 
 hive-964-2009-12-29-4.patch, hive-964-2010-01-08.patch, 
 hive-964-2010-01-13-2.patch, hive-964-2010-01-14-3.patch, 
 hive-964-2010-01-15-4.patch


 The skewed keys can be written to a temporary table or file, and a follow-up 
 conditional task can be used to perform the join on those keys.
 As a first step, JDBM can be used for those keys.
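A minimal sketch of the two-pass idea described above: rows with skewed keys are spilled aside during the common join and handled in a follow-up pass. This is illustrative only; Hive uses a temporary table/file plus a conditional task (originally JDBM), and the function and variable names here are invented.

```python
# Sketch of HIVE-964's approach: common join first, skewed keys deferred
# to a second, restricted join pass.
from collections import defaultdict

def skew_aware_join(left, right, skewed_keys):
    """Join two lists of (key, value) pairs, deferring skewed keys."""
    skewed = set(skewed_keys)
    right_index = defaultdict(list)
    spill_left, spill_right = [], []
    # Pass 1: common join, spilling rows with skewed keys aside.
    for k, v in right:
        if k in skewed:
            spill_right.append((k, v))
        else:
            right_index[k].append(v)
    results = []
    for k, v in left:
        if k in skewed:
            spill_left.append((k, v))
        else:
            results.extend((k, v, rv) for rv in right_index[k])
    # Pass 2: follow-up join restricted to the spilled (skewed) keys only.
    spill_index = defaultdict(list)
    for k, v in spill_right:
        spill_index[k].append(v)
    for k, v in spill_left:
        results.extend((k, v, rv) for rv in spill_index[k])
    return results
```

In Hive the second pass runs as a separate job, so the skewed keys never have to fit in one reducer's memory alongside everything else.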



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7469) skew join keys when two join table have the same big skew key

2014-07-22 Thread wangmeng (JIRA)
wangmeng created HIVE-7469:
--

 Summary: skew join keys when two join tables have the same big skew key
 Key: HIVE-7469
 URL: https://issues.apache.org/jira/browse/HIVE-7469
 Project: Hive
  Issue Type: Improvement
Reporter: wangmeng


In https://issues.apache.org/jira/browse/HIVE-964, I have a general idea
about how to deal with skew join keys, but there is a case that troubles me:
the two join tables have the same big skew key on one value.
For example, in select * from table A join B on A.id = B.id, both table A and
B have a lot of rows with id = 1; in this case, if we use a map join to deal
with the skew key id = 1, it may OOM.
So how can this case be fixed? Will it fall back to a common join? Thanks.





[jira] [Updated] (HIVE-7469) skew join keys when two join table have the same big skew key

2014-07-22 Thread wangmeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangmeng updated HIVE-7469:
---

Description: 
In https://issues.apache.org/jira/browse/HIVE-964, I have a general idea
about how to deal with skew join keys, but there is a case that troubles me:
the two join tables have the same big skew key on one value.
For example, in select * from table A join B on A.id = B.id, both table A and
B have a lot of rows with id = 1; in this case, if we use a map join to deal
with the skew key id = 1, it may OOM.
So how can this case be fixed? Will it fall back to a common join? Thanks.

  was:
In https://issues.apache.org/jira/browse/HIVE-964, I  have an general   idea 
about how to  deal with skew join key ,but there has a case  which troubles me:
if the two join tables  have the same big skew key on one value :
for example , select *  from  table A join B  on  A.id=b.id,  both table A  and 
B  have  a lot of  keys on id=1,  in  this  case , if we  use map join  to deal 
with   the skew key  id=1  ,maybe itwill OOM.
so ,how  to fix this  case?  Will  it  rollback  to common  join ? Thanks.


 skew join keys when two join tables have the same big skew key
 -

 Key: HIVE-7469
 URL: https://issues.apache.org/jira/browse/HIVE-7469
 Project: Hive
  Issue Type: Improvement
Reporter: wangmeng

 In https://issues.apache.org/jira/browse/HIVE-964, I have a general idea
 about how to deal with skew join keys, but there is a case that troubles me:
 the two join tables have the same big skew key on one value.
 For example, in select * from table A join B on A.id = B.id, both table A
 and B have a lot of rows with id = 1; in this case, if we use a map join to
 deal with the skew key id = 1, it may OOM.
 So how can this case be fixed? Will it fall back to a common join? Thanks.





[jira] [Updated] (HIVE-7436) Load Spark configuration into Hive driver

2014-07-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-7436:


Attachment: HIVE-7436-Spark.3.patch

Updated the patch; it supports loading Spark properties from the Hive configuration.

 Load Spark configuration into Hive driver
 -

 Key: HIVE-7436
 URL: https://issues.apache.org/jira/browse/HIVE-7436
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Attachments: HIVE-7436-Spark.1.patch, HIVE-7436-Spark.2.patch, 
 HIVE-7436-Spark.3.patch


 Load Spark configuration into the Hive driver; there are 3 ways to set up
 Spark configuration:
 #  Java system properties.
 #  Properties in the Spark configuration file (spark-defaults.conf).
 #  The Hive configuration file (hive-site.xml).
 A configuration lower in this list has higher priority and overwrites an
 earlier configuration with the same property name.
 Please refer to [http://spark.apache.org/docs/latest/configuration.html] for
 all configurable properties of Spark. You can configure Spark in Hive in the
 following ways:
 # Configure through the Spark configuration file.
 #* Create spark-defaults.conf and place it in the /etc/spark/conf
 configuration directory. Configure properties in spark-defaults.conf in Java
 properties format.
 #* Create the $SPARK_CONF_DIR environment variable and set it to the location
 of spark-defaults.conf.
 export SPARK_CONF_DIR=/etc/spark/conf
 #* Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable.
 export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
 # Configure through the Hive configuration file.
 #* Edit hive-site.xml in the Hive conf directory and configure the Spark
 properties there in XML format.
 Hive driver default spark properties:
 ||name||default value||description||
 |spark.master|local|Spark master url.|
 |spark.app.name|Hive on Spark|Default Spark application name.|
 NO PRECOMMIT TESTS. This is for spark-branch only.
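The precedence rule above (Java properties, then spark-defaults.conf, then hive-site.xml, with later sources winning on duplicate names) can be sketched as a layered dictionary merge. This is an illustration of the rule, not Hive code; the function name and sources are hypothetical.

```python
# Later sources override earlier ones, mirroring the priority order:
# Java properties < spark-defaults.conf < hive-site.xml.
def effective_spark_conf(java_props, spark_defaults, hive_site):
    """Merge Spark settings; a later dict wins on duplicate property names."""
    merged = {}
    for source in (java_props, spark_defaults, hive_site):
        merged.update(source)  # same key in a later source overwrites it
    return merged
```

So a spark.master set in hive-site.xml would shadow the value from spark-defaults.conf, which in turn shadows any Java system property.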





[jira] [Updated] (HIVE-7469) skew join keys when two join table have the same big skew key

2014-07-22 Thread wangmeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangmeng updated HIVE-7469:
---

Description: 
In https://issues.apache.org/jira/browse/HIVE-964, I have a general idea
about how to deal with skew join keys; the key point is to use a map join to
deal with the skew key. But there is a case that troubles me:
the two join tables have the same big skew key on one value.
For example, in select * from table A join B on A.id = B.id, both table A and
B have a lot of rows with id = 1; in this case, if we use a map join to deal
with the skew key id = 1, it may OOM.
So how can this case be fixed? Will it fall back to a common join? Thanks.

  was:
In https://issues.apache.org/jira/browse/HIVE-964, I  have a  general   idea 
about how to  deal with skew join key , the key is that  use mapjoin to deal 
with skew key, but there has a case  which troubles me:
if the two join tables  have the same big skew key on one value :
for example , select *  from  table A join B  on  A.id=b.id,  both table A  and 
B  have  a lot of  keys on id=1,  in  this  case , if we  use map join  to deal 
with   the skew key  id=1  ,maybe itwill OOM.
so ,how  to fix this  case?  Will  it  rollback  to common  join ? Thanks.


 skew join keys when two join tables have the same big skew key
 -

 Key: HIVE-7469
 URL: https://issues.apache.org/jira/browse/HIVE-7469
 Project: Hive
  Issue Type: Improvement
Reporter: wangmeng

 In https://issues.apache.org/jira/browse/HIVE-964, I have a general idea
 about how to deal with skew join keys; the key point is to use a map join
 to deal with the skew key. But there is a case that troubles me:
 the two join tables have the same big skew key on one value.
 For example, in select * from table A join B on A.id = B.id, both table A
 and B have a lot of rows with id = 1; in this case, if we use a map join to
 deal with the skew key id = 1, it may OOM.
 So how can this case be fixed? Will it fall back to a common join? Thanks.





[jira] [Work started] (HIVE-7439) Spark job monitoring and error reporting

2014-07-22 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-7439 started by Chengxiang Li.

 Spark job monitoring and error reporting
 

 Key: HIVE-7439
 URL: https://issues.apache.org/jira/browse/HIVE-7439
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li

 After Hive submits a job to the Spark cluster, we need to report the job
 progress, such as the percentage done, to the user. This is especially
 important for long-running queries. Moreover, if there is an error during job
 submission or execution, it's also crucial for Hive to fetch the error log
 and/or stacktrace and feed it back to the user.
 Please refer to the design doc on the wiki for more information.





[jira] [Updated] (HIVE-7469) skew join keys when two join table have the same big skew key

2014-07-22 Thread wangmeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangmeng updated HIVE-7469:
---

Description: 
In https://issues.apache.org/jira/browse/HIVE-964, I have a general idea
about how to deal with skew join keys; the key is to use a map join to deal
with the skew key. But there is a case that troubles me:
the two join tables have the same big skew key on one value.
For example, in select * from table A join B on A.id = B.id, both table A and
B have a lot of rows with id = 1; in this case, if we use a map join to deal
with the skew key id = 1, it may OOM.
So how can this case be fixed? Will it fall back to a common join? Thanks.

  was:
In https://issues.apache.org/jira/browse/HIVE-964, I  have a  general   idea 
about how to  deal with skew join key ,but there has a case  which troubles me:
if the two join tables  have the same big skew key on one value :
for example , select *  from  table A join B  on  A.id=b.id,  both table A  and 
B  have  a lot of  keys on id=1,  in  this  case , if we  use map join  to deal 
with   the skew key  id=1  ,maybe itwill OOM.
so ,how  to fix this  case?  Will  it  rollback  to common  join ? Thanks.


 skew join keys when two join tables have the same big skew key
 -

 Key: HIVE-7469
 URL: https://issues.apache.org/jira/browse/HIVE-7469
 Project: Hive
  Issue Type: Improvement
Reporter: wangmeng

 In https://issues.apache.org/jira/browse/HIVE-964, I have a general idea
 about how to deal with skew join keys; the key is to use a map join to deal
 with the skew key. But there is a case that troubles me:
 the two join tables have the same big skew key on one value.
 For example, in select * from table A join B on A.id = B.id, both table A
 and B have a lot of rows with id = 1; in this case, if we use a map join to
 deal with the skew key id = 1, it may OOM.
 So how can this case be fixed? Will it fall back to a common join? Thanks.





[jira] [Commented] (HIVE-7404) Revoke privilege should support revoking of grant option

2014-07-22 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070006#comment-14070006
 ] 

Jason Dere commented on HIVE-7404:
--

Yes, revoking the admin/grant option for roles/privileges should work for both 
default and SQL standard auth.

 Revoke privilege should support revoking of grant option
 

 Key: HIVE-7404
 URL: https://issues.apache.org/jira/browse/HIVE-7404
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.14.0

 Attachments: HIVE-7404.1.patch, HIVE-7404.2.patch


 Similar to HIVE-6252, but for grant option on privileges:
 {noformat}
 REVOKE GRANT OPTION FOR privilege ON object FROM USER user
 {noformat}





[jira] [Commented] (HIVE-5160) HS2 should support .hiverc

2014-07-22 Thread Dong Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070008#comment-14070008
 ] 

Dong Chen commented on HIVE-5160:
-

Hi Szehon, Lefty, thanks very much for your comments. They are very helpful.
I have updated the code on the review board. The changes are:

1. Fixed typos. I should be more careful about spelling and grammar. ^_^
2. Rebased my local trunk and used the new HiveConf.java.
3. Reworked the initialization order so that we use the proxy Session.

 HS2 should support .hiverc
 --

 Key: HIVE-5160
 URL: https://issues.apache.org/jira/browse/HIVE-5160
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Thejas M Nair
 Attachments: HIVE-5160.patch


 It would be useful to support the .hiverc functionality with hive server2 as 
 well.
 .hiverc is processed by CliDriver, so it works only with hive cli. It would 
 be useful to be able to do things like register a standard set of jars and 
 functions for all users. 
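The requested behavior (run a shared startup script of statements when a session opens) can be sketched as below. This is a hedged illustration, not the HiveFileProcessor API from the patch; the helper name and the jar path in the usage are hypothetical.

```python
# Sketch of ".hiverc"-style startup processing: read an init script, drop
# comments and blank lines, split on semicolons, and hand each statement to
# an executor (e.g. the session's query runner) at session start.
def process_init_file(text, execute):
    """Run each ';'-terminated statement from an init script."""
    statements = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("--"):  # skip blanks and comments
            continue
        statements.append(line)
    for stmt in " ".join(statements).split(";"):
        if stmt.strip():
            execute(stmt.strip())
```

With a global init file on the server, every HS2 session could register the same jars and functions at startup, matching what CliDriver already does for the local CLI.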





Review Request 23799: HIVE-7390: refactor csv output format in RFC mode and add one more option to support formatting as the csv format in hive cli

2014-07-22 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23799/
---

Review request for hive.


Bugs: HIVE-7390
https://issues.apache.org/jira/browse/HIVE-7390


Repository: hive-git


Description
---

HIVE-7390: refactor csv output format in RFC mode and add one more option 
to support formatting as the csv format in hive cli


Diffs
-

  beeline/pom.xml 6ec1d1aff3f35c097aa6054aae84faf2d63854f1 
  beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java 
75f7d38cb97fb753a8f39c19488b9ce0a8d77590 
  beeline/src/java/org/apache/hive/beeline/SeparatedValuesOutputFormat.java 
7853c3f38f3c3fb9ae0b9939c714f1dc940ba053 
  beeline/src/main/resources/BeeLine.properties 
390d062b8dc52dfa790c7351f3db44c1e0dd7e37 
  
itests/hive-unit/src/test/java/org/apache/hive/beeline/TestBeeLineWithArgs.java 
bd97aff5959fd9040fc0f0a1f6b782f2aa6f 
  pom.xml b5a5697e6a3b689c2b244ba0338be541261eaa3d 

Diff: https://reviews.apache.org/r/23799/diff/


Testing
---


Thanks,

cheng xu



[jira] [Created] (HIVE-7470) Wrong Thrift declaration for {{ShowCompactResponseElement}}

2014-07-22 Thread Damien Carol (JIRA)
Damien Carol created HIVE-7470:
--

 Summary: Wrong Thrift declaration for {{ShowCompactResponseElement}}
 Key: HIVE-7470
 URL: https://issues.apache.org/jira/browse/HIVE-7470
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
Priority: Minor
 Fix For: 0.14.0


Prerequisites:
1. Remote metastore
2. Activate ACID and compactions
3. Launch ALTER TABLE foo COMPACT 'bar'
4. Call {{show_compact()}} on the remote metastore

This use case throws an exception in the Thrift stack.





Re: Review Request 23799: HIVE-7390: refactor csv output format in RFC mode and add one more option to support formatting as the csv format in hive cli

2014-07-22 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23799/
---

(Updated July 22, 2014, 8:48 a.m.)


Review request for hive.


Bugs: HIVE-7390
https://issues.apache.org/jira/browse/HIVE-7390


Repository: hive-git


Description
---

HIVE-7390: refactor csv output format in RFC mode and add one more option 
to support formatting as the csv format in hive cli


Diffs
-

  beeline/pom.xml 6ec1d1aff3f35c097aa6054aae84faf2d63854f1 
  beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java 
75f7d38cb97fb753a8f39c19488b9ce0a8d77590 
  beeline/src/java/org/apache/hive/beeline/SeparatedValuesOutputFormat.java 
7853c3f38f3c3fb9ae0b9939c714f1dc940ba053 
  beeline/src/main/resources/BeeLine.properties 
390d062b8dc52dfa790c7351f3db44c1e0dd7e37 
  
itests/hive-unit/src/test/java/org/apache/hive/beeline/TestBeeLineWithArgs.java 
bd97aff5959fd9040fc0f0a1f6b782f2aa6f 
  pom.xml b5a5697e6a3b689c2b244ba0338be541261eaa3d 

Diff: https://reviews.apache.org/r/23799/diff/


Testing
---


Thanks,

cheng xu



[jira] [Updated] (HIVE-7390) Make quote character optional and configurable in BeeLine CSV/TSV output

2014-07-22 Thread ferdinand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ferdinand updated HIVE-7390:


Attachment: HIVE-7390.1.patch

Use RFC format for CSV mode and add one more option to preserve the previous 
Hive CLI format, which is not enclosed in quotes.
An RB entry is created at https://reviews.apache.org/r/23799/

 Make quote character optional and configurable in BeeLine CSV/TSV output
 

 Key: HIVE-7390
 URL: https://issues.apache.org/jira/browse/HIVE-7390
 Project: Hive
  Issue Type: New Feature
  Components: Clients
Affects Versions: 0.13.1
Reporter: Jim Halfpenny
 Attachments: HIVE-7390.1.patch, HIVE-7390.patch


 Currently when either the CSV or TSV output formats are used in beeline each 
 column is wrapped in single quotes. Quote wrapping of columns should be 
 optional and the user should be able to choose the character used to wrap the 
 columns.
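For illustration, Python's csv module shows the behavior being requested: quoting can be made optional and the quote character configurable. This sketches the feature, not Beeline's actual implementation.

```python
# Configurable quoting: wrap every column with a chosen quote character,
# or quote only columns that need it (delimiter/quote char inside).
import csv
import io

row = ["id", "san francisco, ca", 'say "hi"']

def render(quotechar, quoting):
    buf = io.StringIO()
    csv.writer(buf, quotechar=quotechar, quoting=quoting).writerow(row)
    return buf.getvalue().strip()

all_quoted = render("'", csv.QUOTE_ALL)      # every column wrapped in '
minimal = render('"', csv.QUOTE_MINIMAL)     # quote only when necessary
```

QUOTE_MINIMAL leaves plain columns bare, which is what users asking for unquoted CSV/TSV output generally want.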





[jira] [Updated] (HIVE-5160) HS2 should support .hiverc

2014-07-22 Thread Dong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Chen updated HIVE-5160:


Attachment: HIVE-5160.1.patch

 HS2 should support .hiverc
 --

 Key: HIVE-5160
 URL: https://issues.apache.org/jira/browse/HIVE-5160
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Thejas M Nair
 Attachments: HIVE-5160.1.patch, HIVE-5160.patch


 It would be useful to support the .hiverc functionality with hive server2 as 
 well.
 .hiverc is processed by CliDriver, so it works only with hive cli. It would 
 be useful to be able to do things like register a standard set of jars and 
 functions for all users. 





[jira] [Commented] (HIVE-7470) Wrong Thrift declaration for {{ShowCompactResponseElement}}

2014-07-22 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070026#comment-14070026
 ] 

Damien Carol commented on HIVE-7470:


Exceptions are thrown because show_compact() returns a 
{{ShowCompactResponse}} object.

The main property of {{ShowCompactResponse}} is a list of 
{{ShowCompactResponseElement}}.

The Thrift definition of {{ShowCompactResponseElement}} is:

{noformat}
struct ShowCompactResponseElement {
1: required string dbname,
2: required string tablename,
3: required string partitionname,
4: required CompactionType type,
5: required string state,
6: required string workerid,
7: required i64 start,
8: required string runAs,
}
{noformat}

But in the metastore database, the table that stores compaction info is 
defined with non-required (nullable) columns:
{code:sql}
CREATE TABLE COMPACTION_QUEUE
(
  CQ_ID bigint NOT NULL,
  CQ_DATABASE character varying(128) NOT NULL,
  CQ_TABLE character varying(128) NOT NULL,
  CQ_PARTITION character varying(767),
  CQ_STATE character(1) NOT NULL,
  CQ_TYPE character(1) NOT NULL,
  CQ_WORKER_ID character varying(128),
  CQ_START bigint,
  CQ_RUN_AS character varying(128),
  CONSTRAINT COMPACTION_QUEUE_pkey PRIMARY KEY (CQ_ID)
)
{code}

Also, the ACID code stores NULL values in this table.
This throws exceptions when doing {{SHOW COMPACTIONS}} in the CLI, because the 
metastore Thrift server tries to send a {{ShowCompactResponseElement}} with 
required properties set to NULL.

Properties that throw errors:
1. {{partitionname}}, when the table has no partitions
2. {{workerid}}, {{start}}, {{runAs}}, when the table has partitions
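A toy model (not Thrift itself) of why the field declarations matter: a field declared required must not be unset at serialization time, while an optional field may simply be omitted. The schema dict below mirrors a subset of the struct above; the helper is hypothetical.

```python
# "required" + None -> serialization error (the bug seen here);
# "optional" + None -> field is legally skipped (the proposed fix).
REQUIRED, OPTIONAL = "required", "optional"

SCHEMA = {
    "dbname": REQUIRED,
    "tablename": REQUIRED,
    "partitionname": OPTIONAL,   # optional in the fixed declaration
    "workerid": OPTIONAL,
}

def serialize(element):
    out = {}
    for field, requiredness in SCHEMA.items():
        value = element.get(field)
        if value is None:
            if requiredness == REQUIRED:
                raise ValueError(f"required field {field!r} is unset")
            continue  # optional and None: omit from the wire payload
        out[field] = value
    return out
```

This is exactly why a compaction row with a NULL partition or worker serializes fine once those fields are declared optional.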

 Wrong Thrift declaration for {{ShowCompactResponseElement}}
 ---

 Key: HIVE-7470
 URL: https://issues.apache.org/jira/browse/HIVE-7470
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
Priority: Minor
  Labels: metastore, thrift
 Fix For: 0.14.0


 Prerequisites:
 1. Remote metastore
 2. Activate ACID and compactions
 3. Launch ALTER TABLE foo COMPACT 'bar'
 4. Call {{show_compact()}} on the remote metastore
 This use case throws an exception in the Thrift stack.





[jira] [Commented] (HIVE-7470) Wrong Thrift declaration for {{ShowCompactResponseElement}}

2014-07-22 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070043#comment-14070043
 ] 

Damien Carol commented on HIVE-7470:


Changing the Thrift definition from:
{noformat}
struct ShowCompactResponseElement {
...
3: required string partitionname,
4: required CompactionType type,
5: required string state,
6: required string workerid,
7: required i64 start,
8: required string runAs,
}
{noformat}
to:
{noformat}
struct ShowCompactResponseElement {
...
3: optional string partitionname,
4: required CompactionType type,
5: required string state,
6: optional string workerid,
7: optional i64 start,
8: optional string runAs,
}
{noformat}

seems to fix the problem. Pushing the first patch.


 Wrong Thrift declaration for {{ShowCompactResponseElement}}
 ---

 Key: HIVE-7470
 URL: https://issues.apache.org/jira/browse/HIVE-7470
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
Priority: Minor
  Labels: metastore, thrift
 Fix For: 0.14.0


 Prerequisites:
 1. Remote metastore
 2. Activate ACID and compactions
 3. Launch ALTER TABLE foo COMPACT 'bar'
 4. Call {{show_compact()}} on the remote metastore
 This use case throws an exception in the Thrift stack.





[jira] [Updated] (HIVE-7470) Wrong Thrift declaration for {{ShowCompactResponseElement}}

2014-07-22 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-7470:
---

Status: Patch Available  (was: Open)

 Wrong Thrift declaration for {{ShowCompactResponseElement}}
 ---

 Key: HIVE-7470
 URL: https://issues.apache.org/jira/browse/HIVE-7470
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
Priority: Minor
  Labels: metastore, thrift
 Fix For: 0.14.0

 Attachments: HIVE-7470.1.patch


 Prerequisites:
 1. Remote metastore
 2. Activate ACID and compactions
 3. Launch ALTER TABLE foo COMPACT 'bar'
 4. Call {{show_compact()}} on the remote metastore
 This use case throws an exception in the Thrift stack.





[jira] [Updated] (HIVE-7470) Wrong Thrift declaration for {{ShowCompactResponseElement}}

2014-07-22 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-7470:
---

Attachment: HIVE-7470.1.patch

 Wrong Thrift declaration for {{ShowCompactResponseElement}}
 ---

 Key: HIVE-7470
 URL: https://issues.apache.org/jira/browse/HIVE-7470
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
Priority: Minor
  Labels: metastore, thrift
 Fix For: 0.14.0

 Attachments: HIVE-7470.1.patch


 Prerequisites:
 1. Remote metastore
 2. Activate ACID and compactions
 3. Launch ALTER TABLE foo COMPACT 'bar'
 4. Call {{show_compact()}} on the remote metastore
 This use case throws an exception in the Thrift stack.





[jira] [Commented] (HIVE-7292) Hive on Spark

2014-07-22 Thread wangmeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070044#comment-14070044
 ] 

wangmeng commented on HIVE-7292:


This is a very valuable project!

 Hive on Spark
 -

 Key: HIVE-7292
 URL: https://issues.apache.org/jira/browse/HIVE-7292
 Project: Hive
  Issue Type: Improvement
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: Hive-on-Spark.pdf


 Spark, as an open-source data analytics cluster computing framework, has 
 gained significant momentum recently. Many Hive users already have Spark 
 installed as their computing backbone. To take advantage of Hive, they still 
 need to have either MapReduce or Tez on their cluster. This initiative will 
 provide users a new alternative so that they can consolidate their backend. 
 Secondly, providing such an alternative further increases Hive's adoption, as 
 it exposes Spark users to a viable, feature-rich, de facto standard SQL tool 
 on Hadoop.
 Finally, allowing Hive to run on Spark also has performance benefits. Hive 
 queries, especially those involving multiple reducer stages, will run faster, 
 thus improving the user experience as Tez does.
 This is an umbrella JIRA which will cover many coming subtasks. A design doc 
 will be attached here shortly, and will be on the wiki as well. Feedback from 
 the community is greatly appreciated!





Review Request 23800: HIVE-7470: Wrong Thrift declaration for {{ShowCompactResponseElement}}

2014-07-22 Thread Damien Carol

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23800/
---

Review request for hive.


Bugs: HIVE-7470
https://issues.apache.org/jira/browse/HIVE-7470


Repository: hive-git


Description
---

HIVE-7470 Wrong Thrift declaration for {{ShowCompactResponseElement}}

The ShowCompactResponseElement declaration marks all fields as required, but 
the ACID code uses a table with nullable columns.

This throws exceptions in the Thrift stack when calling show_compact() on a 
remote metastore.

This patch is very simple; it changes the definition of the nullable 
properties to optional Thrift properties.


Diffs
-

  metastore/if/hive_metastore.thrift 55f41db 
  metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h f352cd5 
  metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp a6a40fd 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsRequest.java
 4547970 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsResult.java
 68a4219 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatistics.java
 6aecf26 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsResult.java
 a4ae892 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Function.java
 781281a 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsInfoResponse.java
 b782d32 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsResponse.java
 d549ce9 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPrincipalsInRoleResponse.java
 3ef6224 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetRoleGrantsForPrincipalResponse.java
 3ddc1ac 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HeartbeatTxnRangeResponse.java
 f3e3c07 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HiveObjectRef.java
 b22b211 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/LockRequest.java
 cdf6f30 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/OpenTxnsResponse.java
 54955c6 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 7d29d09 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprResult.java
 5ea5a1b 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsRequest.java
 80a151a 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsResult.java
 537db47 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 0c9518a 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrivilegeBag.java
 4285ed8 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RequestPartsSpec.java
 2fcb216 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 58e9028 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowCompactResponse.java
 b962e27 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowCompactResponseElement.java
 47da9b3 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowLocksResponse.java
 1399f8b 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 ab5c0ed 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 813b4f0 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 484bd6a 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/TableStatsRequest.java
 ddf 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/TableStatsResult.java
 e37b75c 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 1e0cdea 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Type.java
 1882b57 
  metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py c71b7b7 
  metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb e21f662 

Diff: https://reviews.apache.org/r/23800/diff/


Testing
---


Thanks,

Damien Carol



Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-22 Thread Tom White


 On July 18, 2014, 1:57 p.m., Tom White wrote:
  serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java, line 112
  https://reviews.apache.org/r/23387/diff/9/?file=634614#file634614line112
 
  Is it possible to default "name" to the table name, "namespace" to the database 
  name, and "doc" to the table comment?
 
 Ashish Singh wrote:
 I was planning to do this, but it slipped my mind. Thanks for pointing 
 this out. I don't think it is possible to retrieve the database name inside 
 the serde. Addressed "name" and "doc".
 
 Tom White wrote:
 Thanks for fixing this. There's no test that name and comment are 
 picked up from the table definition - perhaps you could add one, or at least 
 confirm it manually. I couldn't see where in Hive they get set...
 
 Otherwise, +1 from me - thanks for addressing all my comments. This is a 
 great feature to add.
 
 Ashish Singh wrote:
 Tom, actually it's tested by all the unit tests now. Look at the diffs in 
 https://reviews.apache.org/r/23387/diff/8-9/#index_header.

Unless I am missing something, the unit tests in TestTypeInfoToSchema don't 
test this since they hardcode the table name to "avrotest" and the table 
comment to "This is to test hive-avro".

Perhaps this is tested indirectly through the ql tests since Avro schema 
resolution rules mean that a record schema's name must match for both the 
reader and writer. However, this isn't true for comments (Avro schema doc), and 
it would be good to confirm that inserting data into an Avro-backed Hive table 
creates Avro files with the expected top-level name and comment. 
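For context, the defaults under discussion land in the header of the generated record schema. A hedged illustration of the intended shape (the record and field names here are invented for the example, not taken from the patch):

```json
{
  "type": "record",
  "name": "my_table",
  "doc": "comment from the Hive table definition",
  "fields": [
    {"name": "title", "type": ["null", "string"], "default": null}
  ]
}
```

Checking the "name" and "doc" attributes of the files written by an INSERT would confirm the defaulting end to end.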


- Tom


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/#review48120
---


On July 19, 2014, 5:11 a.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23387/
 ---
 
 (Updated July 19, 2014, 5:11 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6806
 https://issues.apache.org/jira/browse/HIVE-6806
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-6806: Native avro support
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
 1bae0a8fee04049f90b16d813ff4c96707b349c8 
   
 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
  a23ff115512da5fe3167835a88d582c427585b8e 
   ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
 d53ebc65174d66bfeee25fd2891c69c78f9137ee 
   ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_partitioned_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
 PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 0db12437406170686a21b6055d83156fe5d6a55f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
 1fe31e0034f8988d03a0c51a90904bb93e7cb157 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java 
 4564e75d9bfc73f8e10f160e2535f1a08b90ff79 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23387/diff/
 
 
 Testing
 ---
 
 Added qTests and unit tests
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Updated] (HIVE-7374) SHOW COMPACTIONS fail with remote metastore when there are no compactions

2014-07-22 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-7374:
---

Component/s: (was: CLI)
 Thrift API

 SHOW COMPACTIONS fail with remote metastore when there are no compactions
 

 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
  Labels: cli, compaction, metastore
 Fix For: 0.14.0

 Attachments: HIVE-7374.1.patch, HIVE-7374.2.patch


 Prerequisites:
 1. Remote metastore
 2. No compactions
 In CLI after doing this :
 {{show compactions;}}
 Return error :
 {noformat}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.transport.TTransportException
 {noformat}
 In metatore logs :
 {noformat}
 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
 (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing 
 of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
 unset! Struct:ShowCompactResponse(compacts:null)
 at 
 org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}
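The trace above is the classic Thrift pitfall of leaving a required list field null when there is nothing to return. A minimal sketch of the likely fix, with a stand-in class (ShowCompactResponseSketch is not the real generated org.apache.hadoop.hive.metastore.api type):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the generated Thrift response class.
class ShowCompactResponseSketch {
    private List<String> compacts; // "required" field in the Thrift IDL

    public void setCompacts(List<String> compacts) { this.compacts = compacts; }
    public List<String> getCompacts() { return compacts; }

    // Mirrors the generated validate(): a required field must never be null.
    public void validate() {
        if (compacts == null) {
            throw new IllegalStateException("Required field 'compacts' is unset!");
        }
    }
}

public class ShowCompactFix {
    // The likely fix: always set the required list, even when no compactions
    // exist, so serializing the response never sees a null.
    public static ShowCompactResponseSketch buildResponse(List<String> found) {
        ShowCompactResponseSketch resp = new ShowCompactResponseSketch();
        resp.setCompacts(found == null ? new ArrayList<String>() : found);
        return resp;
    }

    public static void main(String[] args) {
        ShowCompactResponseSketch resp = buildResponse(null);
        resp.validate(); // does not throw: empty list instead of null
        System.out.println(resp.getCompacts().size()); // prints 0
    }
}
```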



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7434) beeline should not always enclose the output by default in CSV/TSV mode

2014-07-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070087#comment-14070087
 ] 

Hive QA commented on HIVE-7434:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12657040/HIVE-7434.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 5736 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_8
org.apache.hive.hcatalog.pig.TestOrcHCatLoader.testReadDataPrimitiveTypes
org.apache.hive.jdbc.TestJdbcDriver2.testParentReferences
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12657040

 beeline should not always enclose the output by default in CSV/TSV mode
 ---

 Key: HIVE-7434
 URL: https://issues.apache.org/jira/browse/HIVE-7434
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: ferdinand
 Attachments: HIVE-7434.patch, HIVE-7434.patch


 When using beeline in CSV/TSV mode (via the command !outputformat csv), the 
 output is always enclosed in single quotes. This is, however, not the case for 
 the Hive CLI, so we need to make this enclosing optional.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7172) Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()

2014-07-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070138#comment-14070138
 ] 

Hive QA commented on HIVE-7172:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12657080/HIVE-7172.patch

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 5751 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_join_hash
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_8
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes
org.apache.hive.jdbc.TestJdbcDriver2.testParentReferences
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12657080

 Potential resource leak in HiveSchemaTool#getMetaStoreSchemaVersion()
 -

 Key: HIVE-7172
 URL: https://issues.apache.org/jira/browse/HIVE-7172
 Project: Hive
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
 Attachments: HIVE-7172.patch


 {code}
   ResultSet res = stmt.executeQuery(versionQuery);
   if (!res.next()) {
  throw new HiveMetaException("Didn't find version data in metastore");
   }
   String currentSchemaVersion = res.getString(1);
   metastoreConn.close();
 {code}
 When HiveMetaException is thrown, metastoreConn.close() would be skipped.
 stmt is not closed upon return from the method.
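A sketch of the leak-free pattern, using stand-in Stmt/Res classes in place of JDBC's Statement/ResultSet so it runs without a metastore; try-with-resources (Java 7+) closes both resources even on the exception path:

```java
// Minimal sketch of the leak and its fix. Stmt/Res mirror the
// Statement/ResultSet from the report; the stand-ins record whether
// close() was called so the behavior is observable.
public class LeakFixSketch {
    static class Res implements AutoCloseable {
        static boolean closed = false;
        boolean next() { return false; } // simulate "no version data"
        @Override public void close() { closed = true; }
    }
    static class Stmt implements AutoCloseable {
        static boolean closed = false;
        Res executeQuery(String q) { return new Res(); }
        @Override public void close() { closed = true; }
    }

    // Both resources are closed automatically, even when we throw.
    static String getVersion() throws Exception {
        try (Stmt stmt = new Stmt(); Res res = stmt.executeQuery("...")) {
            if (!res.next()) {
                throw new Exception("Didn't find version data in metastore");
            }
            return "version";
        }
    }

    public static void main(String[] args) {
        try {
            getVersion();
        } catch (Exception expected) {
            // the "no version data" path from the report
        }
        System.out.println(Stmt.closed && Res.closed); // prints true
    }
}
```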



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7397) Set the default threshold for fetch task conversion to 1Gb

2014-07-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070195#comment-14070195
 ] 

Hive QA commented on HIVE-7397:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12657079/HIVE-7397.5.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 5751 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_8
org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testRevokeTimedOutWorkers
org.apache.hive.jdbc.TestJdbcDriver2.testParentReferences
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12657079

 Set the default threshold for fetch task conversion to 1Gb
 --

 Key: HIVE-7397
 URL: https://issues.apache.org/jira/browse/HIVE-7397
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 0.13.1
Reporter: Gopal V
Assignee: Gopal V
  Labels: Performance
 Fix For: 0.14.0

 Attachments: HIVE-7397.1.patch, HIVE-7397.2.patch, HIVE-7397.3.patch, 
 HIVE-7397.4.patch.txt, HIVE-7397.5.patch


 Currently, modifying the value of hive.fetch.task.conversion to "more" 
 results in a dangerous setting where small scale queries function, but large 
 scale queries crash.
 This occurs because the default threshold of -1 means "apply this optimization 
 for a petabyte table".
 I am testing a variety of queries with the setting "more" (to make it the 
 default option as suggested by HIVE-887) and will change the default threshold for 
 this feature to a reasonable 1Gb.
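For anyone trying the proposed defaults ahead of the patch, a hedged sketch of the session-level settings (property names as in HiveConf; the byte value is 1Gb, the threshold proposed here):

```sql
-- Convert simple SELECTs to fetch tasks, but only for inputs up to 1Gb.
set hive.fetch.task.conversion=more;
set hive.fetch.task.conversion.threshold=1073741824;
```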



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7397) Set the default threshold for fetch task conversion to 1Gb

2014-07-22 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070267#comment-14070267
 ] 

Gopal V commented on HIVE-7397:
---

Will fix the null_scan test case, which needs a golden-file update from the 
looks of it.

Other failures look unrelated.

 Set the default threshold for fetch task conversion to 1Gb
 --

 Key: HIVE-7397
 URL: https://issues.apache.org/jira/browse/HIVE-7397
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 0.13.1
Reporter: Gopal V
Assignee: Gopal V
  Labels: Performance
 Fix For: 0.14.0

 Attachments: HIVE-7397.1.patch, HIVE-7397.2.patch, HIVE-7397.3.patch, 
 HIVE-7397.4.patch.txt, HIVE-7397.5.patch


  Currently, modifying the value of hive.fetch.task.conversion to "more" 
  results in a dangerous setting where small scale queries function, but large 
  scale queries crash.
  This occurs because the default threshold of -1 means "apply this optimization 
  for a petabyte table".
  I am testing a variety of queries with the setting "more" (to make it the 
  default option as suggested by HIVE-887) and will change the default threshold for 
  this feature to a reasonable 1Gb.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7420) Parameterize tests for HCatalog Pig interfaces for testing against all storage formats

2014-07-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070347#comment-14070347
 ] 

Hive QA commented on HIVE-7420:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12657078/HIVE-7420.1.patch

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 5822 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_8
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[1]
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[2]
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[3]
org.apache.hive.jdbc.TestJdbcDriver2.testParentReferences
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12657078

 Parameterize tests for HCatalog Pig interfaces for testing against all 
 storage formats
 --

 Key: HIVE-7420
 URL: https://issues.apache.org/jira/browse/HIVE-7420
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: David Chen
Assignee: David Chen
 Attachments: HIVE-7420.1.patch


 Currently, HCatalog tests only test against RCFile with a few testing against 
 ORC. The tests should be covering other Hive storage formats as well.
 HIVE-7286 turns HCatMapReduceTest into a test fixture that can be run with 
 all Hive storage formats, and with that patch all test suites built on 
 HCatMapReduceTest are running and passing against SequenceFile, Text, and 
 ORC in addition to RCFile.
 Similar changes should be made to make the tests for HCatLoader and 
 HCatStorer generic so that they can be run against all Hive storage formats.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-07-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070405#comment-14070405
 ] 

Sergio Peña commented on HIVE-7373:
---

Is it correct to add extra zeros to the decimal places if the scale of a value is 
less than the declared decimal scale?

For instance:
0.0   in decimal(5,4)  may be  0.0000
2.56 in decimal(5,4)  may be  2.5600

I have seen this format enforced in other databases and applications, and it 
helps users to have a better view of their decimal data.

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 Currently Hive blindly removes trailing zeros of a decimal input number as 
 a sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In a decimal context, the number 3.140 has a different semantic meaning from 
 the number 3.14. Removing trailing zeroes loses that meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes trailing zeros, 
 and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1,1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeroes (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and 0.000 will be 
 represented as 0.0 (precision=1, scale=1) internally.
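The scale-vs-value distinction at stake can be seen directly with java.math.BigDecimal, which underlies Hive's decimal support; a small sketch (plain JDK, not Hive code):

```java
import java.math.BigDecimal;

// "3.140" and "3.14" compare equal in value but carry different scales,
// so stripping trailing zeros loses information; and "0.0" really does
// have precision 1 and scale 1, matching the (1, 1) case in the report.
public class TrailingZeros {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("3.140");
        BigDecimal b = new BigDecimal("3.14");
        System.out.println(a.compareTo(b) == 0);             // prints true  (same value)
        System.out.println(a.equals(b));                     // prints false (scale 3 vs 2)
        System.out.println(a.precision() + "," + a.scale()); // prints 4,3

        BigDecimal zero = new BigDecimal("0.0");
        System.out.println(zero.precision() + "," + zero.scale()); // prints 1,1
    }
}
```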



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7471) testing for equality of decimal columns does not work

2014-07-22 Thread Raj Thapar (JIRA)
Raj Thapar created HIVE-7471:


 Summary: testing for equality of decimal columns does not work
 Key: HIVE-7471
 URL: https://issues.apache.org/jira/browse/HIVE-7471
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
 Environment: x86_64 x86_64 x86_64 GNU/Linux
Reporter: Raj Thapar


I am using Hive version 0.11 and am facing the following problem.

I have 2 tables: venus_all_min_prices and venus_all_prices. They have a column 
bp_price_paid of type decimal. When I try to query for equality on this 
column between the 2 tables, I don't get any results. However, if I use one specific 
value and use 2 conditions against this value (one for each column ANDed 
together), it does return results. 

i.e. 
1. venus_all_min_prices.bp_price_paid = venus_all_prices.bp_price_paid does not 
return any values
2. venus_all_min_prices.bp_price_paid = 59.99 and 
venus_all_prices.bp_price_paid = 59.99 returns results

What should I do to make (1) work?

My table definitions are below:

CREATE  TABLE venus_all_min_prices(
  bp_price_paid decimal,
  opr_sty_clr_cd string)
PARTITIONED BY (
  partition_timestamp string)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://host:8020/user/user/database/temp_data_location/venus_all_min_prices'
TBLPROPERTIES (
  'numPartitions'='1',
  'numFiles'='1',
  'transient_lastDdlTime'='1406040417',
  'numRows'='0',
  'totalSize'='2507',
  'rawDataSize'='0')

CREATE  TABLE venus_all_prices(
  bp_price_paid decimal,
  ord_key bigint,
  oms_ord_ln_key string,
  opr_sty_clr_cd string)
PARTITIONED BY (
  partition_timestamp string)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://host:8020/user/user/database/temp_data_location/venus_all_prices'
TBLPROPERTIES (
  'numPartitions'='3',
  'numFiles'='11',
  'transient_lastDdlTime'='1405979150',
  'numRows'='0',
  'totalSize'='4845600',
  'rawDataSize'='0')  
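As a workaround sketch for newer Hive versions (0.13+, where decimal(p,s) is available), pinning both sides of the predicate to an explicit precision and scale should make the join behave; the (10,2) chosen here is an assumption about the data, not taken from the report:

```sql
-- Hedged sketch: force both decimal columns to the same precision/scale
-- before comparing, instead of relying on the bare `decimal` type of 0.11.
SELECT m.opr_sty_clr_cd, m.bp_price_paid
FROM venus_all_min_prices m
JOIN venus_all_prices p
  ON CAST(m.bp_price_paid AS DECIMAL(10,2)) = CAST(p.bp_price_paid AS DECIMAL(10,2));
```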



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7470) Wrong Thrift declaration for {{ShowCompactResponseElement}}

2014-07-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070435#comment-14070435
 ] 

Hive QA commented on HIVE-7470:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12657094/HIVE-7470.1.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 5751 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_8
org.apache.hive.jdbc.TestJdbcDriver2.testParentReferences
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12657094

 Wrong Thrift declaration for {{ShowCompactResponseElement}}
 ---

 Key: HIVE-7470
 URL: https://issues.apache.org/jira/browse/HIVE-7470
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
Priority: Minor
  Labels: metastore, thrift
 Fix For: 0.14.0

 Attachments: HIVE-7470.1.patch


 Prerequisites:
 1. Remote metastore
 2. Activate ACID and compactions
 3. Launch ALTER TABLE foo COMPACT 'bar'
 4. Call {{show_compact()}} on remote metastore
 This use case throws exception in Thrift stack.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-22 Thread Ashish Singh


 On July 18, 2014, 1:57 p.m., Tom White wrote:
  serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java, line 112
  https://reviews.apache.org/r/23387/diff/9/?file=634614#file634614line112
 
  Is it possible to default "name" to the table name, "namespace" to the database 
  name, and "doc" to the table comment?
 
 Ashish Singh wrote:
 I was planning to do this, but it slipped my mind. Thanks for pointing 
 this out. I don't think it is possible to retrieve the database name inside 
 the serde. Addressed "name" and "doc".
 
 Tom White wrote:
 Thanks for fixing this. There's no test that name and comment are 
 picked up from the table definition - perhaps you could add one, or at least 
 confirm it manually. I couldn't see where in Hive they get set...
 
 Otherwise, +1 from me - thanks for addressing all my comments. This is a 
 great feature to add.
 
 Ashish Singh wrote:
 Tom, actually it's tested by all the unit tests now. Look at the diffs in 
 https://reviews.apache.org/r/23387/diff/8-9/#index_header.
 
 Tom White wrote:
 Unless I am missing something, the unit tests in TestTypeInfoToSchema 
 don't test this since they hardcode the table name to "avrotest" and the 
 table comment to "This is to test hive-avro".
 
 Perhaps this is tested indirectly through the ql tests since Avro schema 
 resolution rules mean that a record schema's name must match for both the 
 reader and writer. However, this isn't true for comments (Avro schema doc), 
 and it would be good to confirm that inserting data into an Avro-backed Hive 
 table creates Avro files with the expected top-level name and comment.

Tom, my bad. I thought we were talking about having the Hive typeinfo in the doc for 
the corresponding Avro schema. 

I did verify that the top-level name and comment are created as expected 
before posting the patch here. I logged the created Avro schema in AvroSerDe.java, 
and that came in handy to verify this.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/#review48120
---


On July 19, 2014, 5:11 a.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23387/
 ---
 
 (Updated July 19, 2014, 5:11 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6806
 https://issues.apache.org/jira/browse/HIVE-6806
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-6806: Native avro support
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
 1bae0a8fee04049f90b16d813ff4c96707b349c8 
   
 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
  a23ff115512da5fe3167835a88d582c427585b8e 
   ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
 d53ebc65174d66bfeee25fd2891c69c78f9137ee 
   ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_partitioned_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
 PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 0db12437406170686a21b6055d83156fe5d6a55f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
 1fe31e0034f8988d03a0c51a90904bb93e7cb157 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java 
 4564e75d9bfc73f8e10f160e2535f1a08b90ff79 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23387/diff/
 
 
 Testing
 ---
 
 Added qTests and unit tests
 
 
 Thanks,
 
 Ashish Singh
 




Do we support == operator?

2014-07-22 Thread Yin Huai
Hi,

Based on our language manual (
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF), seems
we do not support the == operator. However, in FunctionRegistry, we treat
"==" as "=". I guess we want to make "==" invalid and throw an exception
when a user uses it?
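For the record, both spellings currently behave identically because of that FunctionRegistry aliasing; a quick illustration (the src table is the standard Hive test table, used here as an assumption):

```sql
-- Both return the same rows today; only "=" is documented.
SELECT count(*) FROM src WHERE key = '100';
SELECT count(*) FROM src WHERE key == '100';
```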

Thanks,

Yin


[jira] [Commented] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-07-22 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070467#comment-14070467
 ] 

Xuefu Zhang commented on HIVE-7373:
---

For the same reason that we don't trim trailing zeros, personally I don't think 
we should append zeros either. Some databases may choose to do so when formatting 
the output; to me this seems questionable, or at least non-essential.

This JIRA is to deal with trailing zero cases.

 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 Currently Hive blindly removes trailing zeros of a decimal input number as a 
 sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In a decimal context, the number 3.140 has a different semantic meaning from 
 the number 3.14. Removing trailing zeros makes that meaning lost.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes trailing zeros, 
 and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeros (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and so on will be 
 represented as 0.0 (precision=1, scale=1) internally.
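The normalization described above behaves like java.math.BigDecimal.stripTrailingZeros(), which backs Hive's decimal type; a minimal sketch (the zero case assumes Java 8+, where stripping "0.0" yields scale 0):

```java
import java.math.BigDecimal;

public class TrailingZeros {
    public static void main(String[] args) {
        BigDecimal pi = new BigDecimal("3.140");        // precision 4, scale 3
        BigDecimal stripped = pi.stripTrailingZeros();  // 3.14 -- trailing zero lost
        System.out.println(pi.scale() + " -> " + stripped.scale()); // prints "3 -> 2"

        BigDecimal zero = new BigDecimal("0.0");        // (p, s) = (1, 1)
        // On Java 8+, scale drops from 1 to 0, i.e. (p, s) becomes (1, 0)
        System.out.println(zero.stripTrailingZeros().scale());
    }
}
```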



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7471) testing for equality of decimal columns does not work

2014-07-22 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070488#comment-14070488
 ] 

Xuefu Zhang commented on HIVE-7471:
---

[~rthapar] The decimal data type doesn't really work well in Hive 0.11. It has 
since been enhanced quite a lot, so you should look into Hive 0.13 instead.

By the way, JIRA isn't a place to ask questions, but to report problems or 
request features. For usage questions, the Hive user list is the best option.

 testing for equality of decimal columns does not work
 -

 Key: HIVE-7471
 URL: https://issues.apache.org/jira/browse/HIVE-7471
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
 Environment: x86_64 x86_64 x86_64 GNU/Linux
Reporter: Raj Thapar

 I am using Hive version 0.11 and am facing the following problem:
 I have 2 tables: venus_all_min_prices and venus_all_prices. They have a 
 column bp_price_paid of type decimal. When I try to query for equality 
 on this column between the 2 tables, I don't get any results. However, if I use 
 one specific value and use 2 conditions against this value (one for each 
 column, ANDed together), it does return results. 
 i.e. 
 1. venus_all_min_prices.bp_price_paid = venus_all_prices.bp_price_paid does 
 not return any values
 2. venus_all_min_prices.bp_price_paid = 59.99 and 
 venus_all_prices.bp_price_paid = 59.99: returns results
 What should I do to make (1) work?
 My table definitions are below:
 CREATE  TABLE venus_all_min_prices(
   bp_price_paid decimal,
   opr_sty_clr_cd string)
 PARTITIONED BY (
   partition_timestamp string)
 ROW FORMAT DELIMITED
   FIELDS TERMINATED BY ','
 STORED AS INPUTFORMAT
   'org.apache.hadoop.mapred.TextInputFormat'
 OUTPUTFORMAT
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
 LOCATION
   
 'hdfs://host:8020/user/user/database/temp_data_location/venus_all_min_prices'
 TBLPROPERTIES (
   'numPartitions'='1',
   'numFiles'='1',
   'transient_lastDdlTime'='1406040417',
   'numRows'='0',
   'totalSize'='2507',
   'rawDataSize'='0')
 CREATE  TABLE venus_all_prices(
   bp_price_paid decimal,
   ord_key bigint,
   oms_ord_ln_key string,
   opr_sty_clr_cd string)
 PARTITIONED BY (
   partition_timestamp string)
 ROW FORMAT DELIMITED
   FIELDS TERMINATED BY ','
 STORED AS INPUTFORMAT
   'org.apache.hadoop.mapred.TextInputFormat'
 OUTPUTFORMAT
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
 LOCATION
   'hdfs://host:8020/user/user/database/temp_data_location/venus_all_prices'
 TBLPROPERTIES (
   'numPartitions'='3',
   'numFiles'='11',
   'transient_lastDdlTime'='1405979150',
   'numRows'='0',
   'totalSize'='4845600',
   'rawDataSize'='0')  
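The symptom is reminiscent of scale-sensitive decimal comparison. In java.math.BigDecimal, which backs Hive's decimal type, equals() distinguishes values that differ only in scale while compareTo() does not; an illustration, not Hive 0.11's exact code path:

```java
import java.math.BigDecimal;

public class DecimalCompare {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("59.99");
        BigDecimal b = new BigDecimal("59.990");
        // equals() is scale-sensitive; compareTo() compares numeric value only
        System.out.println(a.equals(b));         // prints "false"
        System.out.println(a.compareTo(b) == 0); // prints "true"
    }
}
```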





Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-22 Thread Ashish Singh


 On July 19, 2014, 12:43 a.m., David Chen wrote:
  serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java,
   line 294
  https://reviews.apache.org/r/23387/diff/9/?file=634616#file634616line294
 
  It would improve maintainability to keep the test schemas in separate 
  .avsc files under serde/src/test/resources rather than inline in the file. 
  You can use Guava's Resources class to get the file and construct the 
  schema. For example:
  
  Schema expectedSchema = new Schema.Parser().parse(
      Resources.getResource("record1.avsc").openStream());
 
 Ashish Singh wrote:
 David, as the tests have a lot in common in their schemas, I am using a method 
 to generate the common schema part, and each test only provides the part of the 
 schema that is specific to the test. This made my tests have much less LOC. 
 If I create a .avsc file for each test, it will be much more cumbersome both 
 to maintain and to add new tests.
 
 David Chen wrote:
 Hi Ashish, sorry I was a bit unclear. The tests for the individual data 
 types are fine. I thought that moving just the two large schemas into their 
 own files may make them easier to maintain since keeping them inline requires 
 a large number of escape characters.

David, it's done now.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/#review48168
---


On July 22, 2014, 5:13 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23387/
 ---
 
 (Updated July 22, 2014, 5:13 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6806
 https://issues.apache.org/jira/browse/HIVE-6806
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-6806: Native avro support
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
 1bae0a8fee04049f90b16d813ff4c96707b349c8 
   
 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
  a23ff115512da5fe3167835a88d582c427585b8e 
   ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
 d53ebc65174d66bfeee25fd2891c69c78f9137ee 
   ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_partitioned_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
 PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 0db12437406170686a21b6055d83156fe5d6a55f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
 1fe31e0034f8988d03a0c51a90904bb93e7cb157 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java 
 4564e75d9bfc73f8e10f160e2535f1a08b90ff79 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/resources/avro-nested-struct.avsc PRE-CREATION 
   serde/src/test/resources/avro-struct.avsc PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23387/diff/
 
 
 Testing
 ---
 
 Added qTests and unit tests
 
 
 Thanks,
 
 Ashish Singh
 




Re: Review Request 23744: HIVE-7451 : pass function name in create/drop function to authorization api

2014-07-22 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23744/#review48353
---



ql/src/java/org/apache/hadoop/hive/ql/metadata/SessionHiveMetaStoreClient.java
https://reviews.apache.org/r/23744/#comment84900

No changes relevant to the patch here - only whitespace/import changes. I guess it's not 
so bad since this seems to be the only such file; I would make more of a stink 
if there were lots of files like this in the patch.



ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java
https://reviews.apache.org/r/23744/#comment84905

So there will be 2 WriteEntities generated for CREATE FUNCTION - one on the 
DB object (to check admin privs), and one on the function being created. Can 
you explain the use case here?

Also, looking at the way the FUNCTION Entity is validated in 
SQLStdHiveAuthorizationValidator.java, it simply checks for admin.  Would we be 
able to just replace the older Database Entity check with the Function Entity?



ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java
https://reviews.apache.org/r/23744/#comment84904

Temp functions don't actually have an associated database; might it be more 
appropriate to set a null DB here?

The default DB is used for temp functions in the WriteEntity created at line 174 
just to enable us to check that the user has admin privileges for creating temp 
functions.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java
https://reviews.apache.org/r/23744/#comment84903

Should the database name (for metastore functions only; not really applicable 
for temp functions) be included here as well?



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
https://reviews.apache.org/r/23744/#comment84939

If we ever support execute privileges for UDFs, then for that case we would 
likely want to check the metastore for execute privileges here. Would there be 
a way to have both kinds of privilege-checking behavior here? 


- Jason Dere


On July 21, 2014, 5:33 p.m., Thejas Nair wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23744/
 ---
 
 (Updated July 21, 2014, 5:33 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7451
 https://issues.apache.org/jira/browse/HIVE-7451
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 see https://issues.apache.org/jira/browse/HIVE-7451
 
 
 Diffs
 -
 
   contrib/src/test/results/clientnegative/case_with_row_sequence.q.out 
 db564ff 
   contrib/src/test/results/clientnegative/invalid_row_sequence.q.out 89646a2 
   contrib/src/test/results/clientnegative/udtf_explode2.q.out 87dc534 
   contrib/src/test/results/clientpositive/dboutput.q.out 909ae2e 
   contrib/src/test/results/clientpositive/lateral_view_explode2.q.out 4b849fa 
   contrib/src/test/results/clientpositive/udaf_example_avg.q.out 3786078 
   contrib/src/test/results/clientpositive/udaf_example_group_concat.q.out 
 83b4802 
   contrib/src/test/results/clientpositive/udaf_example_max.q.out b68ec61 
   contrib/src/test/results/clientpositive/udaf_example_max_n.q.out 62632e3 
   contrib/src/test/results/clientpositive/udaf_example_min.q.out ec3a134 
   contrib/src/test/results/clientpositive/udaf_example_min_n.q.out 2e802e0 
   contrib/src/test/results/clientpositive/udf_example_add.q.out 4510ba4 
   contrib/src/test/results/clientpositive/udf_example_arraymapstruct.q.out 
 1e3bca4 
   contrib/src/test/results/clientpositive/udf_example_format.q.out 83e508a 
   contrib/src/test/results/clientpositive/udf_row_sequence.q.out 3b58cb5 
   contrib/src/test/results/clientpositive/udtf_explode2.q.out 47512c3 
   contrib/src/test/results/clientpositive/udtf_output_on_close.q.out 4ce0481 
   
 itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
  3618185 
   ql/src/java/org/apache/hadoop/hive/ql/Driver.java c89f90c 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 40ec4e5 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/Entity.java 2a38aad 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 26836b6 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/SessionHiveMetaStoreClient.java
  37b1669 
   ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java 
 e64ef76 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationUtils.java
  604c39d 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java
  8cdff5b 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java
  1ac6cab 
   
 

Re: hive 13: dynamic partition inserts

2014-07-22 Thread Prasanth Jayachandran
Hi Vishnu

Yes, there is a change in the way dynamic partitions are inserted in Hive 13. The 
new dynamic partitioning is highly scalable and uses much less memory. Here is 
the related JIRA: https://issues.apache.org/jira/browse/HIVE-6455. 

Setting hive.optimize.sort.dynamic.partition to false will fall back to the old 
way of insertion. If your destination table uses columnar formats like ORC, 
Parquet, etc., then it makes sense to leave the optimization ON, as columnar formats 
need some buffer space for each column before flushing to disk. That buffer space 
(runtime memory) quickly shoots up when there are lots of partition column 
values and columns. HIVE-6455 addresses this issue.
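For reference, falling back looks like the following (table and column names are illustrative):

```sql
-- Revert to the pre-HIVE-6455 insert path (one writer per mapper, no extra reducers)
SET hive.optimize.sort.dynamic.partition=false;
SET hive.merge.mapfiles=false;

INSERT OVERWRITE TABLE target PARTITION (dt)
SELECT col1, col2, dt FROM source;
```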

Thanks
Prasanth Jayachandran

On Jul 22, 2014, at 10:51 AM, Gajendran, Vishnu vis...@amazon.com wrote:

 adding u...@hive.apache.org for wider audience
 From: Gajendran, Vishnu
 Sent: Tuesday, July 22, 2014 10:42 AM
 To: dev@hive.apache.org
 Subject: hive 13: dynamic partition inserts
 
 Hello,
 
 I am seeing a difference between hive 11 and hive 13 when inserting to a 
 table with dynamic partitions.
 
 In Hive 11, when I set hive.merge.mapfiles=false before doing a dynamic 
 partition insert, I see the number of files (generated by each mapper) in the 
 specified hdfs location as expected. But, in Hive 13, when I set 
 hive.merge.mapfiles=false, I just see one file in the specified hdfs location for 
 the same query. I think Hive is not honoring the hive.merge.mapfiles 
 parameter and merged all the mapper outputs into a single file.
 
 In Hive 11, 19 mappers were executed for the dynamic partition insert task. 
 But in Hive 13, 19 mappers and 2 reducers were executed.
 
 When I checked the query plan for hive 11, there is only a map operator task 
 for dynamic partition insert. But, in hive 13, I see both map operator and 
 reduce operator task.
 
 Are there any changes in Hive 13 regarding dynamic partition inserts? Any 
 comments on this issue are greatly appreciated.
 
 Thanks,
 vishnu


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


[jira] [Updated] (HIVE-7436) Load Spark configuration into Hive driver

2014-07-22 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7436:
--

   Resolution: Fixed
Fix Version/s: spark-branch
   Status: Resolved  (was: Patch Available)

Patch committed to spark branch. Thanks to Chengxiang for the contribution.

 Load Spark configuration into Hive driver
 -

 Key: HIVE-7436
 URL: https://issues.apache.org/jira/browse/HIVE-7436
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Fix For: spark-branch

 Attachments: HIVE-7436-Spark.1.patch, HIVE-7436-Spark.2.patch, 
 HIVE-7436-Spark.3.patch


 Load Spark configuration into the Hive driver. There are 3 ways to set up Spark 
 configuration:
 #  Java properties.
 #  Properties in the Spark configuration file (spark-defaults.conf).
 #  Properties in the Hive configuration file (hive-site.xml).
 Configurations later in this list have higher priority and overwrite earlier 
 configurations with the same property name.
 Please refer to [http://spark.apache.org/docs/latest/configuration.html] for 
 all configurable properties of Spark. You can configure Spark in Hive in the 
 following ways:
 # Configure through the Spark configuration file.
 #* Create spark-defaults.conf and place it in the /etc/spark/conf 
 configuration directory; configure properties in spark-defaults.conf in Java 
 properties format.
 #* Create the $SPARK_CONF_DIR environment variable and set it to the location 
 of spark-defaults.conf.
 export SPARK_CONF_DIR=/etc/spark/conf
 #* Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable.
 export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
 # Configure through the Hive configuration file.
 #* Edit hive-site.xml in the Hive conf directory and configure the Spark 
 properties there in XML format.
 Hive driver default Spark properties:
 ||name||default value||description||
 |spark.master|local|Spark master URL.|
 |spark.app.name|Hive on Spark|Default Spark application name.|
 NO PRECOMMIT TESTS. This is for spark-branch only.
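The precedence order can be sketched as a last-writer-wins overlay of java.util.Properties (property values illustrative; not Hive's actual loading code):

```java
import java.util.Properties;

public class SparkConfPrecedence {
    // Later setProperty() calls win, mirroring the precedence order above:
    // Java properties < spark-defaults.conf < hive-site.xml
    static Properties merged() {
        Properties conf = new Properties();
        conf.setProperty("spark.master", "local");             // Java property default
        conf.setProperty("spark.master", "spark://host:7077"); // spark-defaults.conf
        conf.setProperty("spark.master", "yarn");              // hive-site.xml
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(merged().getProperty("spark.master")); // prints "yarn"
    }
}
```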





RE: hive 13: dynamic partition inserts

2014-07-22 Thread Gajendran, Vishnu
Hi Prasanth,

 Thanks a lot for your quick response.

From: Prasanth Jayachandran [pjayachand...@hortonworks.com]
Sent: Tuesday, July 22, 2014 11:28 AM
To: u...@hive.apache.org
Cc: dev@hive.apache.org
Subject: Re: hive 13: dynamic partition inserts



[jira] [Commented] (HIVE-7436) Load Spark configuration into Hive driver

2014-07-22 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14070669#comment-14070669
 ] 

Xuefu Zhang commented on HIVE-7436:
---

Thanks, [~chengxiang li]. Patch looks good to me. I will commit it shortly.

As to capacity control, we don't have to worry about it for now, but feel free 
to create a JIRA for that.

 Load Spark configuration into Hive driver
 -

 Key: HIVE-7436
 URL: https://issues.apache.org/jira/browse/HIVE-7436
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
 Attachments: HIVE-7436-Spark.1.patch, HIVE-7436-Spark.2.patch, 
 HIVE-7436-Spark.3.patch


 Load Spark configuration into the Hive driver. There are 3 ways to set up Spark 
 configuration:
 #  Java properties.
 #  Properties in the Spark configuration file (spark-defaults.conf).
 #  Properties in the Hive configuration file (hive-site.xml).
 Configurations later in this list have higher priority and overwrite earlier 
 configurations with the same property name.
 Please refer to [http://spark.apache.org/docs/latest/configuration.html] for 
 all configurable properties of Spark. You can configure Spark in Hive in the 
 following ways:
 # Configure through the Spark configuration file.
 #* Create spark-defaults.conf and place it in the /etc/spark/conf 
 configuration directory; configure properties in spark-defaults.conf in Java 
 properties format.
 #* Create the $SPARK_CONF_DIR environment variable and set it to the location 
 of spark-defaults.conf.
 export SPARK_CONF_DIR=/etc/spark/conf
 #* Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable.
 export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
 # Configure through the Hive configuration file.
 #* Edit hive-site.xml in the Hive conf directory and configure the Spark 
 properties there in XML format.
 Hive driver default Spark properties:
 ||name||default value||description||
 |spark.master|local|Spark master URL.|
 |spark.app.name|Hive on Spark|Default Spark application name.|
 NO PRECOMMIT TESTS. This is for spark-branch only.





[jira] [Assigned] (HIVE-7463) Add rule for transitive inference

2014-07-22 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani reassigned HIVE-7463:
---

Assignee: Harish Butani

 Add rule for transitive inference
 -

 Key: HIVE-7463
 URL: https://issues.apache.org/jira/browse/HIVE-7463
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: Harish Butani

 R1.x = R2.x and R1.x = 10 implies R2.x = 10
 This applies to inner joins, some forms of outer join conditions, and filters.
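A hypothetical query illustrating the rule (tables R1 and R2 from the description):

```sql
-- From R1.x = R2.x and R1.x = 10, the optimizer can infer R2.x = 10
-- and push that predicate down to R2's scan:
SELECT *
FROM R1 JOIN R2 ON (R1.x = R2.x)
WHERE R1.x = 10;
```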





[jira] [Updated] (HIVE-7026) Support newly added role related APIs for v1 authorizer

2014-07-22 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7026:


   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks for the contribution Navis!


 Support newly added role related APIs for v1 authorizer
 ---

 Key: HIVE-7026
 URL: https://issues.apache.org/jira/browse/HIVE-7026
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Navis
Assignee: Navis
Priority: Trivial
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7026.1.patch.txt, HIVE-7026.2.patch.txt, 
 HIVE-7026.3.patch.txt, HIVE-7026.4.patch.txt


 Support SHOW_CURRENT_ROLE and SHOW_ROLE_PRINCIPALS for v1 authorizer. 





[jira] [Updated] (HIVE-7026) Support newly added role related APIs for v1 authorizer

2014-07-22 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7026:


Labels: TODOC14  (was: )

 Support newly added role related APIs for v1 authorizer
 ---

 Key: HIVE-7026
 URL: https://issues.apache.org/jira/browse/HIVE-7026
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Navis
Assignee: Navis
Priority: Trivial
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-7026.1.patch.txt, HIVE-7026.2.patch.txt, 
 HIVE-7026.3.patch.txt, HIVE-7026.4.patch.txt


 Support SHOW_CURRENT_ROLE and SHOW_ROLE_PRINCIPALS for v1 authorizer. 





[jira] [Created] (HIVE-7472) CLONE - Import fails for tables created with default text, sequence and orc file formats using HCatalog API

2014-07-22 Thread Sushanth Sowmyan (JIRA)
Sushanth Sowmyan created HIVE-7472:
--

 Summary: CLONE - Import fails for tables created with default 
text, sequence and orc file formats using HCatalog API
 Key: HIVE-7472
 URL: https://issues.apache.org/jira/browse/HIVE-7472
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.11.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0


A table was created using the HCatalog API without specifying the file format; it 
defaults to:
{code}
fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
{code}

But, when hive fetches the table from the metastore, it strangely replaces the 
output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
and the comparison between source and target table fails.

The code in org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable 
does a string comparison of classes and fails.
{code}
  // check IF/OF/Serde
  String existingifc = table.getInputFormatClass().getName();
  String importedifc = tableDesc.getInputFormat();
  String existingofc = table.getOutputFormatClass().getName();
  String importedofc = tableDesc.getOutputFormat();
  if ((!existingifc.equals(importedifc))
  || (!existingofc.equals(importedofc))) {
throw new SemanticException(
ErrorMsg.INCOMPATIBLE_SCHEMA
.getMsg("Table inputformat/outputformats do not match"));
  }
{code}

This only affects tables with text and sequence file formats but not rc or orc.
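One way to make such a check tolerant (a sketch only; names hypothetical, not the actual HIVE-5550 fix) is to normalize known-equivalent class names before the string comparison:

```java
import java.util.HashMap;
import java.util.Map;

public class FormatCheck {
    // Hypothetical normalization table: map the deprecated output format name
    // to its Hive replacement before comparing.
    private static final Map<String, String> EQUIVALENT = new HashMap<>();
    static {
        EQUIVALENT.put("org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat",
                       "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat");
    }

    static String normalize(String className) {
        // Unknown class names pass through unchanged
        return EQUIVALENT.getOrDefault(className, className);
    }

    public static void main(String[] args) {
        String existingofc = "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";
        String importedofc = "org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat";
        // Plain equals() fails even though the formats are behaviorally equivalent;
        // comparing normalized names succeeds.
        System.out.println(normalize(existingofc).equals(normalize(importedofc))); // prints "true"
    }
}
```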





[jira] [Updated] (HIVE-7472) CLONE - Import fails for tables created with default text, sequence and orc file formats using HCatalog API

2014-07-22 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-7472:
---

Affects Version/s: (was: 0.11.0)
   0.14.0
   0.13.1

 CLONE - Import fails for tables created with default text, sequence and orc 
 file formats using HCatalog API
 ---

 Key: HIVE-7472
 URL: https://issues.apache.org/jira/browse/HIVE-7472
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.14.0, 0.13.1
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan

 A table was created using the HCatalog API without specifying the file format; 
 it defaults to:
 {code}
 fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
 outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
 {code}
 But, when hive fetches the table from the metastore, it strangely replaces 
 the output format with 
 org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
 and the comparison between source and target table fails.
 The code in org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable 
 does a string comparison of classes and fails.
 {code}
   // check IF/OF/Serde
   String existingifc = table.getInputFormatClass().getName();
   String importedifc = tableDesc.getInputFormat();
   String existingofc = table.getOutputFormatClass().getName();
   String importedofc = tableDesc.getOutputFormat();
   if ((!existingifc.equals(importedifc))
   || (!existingofc.equals(importedofc))) {
 throw new SemanticException(
 ErrorMsg.INCOMPATIBLE_SCHEMA
 .getMsg("Table inputformat/outputformats do not match"));
   }
 {code}
 This only affects tables with text and sequence file formats but not rc or 
 orc.





[jira] [Updated] (HIVE-7472) CLONE - Import fails for tables created with default text, sequence and orc file formats using HCatalog API

2014-07-22 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-7472:
---

Fix Version/s: (was: 0.13.0)

 CLONE - Import fails for tables created with default text, sequence and orc 
 file formats using HCatalog API
 ---

 Key: HIVE-7472
 URL: https://issues.apache.org/jira/browse/HIVE-7472
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.14.0, 0.13.1
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan

 A table was created using the HCatalog API without specifying the file format; 
 it defaults to:
 {code}
 fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
 outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
 {code}
 But, when hive fetches the table from the metastore, it strangely replaces 
 the output format with 
 org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
 and the comparison between source and target table fails.
 The code in org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable 
 does a string comparison of classes and fails.
 {code}
   // check IF/OF/Serde
   String existingifc = table.getInputFormatClass().getName();
   String importedifc = tableDesc.getInputFormat();
   String existingofc = table.getOutputFormatClass().getName();
   String importedofc = tableDesc.getOutputFormat();
   if ((!existingifc.equals(importedifc))
   || (!existingofc.equals(importedofc))) {
 throw new SemanticException(
 ErrorMsg.INCOMPATIBLE_SCHEMA
 .getMsg("Table inputformat/outputformats do not match"));
   }
 {code}
 This only affects tables with text and sequence file formats but not rc or 
 orc.





[jira] [Updated] (HIVE-7472) CLONE - Import fails for tables created with default text, sequence and orc file formats using HCatalog API

2014-07-22 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-7472:
---

Description: 

Cloning HIVE-5550 because HIVE-5550 fixed org.apache.hcatalog.*, and not 
org.apache.hive.hcatalog.* . And that other package needs this change too. And 
with 0.14 pruning of org.apache.hcatalog.*, we miss this patch altogether.




A table was created using the HCatalog API without specifying the file format; it 
defaults to:
{code}
fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
{code}

But, when hive fetches the table from the metastore, it strangely replaces the 
output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
and the comparison between source and target table fails.

The code in org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable 
does a string comparison of classes and fails.
{code}
  // check IF/OF/Serde
  String existingifc = table.getInputFormatClass().getName();
  String importedifc = tableDesc.getInputFormat();
  String existingofc = table.getOutputFormatClass().getName();
  String importedofc = tableDesc.getOutputFormat();
  if ((!existingifc.equals(importedifc))
  || (!existingofc.equals(importedofc))) {
throw new SemanticException(
ErrorMsg.INCOMPATIBLE_SCHEMA
.getMsg("Table inputformat/outputformats do not match"));
  }
{code}

This only affects tables with text and sequence file formats but not rc or orc.

  was:
A table was created using the HCatalog API without specifying the file format; it 
defaults to:
{code}
fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
{code}

But when Hive fetches the table from the metastore, it strangely replaces the 
output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
and the comparison between source and target table fails.

The code in org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable 
does a string comparison of classes and fails.
{code}
  // check IF/OF/Serde
  String existingifc = table.getInputFormatClass().getName();
  String importedifc = tableDesc.getInputFormat();
  String existingofc = table.getOutputFormatClass().getName();
  String importedofc = tableDesc.getOutputFormat();
  if ((!existingifc.equals(importedifc))
  || (!existingofc.equals(importedofc))) {
throw new SemanticException(
ErrorMsg.INCOMPATIBLE_SCHEMA
.getMsg(" Table inputformat/outputformats do not match"));
  }
{code}

This only affects tables with text and sequence file formats but not rc or orc.


 CLONE - Import fails for tables created with default text, sequence and orc 
 file formats using HCatalog API
 ---

 Key: HIVE-7472
 URL: https://issues.apache.org/jira/browse/HIVE-7472
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.14.0, 0.13.1
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan

 Cloning HIVE-5550 because HIVE-5550 fixed org.apache.hcatalog.*, and not 
 org.apache.hive.hcatalog.* . And that other package needs this change too. 
 And with 0.14 pruning of org.apache.hcatalog.*, we miss this patch altogether.
 
 A table was created using the HCatalog API without specifying the file format; 
 it defaults to:
 {code}
 fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
 outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
 {code}
 But when Hive fetches the table from the metastore, it strangely replaces 
 the output format with 
 org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
 and the comparison between source and target table fails.
 The code in org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable 
 does a string comparison of classes and fails.
 {code}
   // check IF/OF/Serde
   String existingifc = table.getInputFormatClass().getName();
   String importedifc = tableDesc.getInputFormat();
   String existingofc = table.getOutputFormatClass().getName();
   String importedofc = tableDesc.getOutputFormat();
   if ((!existingifc.equals(importedifc))
   || (!existingofc.equals(importedofc))) {
 throw new SemanticException(
 ErrorMsg.INCOMPATIBLE_SCHEMA
 .getMsg(" Table inputformat/outputformats do not match"));
   }
 {code}
 This only affects tables with text and sequence file formats but not rc or 
 orc.
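 The check quoted above can be made tolerant of Hive's silent output-format
 substitution by normalizing both class names before comparing. This is an
 illustrative sketch, not the actual HIVE-5550/HIVE-7472 patch; the
 equivalence map and the helper names are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class OutputFormatCheck {
    // Hive rewrites the deprecated IgnoreKeyTextOutputFormat when a table is
    // read back from the metastore, so a plain string comparison of stored vs.
    // imported class names fails. Normalizing both sides with the same
    // substitution makes the comparison tolerant of that rewrite.
    private static final Map<String, String> EQUIVALENT = new HashMap<>();
    static {
        EQUIVALENT.put(
            "org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat",
            "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat");
    }

    static String normalize(String className) {
        return EQUIVALENT.getOrDefault(className, className);
    }

    static boolean formatsMatch(String existing, String imported) {
        return normalize(existing).equals(normalize(imported));
    }

    public static void main(String[] args) {
        // The failing case from the report: the metastore returns the Hive*
        // class while the imported descriptor carries the deprecated one.
        System.out.println(formatsMatch(
            "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
            "org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat"));
    }
}
```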





Re: Review Request 23738: HIVE-5160: HS2 should support .hiverc

2014-07-22 Thread Lefty Leverenz

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23738/#review48405
---

Ship it!


Ship It!

- Lefty Leverenz


On July 22, 2014, 8:24 a.m., Dong Chen wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23738/
 ---
 
 (Updated July 22, 2014, 8:24 a.m.)
 
 
 Review request for hive.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-5160: HS2 should support .hiverc
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/common/cli/HiveFileProcessor.java 
 PRE-CREATION 
   common/src/java/org/apache/hadoop/hive/common/cli/IHiveFileProcessor.java 
 PRE-CREATION 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 593c566 
   conf/hive-default.xml.template 653f5cc 
   service/src/java/org/apache/hive/service/cli/session/HiveSessionBase.java 
 a5c8e9b 
   service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
 7a3286d 
   
 service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
  e79b129 
   service/src/java/org/apache/hive/service/cli/session/SessionManager.java 
 6650c05 
   
 service/src/test/org/apache/hive/service/cli/session/TestSessionGlobalInitFile.java
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23738/diff/
 
 
 Testing
 ---
 
 UT passed.
 
 
 Thanks,
 
 Dong Chen
 




Review Request 23820: HIVE-7445:Improve LOGS for Hive when a query is not able to acquire locks

2014-07-22 Thread Chaoyu Tang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23820/
---

Review request for hive, Brock Noland, Prasad Mujumdar, Szehon Ho, and Xuefu 
Zhang.


Bugs: HIVE-7445
https://issues.apache.org/jira/browse/HIVE-7445


Repository: hive-git


Description
---

This patch enables ZooKeeperHiveLockManager, when in debug mode, to log 
information about contending locks if the lock manager fails to acquire a lock 
for a query. The changes include:
1. Collect and log contending lock information in 
ZookeeperHiveLockManager.java
2. Improve lock retry counting and the existing logging, and modify some qtest 
output files to match the new logging
 


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveLockObject.java 1cc3074 
  
ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
 65b9136 
  ql/src/test/results/clientnegative/insert_into1.q.out a38b679 
  ql/src/test/results/clientnegative/insert_into2.q.out f21823a 
  ql/src/test/results/clientnegative/insert_into3.q.out 5f1581e 
  ql/src/test/results/clientnegative/insert_into4.q.out 5dcdd50 
  ql/src/test/results/clientnegative/lockneg1.q.out 6a76cd7 
  ql/src/test/results/clientnegative/lockneg_try_db_lock_conflict.q.out a9833a8 
  ql/src/test/results/clientnegative/lockneg_try_drop_locked_db.q.out d67365a 
  ql/src/test/results/clientnegative/lockneg_try_lock_db_in_use.q.out 89d3265 

Diff: https://reviews.apache.org/r/23820/diff/


Testing
---

1. Manual tests: following debug logging was printed out as expected during a 
lock acquisition failure.
---
Unable to acquire IMPLICIT, EXCLUSIVE lock default@sample_07 after 2 attempts.
14/07/22 11:44:40 ERROR ZooKeeperHiveLockManager: Unable to acquire IMPLICIT, 
EXCLUSIVE lock default@sample_07 after 2 attempts.
14/07/22 11:44:40 DEBUG ZooKeeperHiveLockManager: Requested lock 
default@sample_07:: mode:IMPLICIT,EXCLUSIVE; query:insert into table sample_07 
select * from sample_08
14/07/22 11:44:40 DEBUG ZooKeeperHiveLockManager: Conflicting lock to 
default@sample_07:: mode:IMPLICIT;query:select code from 
sample_07;queryId:root_20140722084242_98f8d9d7-d110-45c0-8c8b-12da2e5172d9;clientIp:10.20.92.233
---

2. Precommit tests
The test failure testCliDriver_ql_rewrite_gbtoidx is a preexisting one and I 
think it is not related to this change.
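The kind of diagnostic shown in the manual test above can be sketched as
follows. This is illustrative only: the LockInfo fields are hypothetical
stand-ins, not the real HiveLockObject API, and the real patch logs through
the ZooKeeperHiveLockManager logger rather than returning a string:

```java
import java.util.Arrays;
import java.util.List;

public class LockConflictLogger {
    // Hypothetical holder for the per-lock data the lock manager keeps;
    // field names are illustrative.
    static class LockInfo {
        final String object, mode, query, queryId, clientIp;
        LockInfo(String object, String mode, String query,
                 String queryId, String clientIp) {
            this.object = object; this.mode = mode; this.query = query;
            this.queryId = queryId; this.clientIp = clientIp;
        }
    }

    // Mirrors the patch's idea: after the final retry fails, dump the
    // requested lock and every conflicting lock so the blocking query is
    // identifiable from the log alone.
    static String describeFailure(LockInfo requested, List<LockInfo> conflicts,
                                  int attempts) {
        StringBuilder sb = new StringBuilder();
        sb.append("Unable to acquire ").append(requested.mode)
          .append(" lock ").append(requested.object)
          .append(" after ").append(attempts).append(" attempts.\n");
        sb.append("Requested lock ").append(requested.object)
          .append(":: mode:").append(requested.mode)
          .append("; query:").append(requested.query).append('\n');
        for (LockInfo c : conflicts) {
            sb.append("Conflicting lock to ").append(c.object)
              .append(":: mode:").append(c.mode)
              .append(";query:").append(c.query)
              .append(";queryId:").append(c.queryId)
              .append(";clientIp:").append(c.clientIp).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        LockInfo req = new LockInfo("default@sample_07", "IMPLICIT,EXCLUSIVE",
                "insert into table sample_07 select * from sample_08",
                "qid-2", "10.20.92.1");
        LockInfo held = new LockInfo("default@sample_07", "IMPLICIT",
                "select code from sample_07", "qid-1", "10.20.92.233");
        System.out.print(describeFailure(req, Arrays.asList(held), 2));
    }
}
```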


Thanks,

Chaoyu Tang



[jira] [Updated] (HIVE-7445) Improve LOGS for Hive when a query is not able to acquire locks

2014-07-22 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-7445:
--

Attachment: HIVE-7445.1.patch

Hi @Szehon, I made the changes based on your comments (see attached 
HIVE-7445.1.patch) and also posted it in RB 
https://reviews.apache.org/r/23820/. Please review it and let me know if there 
is any problem. Thanks

 Improve LOGS for Hive when a query is not able to acquire locks
 ---

 Key: HIVE-7445
 URL: https://issues.apache.org/jira/browse/HIVE-7445
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability, Logging
Affects Versions: 0.13.1
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-7445.1.patch, HIVE-7445.patch


 Currently the error thrown when you cannot acquire a lock is:
 Error in acquireLocks... 
 FAILED: Error in acquiring locks: Locks on the underlying objects cannot be 
 acquired. retry after some time
 This error does not tell the user what is blocking them, and it is poor from 
 a diagnosability perspective because it is difficult to know which query is 
 blocking the lock acquisition.





[jira] [Created] (HIVE-7473) Null values in DECIMAL columns cause serialization issues with HCatalog

2014-07-22 Thread Craig Condit (JIRA)
Craig Condit created HIVE-7473:
--

 Summary: Null values in DECIMAL columns cause serialization issues 
with HCatalog
 Key: HIVE-7473
 URL: https://issues.apache.org/jira/browse/HIVE-7473
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1
Reporter: Craig Condit


WritableHiveDecimalObjectInspector appears to be missing null checks in 
getPrimitiveWritableObject(Object) and getPrimitiveJavaObject(Object). The same 
checks do exist in WritableHiveVarcharObjectInspector.

Attempting to read from a table in HCatalog containing null values for decimal 
columns results in the following exception (Pig used here):

{noformat}
Error: org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
converting read value to tuple at 
org.apache.hive.hcatalog.pig.HCatBaseLoader.getNext(HCatBaseLoader.java:76) at 
org.apache.hive.hcatalog.pig.HCatLoader.getNext(HCatLoader.java:58) at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
 at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
 at 
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
 at 
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:339) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:415) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157) Caused by: 
java.lang.NullPointerException at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:43)
 at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:26)
 at 
org.apache.hive.hcatalog.data.HCatRecordSerDe.serializePrimitiveField(HCatRecordSerDe.java:269)
 at 
org.apache.hive.hcatalog.data.HCatRecordSerDe.serializeField(HCatRecordSerDe.java:192)
 at org.apache.hive.hcatalog.data.LazyHCatRecord.get(LazyHCatRecord.java:53) at 
org.apache.hive.hcatalog.data.LazyHCatRecord.get(LazyHCatRecord.java:97) at 
org.apache.hive.hcatalog.mapreduce.HCatRecordReader.nextKeyValue(HCatRecordReader.java:204)
 at org.apache.hive.hcatalog.pig.HCatBaseLoader.getNext(HCatBaseLoader.java:63) 
... 13 more
{noformat}
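The fix implied by the report is the same guard WritableHiveVarcharObjectInspector
already has: return null before casting. A minimal self-contained sketch of that
pattern follows; the HiveDecimalWritable stand-in below is simplified (the real
inspector deals in Hive's HiveDecimal type, not BigDecimal):

```java
import java.math.BigDecimal;

public class DecimalInspectorNullCheck {
    // Simplified stand-in for the Hive writable; the real class lives in
    // org.apache.hadoop.hive.serde2.io. Only the null-handling is the point.
    static class HiveDecimalWritable {
        final BigDecimal value;
        HiveDecimalWritable(BigDecimal v) { value = v; }
        BigDecimal getHiveDecimal() { return value; }
    }

    // The reported NPE comes from casting 'o' without a guard. An early
    // return lets null column values pass through as SQL NULL instead of
    // crashing the serialization path.
    static BigDecimal getPrimitiveJavaObject(Object o) {
        if (o == null) {
            return null; // null column value -> SQL NULL, not an NPE
        }
        return ((HiveDecimalWritable) o).getHiveDecimal();
    }

    public static void main(String[] args) {
        System.out.println(getPrimitiveJavaObject(null));
        System.out.println(getPrimitiveJavaObject(
            new HiveDecimalWritable(new BigDecimal("1.25"))));
    }
}
```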





[jira] [Updated] (HIVE-7473) Null values in DECIMAL columns cause serialization issues with HCatalog

2014-07-22 Thread Craig Condit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Condit updated HIVE-7473:
---

Description: 
WritableHiveDecimalObjectInspector appears to be missing null checks in 
getPrimitiveWritableObject(Object) and getPrimitiveJavaObject(Object). The same 
checks do exist in WritableHiveVarcharObjectInspector.

Attempting to read from a table in HCatalog containing null values for decimal 
columns results in the following exception (Pig used here):

{noformat}
Error: org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
converting read value to tuple at
  org.apache.hive.hcatalog.pig.HCatBaseLoader.getNext(HCatBaseLoader.java:76) at
  org.apache.hive.hcatalog.pig.HCatLoader.getNext(HCatLoader.java:58) at
  
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
 at
  
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
 at
  
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
 at
  
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
 at
  org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) at
  org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763) at
  org.apache.hadoop.mapred.MapTask.run(MapTask.java:339) at
  org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at
  java.security.AccessController.doPrivileged(Native Method) at
  javax.security.auth.Subject.doAs(Subject.java:415) at
  
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at
  org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: java.lang.NullPointerException at 
  
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:43)
 at
  
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:26)
 at 
  
org.apache.hive.hcatalog.data.HCatRecordSerDe.serializePrimitiveField(HCatRecordSerDe.java:269)
 at
  
org.apache.hive.hcatalog.data.HCatRecordSerDe.serializeField(HCatRecordSerDe.java:192)
 at
  org.apache.hive.hcatalog.data.LazyHCatRecord.get(LazyHCatRecord.java:53) at
  org.apache.hive.hcatalog.data.LazyHCatRecord.get(LazyHCatRecord.java:97) at
  
org.apache.hive.hcatalog.mapreduce.HCatRecordReader.nextKeyValue(HCatRecordReader.java:204)
 at
  org.apache.hive.hcatalog.pig.HCatBaseLoader.getNext(HCatBaseLoader.java:63)
  ... 13 more
{noformat}

  was:
WritableHiveDecimalObjectInspector appears to be missing null checks in 
getPrimitiveWritableObject(Object) and getPrimitiveJavaObject(Object). The same 
checks do exist in WritableHiveVarcharObjectInspector.

Attempting to read from a table in HCatalog containing null values for decimal 
columns results in the following exception (Pig used here):

{noformat}
Error: org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
converting read value to tuple at 
org.apache.hive.hcatalog.pig.HCatBaseLoader.getNext(HCatBaseLoader.java:76) at 
org.apache.hive.hcatalog.pig.HCatLoader.getNext(HCatLoader.java:58) at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
 at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
 at 
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
 at 
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:339) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:415) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157) Caused by: 
java.lang.NullPointerException at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:43)
 at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:26)
 at 
org.apache.hive.hcatalog.data.HCatRecordSerDe.serializePrimitiveField(HCatRecordSerDe.java:269)
 at 
org.apache.hive.hcatalog.data.HCatRecordSerDe.serializeField(HCatRecordSerDe.java:192)
 at org.apache.hive.hcatalog.data.LazyHCatRecord.get(LazyHCatRecord.java:53) at 
org.apache.hive.hcatalog.data.LazyHCatRecord.get(LazyHCatRecord.java:97) at 

[jira] [Updated] (HIVE-7473) Null values in DECIMAL columns cause serialization issues with HCatalog

2014-07-22 Thread Craig Condit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Condit updated HIVE-7473:
---

Attachment: HIVE-7473.patch

Patch which fixes the issue.

 Null values in DECIMAL columns cause serialization issues with HCatalog
 ---

 Key: HIVE-7473
 URL: https://issues.apache.org/jira/browse/HIVE-7473
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.1
Reporter: Craig Condit
 Attachments: HIVE-7473.patch


 WritableHiveDecimalObjectInspector appears to be missing null checks in 
 getPrimitiveWritableObject(Object) and getPrimitiveJavaObject(Object). The 
 same checks do exist in WritableHiveVarcharObjectInspector.
 Attempting to read from a table in HCatalog containing null values for 
 decimal columns results in the following exception (Pig used here):
 {noformat}
 Error: org.apache.pig.backend.executionengine.ExecException: ERROR 6018: 
 Error converting read value to tuple at
   org.apache.hive.hcatalog.pig.HCatBaseLoader.getNext(HCatBaseLoader.java:76) 
 at
   org.apache.hive.hcatalog.pig.HCatLoader.getNext(HCatLoader.java:58) at
   
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
  at
   
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
  at
   
 org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
  at
   
 org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
  at
   org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) at
   org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763) at
   org.apache.hadoop.mapred.MapTask.run(MapTask.java:339) at
   org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162) at
   java.security.AccessController.doPrivileged(Native Method) at
   javax.security.auth.Subject.doAs(Subject.java:415) at
   
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
  at
   org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
 Caused by: java.lang.NullPointerException at 
   
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:43)
  at
   
 org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveJavaObject(WritableHiveDecimalObjectInspector.java:26)
  at 
   
 org.apache.hive.hcatalog.data.HCatRecordSerDe.serializePrimitiveField(HCatRecordSerDe.java:269)
  at
   
 org.apache.hive.hcatalog.data.HCatRecordSerDe.serializeField(HCatRecordSerDe.java:192)
  at
   org.apache.hive.hcatalog.data.LazyHCatRecord.get(LazyHCatRecord.java:53) at
   org.apache.hive.hcatalog.data.LazyHCatRecord.get(LazyHCatRecord.java:97) at
   
 org.apache.hive.hcatalog.mapreduce.HCatRecordReader.nextKeyValue(HCatRecordReader.java:204)
  at
   org.apache.hive.hcatalog.pig.HCatBaseLoader.getNext(HCatBaseLoader.java:63)
   ... 13 more
 {noformat}





Re: Review Request 23744: HIVE-7451 : pass function name in create/drop function to authorization api

2014-07-22 Thread Thejas Nair


 On July 22, 2014, 5:48 p.m., Jason Dere wrote:
  ql/src/java/org/apache/hadoop/hive/ql/metadata/SessionHiveMetaStoreClient.java,
   line 16
  https://reviews.apache.org/r/23744/diff/1/?file=636932#file636932line16
 
  No changes relevant to patch - whitespace/imports removed. I guess it's 
  not so bad since this seems to be the only such file, I would make more of 
  a stink if there were lots of files like this in the patch.

For some reason I was getting build errors while testing because of the extra 
semicolon here. I don't remember exactly whether the issue only occurred when I 
was trying to debug using Eclipse.


 On July 22, 2014, 5:48 p.m., Jason Dere wrote:
  ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java, 
  line 180
  https://reviews.apache.org/r/23744/diff/1/?file=636933#file636933line180
 
  Temp functions don't actually have an associated database, might be 
  more appropriate to set null DB here?
  
  Default DB used for temp functions in the WriteEntity created in line 
  174, just enable us to check that user has admin privileges for creating 
  temp functions.

Changing the temp function use case to not look up the default db, so this 
database variable will be null in the case of temporary functions.


 On July 22, 2014, 5:48 p.m., Jason Dere wrote:
  ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java,
   line 134
  https://reviews.apache.org/r/23744/diff/1/?file=636935#file636935line134
 
  Should the database name (for metastore functions only, not really 
  applicable for temp functions) be included here as well?

Yes, I think it makes sense to include the dbname if it is not null.


 On July 22, 2014, 5:48 p.m., Jason Dere wrote:
  ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java,
   line 111
  https://reviews.apache.org/r/23744/diff/1/?file=636939#file636939line111
 
  If we ever support execute privileges for UDFS then for that case we 
  would likely want to check the metastore for execute privileges here. Would 
  there be a way to have both kinds of privilege checking behavior here?

We would also need to make changes in metastore to support function execute 
privileges. We can make changes here as well at that time. As this part is 
implementation specific, we can change it when the feature is added.


- Thejas


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23744/#review48353
---


On July 21, 2014, 5:33 p.m., Thejas Nair wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23744/
 ---
 
 (Updated July 21, 2014, 5:33 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7451
 https://issues.apache.org/jira/browse/HIVE-7451
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 see https://issues.apache.org/jira/browse/HIVE-7451
 
 
 Diffs
 -
 
   contrib/src/test/results/clientnegative/case_with_row_sequence.q.out 
 db564ff 
   contrib/src/test/results/clientnegative/invalid_row_sequence.q.out 89646a2 
   contrib/src/test/results/clientnegative/udtf_explode2.q.out 87dc534 
   contrib/src/test/results/clientpositive/dboutput.q.out 909ae2e 
   contrib/src/test/results/clientpositive/lateral_view_explode2.q.out 4b849fa 
   contrib/src/test/results/clientpositive/udaf_example_avg.q.out 3786078 
   contrib/src/test/results/clientpositive/udaf_example_group_concat.q.out 
 83b4802 
   contrib/src/test/results/clientpositive/udaf_example_max.q.out b68ec61 
   contrib/src/test/results/clientpositive/udaf_example_max_n.q.out 62632e3 
   contrib/src/test/results/clientpositive/udaf_example_min.q.out ec3a134 
   contrib/src/test/results/clientpositive/udaf_example_min_n.q.out 2e802e0 
   contrib/src/test/results/clientpositive/udf_example_add.q.out 4510ba4 
   contrib/src/test/results/clientpositive/udf_example_arraymapstruct.q.out 
 1e3bca4 
   contrib/src/test/results/clientpositive/udf_example_format.q.out 83e508a 
   contrib/src/test/results/clientpositive/udf_row_sequence.q.out 3b58cb5 
   contrib/src/test/results/clientpositive/udtf_explode2.q.out 47512c3 
   contrib/src/test/results/clientpositive/udtf_output_on_close.q.out 4ce0481 
   
 itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
  3618185 
   ql/src/java/org/apache/hadoop/hive/ql/Driver.java c89f90c 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 40ec4e5 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/Entity.java 2a38aad 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 26836b6 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/SessionHiveMetaStoreClient.java
  37b1669 
   

Re: Review Request 23744: HIVE-7451 : pass function name in create/drop function to authorization api

2014-07-22 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23744/#review48403
---


- Thejas Nair


On July 21, 2014, 5:33 p.m., Thejas Nair wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23744/
 ---
 
 (Updated July 21, 2014, 5:33 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7451
 https://issues.apache.org/jira/browse/HIVE-7451
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 see https://issues.apache.org/jira/browse/HIVE-7451
 
 
 Diffs
 -
 
   contrib/src/test/results/clientnegative/case_with_row_sequence.q.out 
 db564ff 
   contrib/src/test/results/clientnegative/invalid_row_sequence.q.out 89646a2 
   contrib/src/test/results/clientnegative/udtf_explode2.q.out 87dc534 
   contrib/src/test/results/clientpositive/dboutput.q.out 909ae2e 
   contrib/src/test/results/clientpositive/lateral_view_explode2.q.out 4b849fa 
   contrib/src/test/results/clientpositive/udaf_example_avg.q.out 3786078 
   contrib/src/test/results/clientpositive/udaf_example_group_concat.q.out 
 83b4802 
   contrib/src/test/results/clientpositive/udaf_example_max.q.out b68ec61 
   contrib/src/test/results/clientpositive/udaf_example_max_n.q.out 62632e3 
   contrib/src/test/results/clientpositive/udaf_example_min.q.out ec3a134 
   contrib/src/test/results/clientpositive/udaf_example_min_n.q.out 2e802e0 
   contrib/src/test/results/clientpositive/udf_example_add.q.out 4510ba4 
   contrib/src/test/results/clientpositive/udf_example_arraymapstruct.q.out 
 1e3bca4 
   contrib/src/test/results/clientpositive/udf_example_format.q.out 83e508a 
   contrib/src/test/results/clientpositive/udf_row_sequence.q.out 3b58cb5 
   contrib/src/test/results/clientpositive/udtf_explode2.q.out 47512c3 
   contrib/src/test/results/clientpositive/udtf_output_on_close.q.out 4ce0481 
   
 itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
  3618185 
   ql/src/java/org/apache/hadoop/hive/ql/Driver.java c89f90c 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 40ec4e5 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/Entity.java 2a38aad 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 26836b6 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/SessionHiveMetaStoreClient.java
  37b1669 
   ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java 
 e64ef76 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationUtils.java
  604c39d 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java
  8cdff5b 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java
  1ac6cab 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
  6b635ce 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
  932b980 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
  c472cef 
   ql/src/test/results/clientnegative/authorization_addjar.q.out 68c3c60 
   ql/src/test/results/clientnegative/authorization_addpartition.q.out a14080a 
   ql/src/test/results/clientnegative/authorization_alter_db_owner.q.out 
 928e9f5 
   
 ql/src/test/results/clientnegative/authorization_alter_db_owner_default.q.out 
 d4a617e 
   ql/src/test/results/clientnegative/authorization_compile.q.out cf5e4d1 
   ql/src/test/results/clientnegative/authorization_create_func1.q.out 8863e91 
   ql/src/test/results/clientnegative/authorization_create_func2.q.out 8863e91 
   ql/src/test/results/clientnegative/authorization_create_macro1.q.out 
 e4d410c 
   ql/src/test/results/clientnegative/authorization_createview.q.out 3d0d191 
   ql/src/test/results/clientnegative/authorization_ctas.q.out c9d0130 
   ql/src/test/results/clientnegative/authorization_deletejar.q.out 71b11fd 
   ql/src/test/results/clientnegative/authorization_desc_table_nosel.q.out 
 4583f56 
   ql/src/test/results/clientnegative/authorization_dfs.q.out e95f563 
   ql/src/test/results/clientnegative/authorization_drop_db_cascade.q.out 
 0bf82fc 
   ql/src/test/results/clientnegative/authorization_drop_db_empty.q.out 
 93a3f1c 
   ql/src/test/results/clientnegative/authorization_droppartition.q.out 
 3efabfe 
   ql/src/test/results/clientnegative/authorization_fail_8.q.out 10dd71b 
   ql/src/test/results/clientnegative/authorization_grant_table_allpriv.q.out 
 ab4fd1c 
   ql/src/test/results/clientnegative/authorization_grant_table_fail1.q.out 
 0975a9c 
   
 ql/src/test/results/clientnegative/authorization_grant_table_fail_nogrant.q.out
  8e3d71c 
   

[jira] [Updated] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat

2014-07-22 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HIVE-6584:
---

Attachment: HIVE-6584.12.patch

Patch v12 should fix the two test failures. One comes from changes made in 
HBASE-11335. The other has to do with assumptions around default filesystem 
path that are unrelated to HBase.

 Add HiveHBaseTableSnapshotInputFormat
 -

 Key: HIVE-6584
 URL: https://issues.apache.org/jira/browse/HIVE-6584
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.14.0

 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, 
 HIVE-6584.10.patch, HIVE-6584.11.patch, HIVE-6584.12.patch, 
 HIVE-6584.2.patch, HIVE-6584.3.patch, HIVE-6584.4.patch, HIVE-6584.5.patch, 
 HIVE-6584.6.patch, HIVE-6584.7.patch, HIVE-6584.8.patch, HIVE-6584.9.patch


 HBASE-8369 provided mapreduce support for reading from HBase table snapshots. 
 This allows a MR job to consume a stable, read-only view of an HBase table 
 directly off of HDFS. Bypassing the online region server API provides a nice 
 performance boost for the full scan. HBASE-10642 is backporting that feature 
 to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's 
 available, we should add an input format. A follow-on patch could work out 
 how to integrate this functionality into the StorageHandler, similar to how 
 HIVE-6473 integrates the HFileOutputFormat into existing table definitions.





Review Request 23824: Add HiveHBaseTableSnapshotInputFormat

2014-07-22 Thread nick dimiduk

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23824/
---

Review request for hive, Ashutosh Chauhan, Navis Ryu, Sushanth Sowmyan, and 
Swarnim Kulkarni.


Bugs: HIVE-6584
https://issues.apache.org/jira/browse/HIVE-6584


Repository: hive-git


Description
---

HBASE-8369 provided mapreduce support for reading from HBase table snapshots. 
This allows a MR job to consume a stable, read-only view of an HBase table 
directly off of HDFS. Bypassing the online region server API provides a nice 
performance boost for the full scan. HBASE-10642 is backporting that feature to 
0.94/0.96 and also adding a mapred implementation. Once that's available, we 
should add an input format. A follow-on patch could work out how to integrate 
this functionality into the StorageHandler, similar to how HIVE-6473 integrates 
the HFileOutputFormat into existing table definitions.

See JIRA for further conversation.


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 593c566 
  conf/hive-default.xml.template ba922d0 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSplit.java 998c15c 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
dbf5e51 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseInputFormatUtil.java
 PRE-CREATION 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
 1032cc9 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableSnapshotInputFormat.java
 PRE-CREATION 
  hbase-handler/src/test/queries/positive/hbase_handler_snapshot.q PRE-CREATION 
  hbase-handler/src/test/results/positive/external_table_ppd.q.out 6f1adf4 
  hbase-handler/src/test/results/positive/hbase_binary_storage_queries.q.out 
b92db11 
  hbase-handler/src/test/results/positive/hbase_handler_snapshot.q.out 
PRE-CREATION 
  hbase-handler/src/test/templates/TestHBaseCliDriver.vm 01d596a 
  itests/util/src/main/java/org/apache/hadoop/hive/hbase/HBaseQTestUtil.java 
96a0de2 
  itests/util/src/main/java/org/apache/hadoop/hive/hbase/HBaseTestSetup.java 
cdc0a65 
  itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 2fefa06 
  pom.xml b5a5697 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java c80a2a3 

Diff: https://reviews.apache.org/r/23824/diff/


Testing
---

Unit tests, local-mode testing, pseudo-distributed mode testing, and tested on 
a small distributed cluster. Tests included hbase versions 0.98.3 and the HEAD 
of 0.98 branch.


Thanks,

nick dimiduk



[jira] [Assigned] (HIVE-7474) Expression Converter needs to handle Flattened expressions

2014-07-22 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran reassigned HIVE-7474:


Assignee: Laljo John Pullokkaran

 Expression Converter needs to handle Flattened expressions
 --

 Key: HIVE-7474
 URL: https://issues.apache.org/jira/browse/HIVE-7474
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: Laljo John Pullokkaran





--


[jira] [Created] (HIVE-7474) Expression Converter needs to handle Flattened expressions

2014-07-22 Thread Laljo John Pullokkaran (JIRA)
Laljo John Pullokkaran created HIVE-7474:


 Summary: Expression Converter needs to handle Flattened expressions
 Key: HIVE-7474
 URL: https://issues.apache.org/jira/browse/HIVE-7474
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran






--


Re: Review Request 23744: HIVE-7451 : pass function name in create/drop function to authorization api

2014-07-22 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23744/
---

(Updated July 22, 2014, 9:41 p.m.)


Review request for hive.


Changes
---

HIVE-7451.3.patch - rebased on trunk and addressing review comments


Bugs: HIVE-7451
https://issues.apache.org/jira/browse/HIVE-7451


Repository: hive-git


Description
---

see https://issues.apache.org/jira/browse/HIVE-7451


Diffs (updated)
-

  contrib/src/test/results/clientnegative/case_with_row_sequence.q.out db564ff 
  contrib/src/test/results/clientnegative/invalid_row_sequence.q.out 89646a2 
  contrib/src/test/results/clientnegative/udtf_explode2.q.out 87dc534 
  contrib/src/test/results/clientpositive/dboutput.q.out 909ae2e 
  contrib/src/test/results/clientpositive/lateral_view_explode2.q.out 4b849fa 
  contrib/src/test/results/clientpositive/udaf_example_avg.q.out 3786078 
  contrib/src/test/results/clientpositive/udaf_example_group_concat.q.out 
83b4802 
  contrib/src/test/results/clientpositive/udaf_example_max.q.out b68ec61 
  contrib/src/test/results/clientpositive/udaf_example_max_n.q.out 62632e3 
  contrib/src/test/results/clientpositive/udaf_example_min.q.out ec3a134 
  contrib/src/test/results/clientpositive/udaf_example_min_n.q.out 2e802e0 
  contrib/src/test/results/clientpositive/udf_example_add.q.out 4510ba4 
  contrib/src/test/results/clientpositive/udf_example_arraymapstruct.q.out 
1e3bca4 
  contrib/src/test/results/clientpositive/udf_example_format.q.out 83e508a 
  contrib/src/test/results/clientpositive/udf_row_sequence.q.out 3b58cb5 
  contrib/src/test/results/clientpositive/udtf_explode2.q.out 47512c3 
  contrib/src/test/results/clientpositive/udtf_output_on_close.q.out 4ce0481 
  
itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
 3618185 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java c89f90c 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java c80a2a3 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/Entity.java 2a38aad 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 26836b6 
  
ql/src/java/org/apache/hadoop/hive/ql/metadata/SessionHiveMetaStoreClient.java 
d258bc6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java 
e64ef76 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationUtils.java
 e86442a 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java
 912be6b 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveV1Authorizer.java
 60c9f14 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java
 1ac6cab 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
 f1220d7 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
 a16f42a 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
 c472cef 
  ql/src/test/results/clientnegative/authorization_addjar.q.out 68c3c60 
  ql/src/test/results/clientnegative/authorization_addpartition.q.out a14080a 
  ql/src/test/results/clientnegative/authorization_alter_db_owner.q.out 928e9f5 
  ql/src/test/results/clientnegative/authorization_alter_db_owner_default.q.out 
d4a617e 
  ql/src/test/results/clientnegative/authorization_compile.q.out cf5e4d1 
  ql/src/test/results/clientnegative/authorization_create_func1.q.out 8863e91 
  ql/src/test/results/clientnegative/authorization_create_func2.q.out 8863e91 
  ql/src/test/results/clientnegative/authorization_create_macro1.q.out e4d410c 
  ql/src/test/results/clientnegative/authorization_createview.q.out 3d0d191 
  ql/src/test/results/clientnegative/authorization_ctas.q.out c9d0130 
  ql/src/test/results/clientnegative/authorization_deletejar.q.out 71b11fd 
  ql/src/test/results/clientnegative/authorization_desc_table_nosel.q.out 
4583f56 
  ql/src/test/results/clientnegative/authorization_dfs.q.out e95f563 
  ql/src/test/results/clientnegative/authorization_drop_db_cascade.q.out 
0bf82fc 
  ql/src/test/results/clientnegative/authorization_drop_db_empty.q.out 93a3f1c 
  ql/src/test/results/clientnegative/authorization_droppartition.q.out 3efabfe 
  ql/src/test/results/clientnegative/authorization_fail_8.q.out 9918801 
  ql/src/test/results/clientnegative/authorization_grant_table_allpriv.q.out 
ab4fd1c 
  ql/src/test/results/clientnegative/authorization_grant_table_fail1.q.out 
0975a9c 
  
ql/src/test/results/clientnegative/authorization_grant_table_fail_nogrant.q.out 
8e3d71c 
  ql/src/test/results/clientnegative/authorization_insert_noinspriv.q.out 
332d8a4 
  ql/src/test/results/clientnegative/authorization_insert_noselectpriv.q.out 
1423e75 
  

[jira] [Updated] (HIVE-7451) pass function name in create/drop function to authorization api

2014-07-22 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7451:


Attachment: HIVE-7451.3.patch

HIVE-7451.3.patch - rebased on trunk and addressing review comments

 pass function name in create/drop function to authorization api
 ---

 Key: HIVE-7451
 URL: https://issues.apache.org/jira/browse/HIVE-7451
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7451.1.patch, HIVE-7451.2.patch, HIVE-7451.3.patch


 If function names are passed to the authorization api for create/drop 
 function calls, then authorization decisions can be made based on the 
 function names as well.



--


[jira] [Updated] (HIVE-7468) UDF translation needs to use Hive UDF name

2014-07-22 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran updated HIVE-7468:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 UDF translation needs to use Hive UDF name
 --

 Key: HIVE-7468
 URL: https://issues.apache.org/jira/browse/HIVE-7468
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: Laljo John Pullokkaran
 Attachments: HIVE-7468.patch






--


[jira] [Updated] (HIVE-7474) Expression Converter needs to handle Flattened expressions

2014-07-22 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran updated HIVE-7474:
-

Attachment: HIVE-7474.patch

 Expression Converter needs to handle Flattened expressions
 --

 Key: HIVE-7474
 URL: https://issues.apache.org/jira/browse/HIVE-7474
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: Laljo John Pullokkaran
 Attachments: HIVE-7474.patch






--


[jira] [Created] (HIVE-7475) Beeline requires newline at the end of each query in a file

2014-07-22 Thread thomas norden (JIRA)
thomas norden created HIVE-7475:
---

 Summary: Beeline requires newline at the end of each query in a 
file
 Key: HIVE-7475
 URL: https://issues.apache.org/jira/browse/HIVE-7475
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
Reporter: thomas norden
Priority: Trivial


When using the -f option in Beeline, a newline is required at the end of each 
query; otherwise the connection is closed before the query is run.

{code}
$ cat test.hql
show databases;%
$ beeline -u jdbc:hive2://localhost:1 --incremental=true -f test.hql
scan complete in 3ms
Connecting to jdbc:hive2://localhost:1
Connected to: Apache Hive (version 0.13.1)
Driver: Hive JDBC (version 0.13.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 0.13.1 by Apache Hive
0: jdbc:hive2://localhost:1 show databases;Closing: 0: 
jdbc:hive2://localhost:1
{code}
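Until this is fixed, a simple workaround is to make sure the script ends with a newline before handing it to Beeline. A minimal sketch, assuming the `test.hql` file from the report above:

```shell
# Reproduce the report's file (no trailing newline), then append a newline
# only if the last byte is not already one, so `beeline -f` sees a
# terminated final query.
printf 'show databases;' > test.hql
if [ -n "$(tail -c 1 test.hql)" ]; then
  echo >> test.hql
fi
```

`tail -c 1` prints the file's last byte; command substitution strips a trailing newline, so the check appends only when the newline is missing.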




--


[jira] [Created] (HIVE-7476) CTAS does not work properly for s3

2014-07-22 Thread Jian Fang (JIRA)
Jian Fang created HIVE-7476:
---

 Summary: CTAS does not work properly for s3
 Key: HIVE-7476
 URL: https://issues.apache.org/jira/browse/HIVE-7476
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1
 Environment: Linux
Reporter: Jian Fang


When we use CTAS to create a new table in s3, the table location is not set 
correctly. As a result, the data from the existing table cannot be inserted 
into the newly created table.

We can use the following example to reproduce this issue.

set hive.metastore.warehouse.dir=${OUTPUT};
drop table s3_dir_test;
drop table s3_1;
drop table s3_2;
create external table s3_dir_test(strct struct<a:int, b:string, c:string>)
row format delimited
fields terminated by '\t'
collection items terminated by ' '
location '${INPUT}';
create table s3_1(strct struct<a:int, b:string, c:string>)
row format delimited
fields terminated by '\t'
collection items terminated by ' ';
insert overwrite table s3_1 select * from s3_dir_test;
select * from s3_1;
create table s3_2 as select * from s3_1;
select * from s3_1;
select * from s3_2;

The data could be as follows.

1 abc 10.5
2 def 11.5
3 ajss 90.23232
4 djns 89.02002
5 random 2.99
6 data 3.002
7 ne 71.9084

The root cause is that the SemanticAnalyzer class did not handle s3 location 
properly for CTAS.

A patch will be provided shortly.
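The gist of the issue can be sketched as follows. This is a hedged illustration in Python, not the actual SemanticAnalyzer code; `qualify_location` is a hypothetical helper. The CTAS target location must be derived from the fully qualified warehouse URI so that a non-default scheme such as s3 is preserved, instead of resolving the table path against the default (HDFS) filesystem:

```python
from urllib.parse import urlparse

def qualify_location(warehouse_dir: str, table_name: str) -> str:
    """Derive a CTAS target location that keeps the warehouse URI's scheme.

    If the path were resolved against the default filesystem instead, an
    s3:// warehouse would yield an hdfs:// table location -- the symptom
    described above.
    """
    parsed = urlparse(warehouse_dir)
    if not parsed.scheme:
        raise ValueError("warehouse dir must be a fully qualified URI")
    return warehouse_dir.rstrip("/") + "/" + table_name

print(qualify_location("s3://bucket/warehouse", "s3_2"))
```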



--


Re: Review Request 23738: HIVE-5160: HS2 should support .hiverc

2014-07-22 Thread Szehon Ho

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23738/#review48439
---

Ship it!


Thanks Dong!

- Szehon Ho


On July 22, 2014, 8:24 a.m., Dong Chen wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23738/
 ---
 
 (Updated July 22, 2014, 8:24 a.m.)
 
 
 Review request for hive.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-5160: HS2 should support .hiverc
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/common/cli/HiveFileProcessor.java 
 PRE-CREATION 
   common/src/java/org/apache/hadoop/hive/common/cli/IHiveFileProcessor.java 
 PRE-CREATION 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 593c566 
   conf/hive-default.xml.template 653f5cc 
   service/src/java/org/apache/hive/service/cli/session/HiveSessionBase.java 
 a5c8e9b 
   service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
 7a3286d 
   
 service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
  e79b129 
   service/src/java/org/apache/hive/service/cli/session/SessionManager.java 
 6650c05 
   
 service/src/test/org/apache/hive/service/cli/session/TestSessionGlobalInitFile.java
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23738/diff/
 
 
 Testing
 ---
 
 UT passed.
 
 
 Thanks,
 
 Dong Chen
 




[jira] [Updated] (HIVE-7476) CTAS does not work properly for s3

2014-07-22 Thread Jian Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Fang updated HIVE-7476:


Description: 
When we use CTAS to create a new table in s3, the table location is not set 
correctly. As a result, the data from the existing table cannot be inserted 
into the newly created table.

We can use the following example to reproduce this issue.

set hive.metastore.warehouse.dir=OUTPUT_PATH;
drop table s3_dir_test;
drop table s3_1;
drop table s3_2;
create external table s3_dir_test(strct struct<a:int, b:string, c:string>)
row format delimited
fields terminated by '\t'
collection items terminated by ' '
location 'INPUT_PATH';
create table s3_1(strct struct<a:int, b:string, c:string>)
row format delimited
fields terminated by '\t'
collection items terminated by ' ';
insert overwrite table s3_1 select * from s3_dir_test;
select * from s3_1;
create table s3_2 as select * from s3_1;
select * from s3_1;
select * from s3_2;

The data could be as follows.

1 abc 10.5
2 def 11.5
3 ajss 90.23232
4 djns 89.02002
5 random 2.99
6 data 3.002
7 ne 71.9084

The root cause is that the SemanticAnalyzer class did not handle s3 location 
properly for CTAS.

A patch will be provided shortly.

  was:
When we use CTAS to create a new table in s3, the table location is not set 
correctly. As a result, the data from the existing table cannot be inserted 
into the newly created table.

We can use the following example to reproduce this issue.

set hive.metastore.warehouse.dir=${OUTPUT};
drop table s3_dir_test;
drop table s3_1;
drop table s3_2;
create external table s3_dir_test(strct struct<a:int, b:string, c:string>)
row format delimited
fields terminated by '\t'
collection items terminated by ' '
location '${INPUT}';
create table s3_1(strct struct<a:int, b:string, c:string>)
row format delimited
fields terminated by '\t'
collection items terminated by ' ';
insert overwrite table s3_1 select * from s3_dir_test;
select * from s3_1;
create table s3_2 as select * from s3_1;
select * from s3_1;
select * from s3_2;

The data could be as follows.

1 abc 10.5
2 def 11.5
3 ajss 90.23232
4 djns 89.02002
5 random 2.99
6 data 3.002
7 ne 71.9084

The root cause is that the SemanticAnalyzer class did not handle s3 location 
properly for CTAS.

A patch will be provided shortly.


 CTAS does not work properly for s3
 --

 Key: HIVE-7476
 URL: https://issues.apache.org/jira/browse/HIVE-7476
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1
 Environment: Linux
Reporter: Jian Fang

 When we use CTAS to create a new table in s3, the table location is not set 
 correctly. As a result, the data from the existing table cannot be inserted 
 into the newly created table.
 We can use the following example to reproduce this issue.
 set hive.metastore.warehouse.dir=OUTPUT_PATH;
 drop table s3_dir_test;
 drop table s3_1;
 drop table s3_2;
 create external table s3_dir_test(strct struct<a:int, b:string, c:string>)
 row format delimited
 fields terminated by '\t'
 collection items terminated by ' '
 location 'INPUT_PATH';
 create table s3_1(strct struct<a:int, b:string, c:string>)
 row format delimited
 fields terminated by '\t'
 collection items terminated by ' ';
 insert overwrite table s3_1 select * from s3_dir_test;
 select * from s3_1;
 create table s3_2 as select * from s3_1;
 select * from s3_1;
 select * from s3_2;
 The data could be as follows.
 1 abc 10.5
 2 def 11.5
 3 ajss 90.23232
 4 djns 89.02002
 5 random 2.99
 6 data 3.002
 7 ne 71.9084
 The root cause is that the SemanticAnalyzer class did not handle s3 location 
 properly for CTAS.
 A patch will be provided shortly.



--


[jira] [Commented] (HIVE-5160) HS2 should support .hiverc

2014-07-22 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14071020#comment-14071020
 ] 

Szehon Ho commented on HIVE-5160:
-

+1 pending tests, thanks for making these changes.  It seems Lefty also 
approved on the review-board.

[~brocknoland] can we add Dong Chen to contributor list so that he can get the 
credit?

 HS2 should support .hiverc
 --

 Key: HIVE-5160
 URL: https://issues.apache.org/jira/browse/HIVE-5160
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Thejas M Nair
 Attachments: HIVE-5160.1.patch, HIVE-5160.patch


 It would be useful to support the .hiverc functionality with hive server2 as 
 well.
 .hiverc is processed by CliDriver, so it works only with hive cli. It would 
 be useful to be able to do things like register a standard set of jars and 
 functions for all users. 



--


[jira] [Updated] (HIVE-7096) Support grouped splits in Tez partitioned broadcast join

2014-07-22 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-7096:
-

Status: Patch Available  (was: Open)

 Support grouped splits in Tez partitioned broadcast join
 

 Key: HIVE-7096
 URL: https://issues.apache.org/jira/browse/HIVE-7096
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Vikram Dixit K
 Attachments: HIVE-7096.1.patch


 Same checks for schema + deser + file format done in HiveSplitGenerator need 
 to be done in the CustomPartitionVertex.



--


[jira] [Updated] (HIVE-7096) Support grouped splits in Tez partitioned broadcast join

2014-07-22 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-7096:
-

Attachment: HIVE-7096.1.patch

 Support grouped splits in Tez partitioned broadcast join
 

 Key: HIVE-7096
 URL: https://issues.apache.org/jira/browse/HIVE-7096
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Vikram Dixit K
 Attachments: HIVE-7096.1.patch


 Same checks for schema + deser + file format done in HiveSplitGenerator need 
 to be done in the CustomPartitionVertex.



--


[jira] [Created] (HIVE-7477) Upgrade hive to use tez 0.4.1

2014-07-22 Thread Vikram Dixit K (JIRA)
Vikram Dixit K created HIVE-7477:


 Summary: Upgrade hive to use tez 0.4.1
 Key: HIVE-7477
 URL: https://issues.apache.org/jira/browse/HIVE-7477
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K


Tez has released 0.4.1 that has bug fixes we need.



--


[jira] [Updated] (HIVE-7477) Upgrade hive to use tez 0.4.1

2014-07-22 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-7477:
-

Status: Patch Available  (was: Open)

 Upgrade hive to use tez 0.4.1
 -

 Key: HIVE-7477
 URL: https://issues.apache.org/jira/browse/HIVE-7477
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-7477.1.patch


 Tez has released 0.4.1 that has bug fixes we need.



--


[jira] [Commented] (HIVE-7445) Improve LOGS for Hive when a query is not able to acquire locks

2014-07-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14071028#comment-14071028
 ] 

Hive QA commented on HIVE-7445:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12657166/HIVE-7445.1.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5736 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_8
org.apache.hive.jdbc.TestJdbcDriver2.testParentReferences
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/8/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/8/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-8/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12657166

 Improve LOGS for Hive when a query is not able to acquire locks
 ---

 Key: HIVE-7445
 URL: https://issues.apache.org/jira/browse/HIVE-7445
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability, Logging
Affects Versions: 0.13.1
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-7445.1.patch, HIVE-7445.patch


 Currently the error thrown when you cannot acquire a lock is:
 Error in acquireLocks... 
 FAILED: Error in acquiring locks: Locks on the underlying objects cannot be 
 acquired. retry after some time
 This error is insufficient if the user would like to understand what is 
 blocking them and insufficient from a diagnosability perspective because it 
 is difficult to know what query is blocking the lock acquisition.



--


[jira] [Updated] (HIVE-7477) Upgrade hive to use tez 0.4.1

2014-07-22 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-7477:
-

Attachment: HIVE-7477.1.patch

 Upgrade hive to use tez 0.4.1
 -

 Key: HIVE-7477
 URL: https://issues.apache.org/jira/browse/HIVE-7477
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-7477.1.patch


 Tez has released 0.4.1 that has bug fixes we need.



--


[jira] [Updated] (HIVE-7474) Expression Converter needs to handle Flattened expressions

2014-07-22 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-7474:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch. Thanks [~jpullokkaran]!

 Expression Converter needs to handle Flattened expressions
 --

 Key: HIVE-7474
 URL: https://issues.apache.org/jira/browse/HIVE-7474
 Project: Hive
  Issue Type: Sub-task
Reporter: Laljo John Pullokkaran
Assignee: Laljo John Pullokkaran
 Attachments: HIVE-7474.patch






--


Re: Review Request 23820: HIVE-7445:Improve LOGS for Hive when a query is not able to acquire locks

2014-07-22 Thread Szehon Ho

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23820/#review48441
---


Hey Chaoyu, thanks, the logic looks good now.  Can you also fix all the 
red-space?  It's two spaces per indent in Hive, thanks.


ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
https://reviews.apache.org/r/23820/#comment85045

Why was this necessary to change?


- Szehon Ho


On July 22, 2014, 7:21 p.m., Chaoyu Tang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23820/
 ---
 
 (Updated July 22, 2014, 7:21 p.m.)
 
 
 Review request for hive, Brock Noland, Prasad Mujumdar, Szehon Ho, and Xuefu 
 Zhang.
 
 
 Bugs: HIVE-7445
 https://issues.apache.org/jira/browse/HIVE-7445
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 This patch enables ZookeeperHiveLockManager.java, when in debug mode, to log 
 out information about contentious locks if lockmgr fails to acquire a lock 
 for a query. The changes include:
 1. Collect and log out contentious lock information in 
 ZookeeperHiveLockManager.java
 2. Improve lock retry counting and existing logging, and modified some qtests 
 output files to match these logging
  
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveLockObject.java 1cc3074 
   
 ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
  65b9136 
   ql/src/test/results/clientnegative/insert_into1.q.out a38b679 
   ql/src/test/results/clientnegative/insert_into2.q.out f21823a 
   ql/src/test/results/clientnegative/insert_into3.q.out 5f1581e 
   ql/src/test/results/clientnegative/insert_into4.q.out 5dcdd50 
   ql/src/test/results/clientnegative/lockneg1.q.out 6a76cd7 
   ql/src/test/results/clientnegative/lockneg_try_db_lock_conflict.q.out 
 a9833a8 
   ql/src/test/results/clientnegative/lockneg_try_drop_locked_db.q.out d67365a 
   ql/src/test/results/clientnegative/lockneg_try_lock_db_in_use.q.out 89d3265 
 
 Diff: https://reviews.apache.org/r/23820/diff/
 
 
 Testing
 ---
 
 1. Manual tests: following debug logging was printed out as expected during a 
 lock acquisition failure.
 ---
 Unable to acquire IMPLICIT, EXCLUSIVE lock default@sample_07 after 2 attempts.
 14/07/22 11:44:40 ERROR ZooKeeperHiveLockManager: Unable to acquire IMPLICIT, 
 EXCLUSIVE lock default@sample_07 after 2 attempts.
 14/07/22 11:44:40 DEBUG ZooKeeperHiveLockManager: Requested lock 
 default@sample_07:: mode:IMPLICIT,EXCLUSIVE; query:insert into table 
 sample_07 select * from sample_08
 14/07/22 11:44:40 DEBUG ZooKeeperHiveLockManager: Conflicting lock to 
 default@sample_07:: mode:IMPLICIT;query:select code from 
 sample_07;queryId:root_20140722084242_98f8d9d7-d110-45c0-8c8b-12da2e5172d9;clientIp:10.20.92.233
 ---
 
 2. Precommit tests
 The test failure testCliDriver_ql_rewrite_gbtoidx is a preexisting one and I 
 think it is not related to this change.
 
 
 Thanks,
 
 Chaoyu Tang
 




[jira] [Updated] (HIVE-7476) CTAS does not work properly for s3

2014-07-22 Thread Jian Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Fang updated HIVE-7476:


Attachment: HIVE-7476.patch

Patch for HIVE-7476

 CTAS does not work properly for s3
 --

 Key: HIVE-7476
 URL: https://issues.apache.org/jira/browse/HIVE-7476
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1
 Environment: Linux
Reporter: Jian Fang
 Attachments: HIVE-7476.patch


 When we use CTAS to create a new table in s3, the table location is not set 
 correctly. As a result, the data from the existing table cannot be inserted 
 into the newly created table.
 We can use the following example to reproduce this issue.
 set hive.metastore.warehouse.dir=OUTPUT_PATH;
 drop table s3_dir_test;
 drop table s3_1;
 drop table s3_2;
 create external table s3_dir_test(strct struct<a:int, b:string, c:string>)
 row format delimited
 fields terminated by '\t'
 collection items terminated by ' '
 location 'INPUT_PATH';
 create table s3_1(strct struct<a:int, b:string, c:string>)
 row format delimited
 fields terminated by '\t'
 collection items terminated by ' ';
 insert overwrite table s3_1 select * from s3_dir_test;
 select * from s3_1;
 create table s3_2 as select * from s3_1;
 select * from s3_1;
 select * from s3_2;
 The data could be as follows.
 1 abc 10.5
 2 def 11.5
 3 ajss 90.23232
 4 djns 89.02002
 5 random 2.99
 6 data 3.002
 7 ne 71.9084
 The root cause is that the SemanticAnalyzer class did not handle s3 location 
 properly for CTAS.
 A patch will be provided shortly.



--


[jira] [Commented] (HIVE-7477) Upgrade hive to use tez 0.4.1

2014-07-22 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14071066#comment-14071066
 ] 

Ashutosh Chauhan commented on HIVE-7477:


+1. With this change the random failure of dynpart_sort_optimization.q in Hive 
QA should go away.

 Upgrade hive to use tez 0.4.1
 -

 Key: HIVE-7477
 URL: https://issues.apache.org/jira/browse/HIVE-7477
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-7477.1.patch


 Tez has released 0.4.1 that has bug fixes we need.



--


[jira] [Updated] (HIVE-7476) CTAS does not work properly for s3

2014-07-22 Thread Jian Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Fang updated HIVE-7476:


Status: Patch Available  (was: Open)

Patch SemanticAnalyzer so that CTAS works properly for s3.

 CTAS does not work properly for s3
 --

 Key: HIVE-7476
 URL: https://issues.apache.org/jira/browse/HIVE-7476
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1
 Environment: Linux
Reporter: Jian Fang
 Attachments: HIVE-7476.patch


 When we use CTAS to create a new table in s3, the table location is not set 
 correctly. As a result, the data from the existing table cannot be inserted 
 into the newly created table.
 We can use the following example to reproduce this issue.
 set hive.metastore.warehouse.dir=OUTPUT_PATH;
 drop table s3_dir_test;
 drop table s3_1;
 drop table s3_2;
 create external table s3_dir_test(strct struct<a:int, b:string, c:string>)
 row format delimited
 fields terminated by '\t'
 collection items terminated by ' '
 location 'INPUT_PATH';
 create table s3_1(strct struct<a:int, b:string, c:string>)
 row format delimited
 fields terminated by '\t'
 collection items terminated by ' ';
 insert overwrite table s3_1 select * from s3_dir_test;
 select * from s3_1;
 create table s3_2 as select * from s3_1;
 select * from s3_1;
 select * from s3_2;
 The data could be as follows.
 1 abc 10.5
 2 def 11.5
 3 ajss 90.23232
 4 djns 89.02002
 5 random 2.99
 6 data 3.002
 7 ne 71.9084
 The root cause is that the SemanticAnalyzer class does not handle s3 locations 
 properly for CTAS.
 A patch will be provided shortly.
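
 The bug pattern above can be illustrated with a small sketch (not the actual 
 Hive code; class and method names here are hypothetical). When the warehouse 
 directory lives on s3, the CTAS target location must be resolved against the 
 warehouse URI so its scheme and bucket are preserved, rather than falling 
 back to the default (hdfs) filesystem:

 ```java
 import java.net.URI;

 public class CtasLocationSketch {
     // Hypothetical helper: resolve a CTAS target directory under the
     // configured warehouse URI. URI.resolve keeps the base URI's scheme
     // and authority (e.g. s3://bucket), which is the behavior the new
     // table's location needs.
     static String resolveTableLocation(String warehouseDir, String tableName) {
         String base = warehouseDir.endsWith("/") ? warehouseDir : warehouseDir + "/";
         return URI.create(base).resolve(tableName).toString();
     }

     public static void main(String[] args) {
         // prints s3://bucket/warehouse/s3_2
         System.out.println(resolveTableLocation("s3://bucket/warehouse", "s3_2"));
     }
 }
 ```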



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Subject: Error running unit tests from eclipse (weird classpath issue)

2014-07-22 Thread Pavel Chadnov
Hey Guys,


I'm trying to run Hive unit tests in Eclipse and have a few failures. One of
the interesting ones throws the exception shown below when run from Eclipse,
but passes fine from the console.


java.lang.IncompatibleClassChangeError: Implementing class

at java.lang.ClassLoader.defineClass1(Native Method)

at java.lang.ClassLoader.defineClass(ClassLoader.java:800)

at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)

at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)

at java.net.URLClassLoader.access$100(URLClassLoader.java:71)

...

...

at java.lang.Class.forName(Class.java:190)

at org.apache.hadoop.hive.shims.ShimLoader.createShim(ShimLoader.java:120)

at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:115)

at
org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:80)

at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:254)

at org.apache.hadoop.hive.ql.exec.Utilities.getPlanPath(Utilities.java:652)

at org.apache.hadoop.hive.ql.exec.Utilities.setPlanPath(Utilities.java:641)

at org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:584)

at org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:575)

at
org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:568)

at
org.apache.hadoop.hive.ql.io.TestSymlinkTextInputFormat.setUp(TestSymlinkTextInputFormat.java:84)

at junit.framework.TestCase.runBare(TestCase.java:132)


I tried manually adding the hadoop-shims project to the classpath, but no
luck. Would really appreciate any help here.


Thanks,

Pavel

-- 
Regards,
Pavel Chadnov
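
An IncompatibleClassChangeError like the one above usually means a class was
compiled against one version of an interface or superclass but a binary-
incompatible version is on the runtime classpath (a common symptom of mixed
Hadoop jars in an Eclipse launch configuration). A quick diagnostic, sketched
below, is to ask the JVM where it actually loaded the suspect class from; the
class names used here are just examples:

```java
import java.security.CodeSource;

public class WhereFrom {
    // Returns the jar or classes directory a class was loaded from,
    // or null for bootstrap classes (e.g. java.lang.String).
    static String codeSourceOf(Class<?> c) {
        CodeSource cs = c.getProtectionDomain().getCodeSource();
        return cs == null ? null : cs.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // In the failing setup one would pass the Hadoop class named in the
        // stack trace, e.g. Class.forName("org.apache.hadoop.mapred.JobConf"),
        // and compare the jar reported in Eclipse vs. the console build.
        System.out.println(codeSourceOf(String.class));
    }
}
```

If Eclipse and the console print different jars for the same class, the stale
entry in the Eclipse build path is the likely culprit.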


[jira] [Commented] (HIVE-5160) HS2 should support .hiverc

2014-07-22 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14071073#comment-14071073
 ] 

Lefty Leverenz commented on HIVE-5160:
--

Yes, +1 as far as docs go.  (I'm not qualified to judge the code, but pushing 
the Ship It button was fun.)

 HS2 should support .hiverc
 --

 Key: HIVE-5160
 URL: https://issues.apache.org/jira/browse/HIVE-5160
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Thejas M Nair
 Attachments: HIVE-5160.1.patch, HIVE-5160.patch


 It would be useful to support the .hiverc functionality with hive server2 as 
 well.
 .hiverc is processed by CliDriver, so it works only with hive cli. It would 
 be useful to be able to do things like register a standard set of jars and 
 functions for all users. 
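
 For illustration, CliDriver's handling of .hiverc amounts to reading the file 
 and running each ';'-terminated statement at session start; HS2 would need the 
 equivalent per-session hook. A much-simplified sketch of that statement 
 splitting (real Hive parsing also handles quoting, comments, and escapes):

 ```java
 import java.util.ArrayList;
 import java.util.List;

 public class InitScriptSketch {
     // Simplified: split an init script (like .hiverc) into statements on ';'.
     static List<String> splitStatements(String script) {
         List<String> stmts = new ArrayList<>();
         for (String part : script.split(";")) {
             String trimmed = part.trim();
             if (!trimmed.isEmpty()) {
                 stmts.add(trimmed);  // each entry is executed in order
             }
         }
         return stmts;
     }
 }
 ```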



--
This message was sent by Atlassian JIRA
(v6.2#6252)


