[jira] [Updated] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-21 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9327:
--
Attachment: HIVE-9327.04.patch

 CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
 ---

 Key: HIVE-9327
 URL: https://issues.apache.org/jira/browse/HIVE-9327
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0

 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, 
 HIVE-9327.03.patch, HIVE-9327.04.patch, HIVE-9327.patch


 ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
 would be ideal to remove this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-9272) Tests for utf-8 support

2015-01-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reopened HIVE-9272:
--

dev@hive.apache.org has a thread titled "Precommit test error with utf8"

 Tests for utf-8 support
 ---

 Key: HIVE-9272
 URL: https://issues.apache.org/jira/browse/HIVE-9272
 Project: Hive
  Issue Type: Test
  Components: Tests, WebHCat
Affects Versions: 0.14.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Aswathy Chellammal Sreekumar
Priority: Minor
 Fix For: 0.15.0

 Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, 
 HIVE-9272.4.patch, HIVE-9272.patch


 This includes some test cases for UTF-8 support in WebHCat. The first four tests 
 invoke the hive, pig, mapred and streaming APIs to test UTF-8 support for the 
 data processed, file names and job names. The last test case tests the 
 filtering of job names with a UTF-8 character.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9435) Fix auto_join21.q for Tez

2015-01-21 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HIVE-9435:
-

 Summary: Fix auto_join21.q for Tez
 Key: HIVE-9435
 URL: https://issues.apache.org/jira/browse/HIVE-9435
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor


Somehow, the golden file is updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9409) Avoid ser/de loggers as logging framework can be incompatible on driver and workers

2015-01-21 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9409:
--
Attachment: HIVE-9409.1.patch

Reattached the same patch to trigger another test run.

 Avoid ser/de loggers as logging framework can be incompatible on driver and 
 workers
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9409.1.patch, HIVE-9409.1.patch


 When we use the current [Spark Branch] to build the Hive package, deploy it on our 
 cluster and execute Hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27) in the 
 default mode (i.e. plain Hive on MR, not Hive on Spark), the error 
 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 For other released Apache or CDH Hive versions (e.g. Apache Hive 0.14), this 
 issue does not occur.
 By the way, if we run 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 executing the Hive query, the issue can be worked around. 
 The detailed diagnostic messages are below:
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find cl
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
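
For illustration only (this is not the actual HIVE-9409 patch), one way to keep loggers out of the Kryo-serialized plan is to make the logger field transient and re-create it after deserialization, so the concrete commons-logging implementation class (such as SLF4JLocationAwareLog) is never written into map.xml. A minimal Java sketch, with the class and field names invented for this example:

{noformat}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical operator-like class; the name and structure are illustrative only.
public class LoggerFriendlyOperator implements java.io.Serializable {

  // transient: Kryo's FieldSerializer skips transient fields by default, so the
  // concrete Log implementation class is never written into the serialized plan.
  private transient Log log;

  // Re-create the logger lazily after deserialization, using whatever
  // commons-logging binding is present on the worker's classpath.
  private Log getLog() {
    if (log == null) {
      log = LogFactory.getLog(LoggerFriendlyOperator.class);
    }
    return log;
  }

  public void process(Object row) {
    if (getLog().isDebugEnabled()) {
      getLog().debug("processing row " + row);
    }
  }
}
{noformat}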
 

[jira] [Updated] (HIVE-9409) Avoid ser/de loggers as logging framework can be incompatible on driver and workers

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9409:
---
Attachment: HIVE-9409.1.patch

 Avoid ser/de loggers as logging framework can be incompatible on driver and 
 workers
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9409.1.patch, HIVE-9409.1.patch, HIVE-9409.1.patch


 When we use the current [Spark Branch] to build the Hive package, deploy it on our 
 cluster and execute Hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27) in the 
 default mode (i.e. plain Hive on MR, not Hive on Spark), the error 
 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 For other released Apache or CDH Hive versions (e.g. Apache Hive 0.14), this 
 issue does not occur.
 By the way, if we run 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 executing the Hive query, the issue can be worked around. 
 The detailed diagnostic messages are below:
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find cl
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 

[jira] [Updated] (HIVE-9139) Clean up GenSparkProcContext.clonedReduceSinks and related code [Spark Branch]

2015-01-21 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao updated HIVE-9139:
---
Attachment: HIVE-9139.1-spark.patch

 Clean up GenSparkProcContext.clonedReduceSinks and related code [Spark Branch]
 --

 Key: HIVE-9139
 URL: https://issues.apache.org/jira/browse/HIVE-9139
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Xuefu Zhang
Assignee: Chao
 Fix For: spark-branch

 Attachments: HIVE-9139.1-spark.patch


 While reviewing HIVE-9041, I noticed this field does not seem applicable to 
 Spark; it was inherited from Tez. We should remove it and the related code to 
 reduce noise in the code if it's not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3798) Can't escape reserved keywords used as table names

2015-01-21 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286041#comment-14286041
 ] 

Pengcheng Xiong commented on HIVE-3798:
---

[~jghoman], could you please try again against HIVE 0.14? It seems that it 
works now.

hive> show tables;
OK
comment
lineitem
src
src1
srcpart
Time taken: 0.023 seconds, Fetched: 5 row(s)
hive> describe `comment`;
OK
key string
value   string
Time taken: 0.051 seconds, Fetched: 2 row(s)

 Can't escape reserved keywords used as table names
 --

 Key: HIVE-3798
 URL: https://issues.apache.org/jira/browse/HIVE-3798
 Project: Hive
  Issue Type: Bug
Reporter: Jakob Homan
Assignee: Jakob Homan

 {noformat}hive (some_table)> show tables;
 OK
 ...
 comment
 ...
 Time taken: 0.076 seconds
 hive (some_table)> describe comment;
 FAILED: Parse Error: line 1:0 cannot recognize input near 'describe' 
 'comment' 'EOF' in describe statement
 hive (some_table)> describe `comment`; 
 OK
 Table `comment` does not exist 
 Time taken: 0.042 seconds
 {noformat}
 Describe should honor character escaping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HIVE-9409) Avoid ser/de loggers as logging framework can be incompatible on driver and workers

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9409:
---
Comment: was deleted

(was: 

{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693536/HIVE-9409.1.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2458/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2458/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2458/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2458/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ rm -rf
+ svn update
svn: Error converting entry in directory 
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: 
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693536 - PreCommit-HIVE-TRUNK-Build)

 Avoid ser/de loggers as logging framework can be incompatible on driver and 
 workers
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9409.1.patch, HIVE-9409.1.patch, HIVE-9409.1.patch


 When we use the current [Spark Branch] to build the Hive package, deploy it on our 
 cluster and execute Hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27) in the 
 default mode (i.e. plain Hive on MR, not Hive on Spark), the error 
 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 For other released Apache or CDH Hive versions (e.g. Apache Hive 0.14), this 
 issue does not occur.
 By the way, if we run 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 executing the Hive query, the issue can be worked around. 
 The detailed diagnostic messages are below:
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)

[jira] [Commented] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-21 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285997#comment-14285997
 ] 

Xuefu Zhang commented on HIVE-9410:
---

We have an auto test, udf_custom_add.q, which passes all the time. I'm wondering 
why it doesn't catch the problem. If the test is deficient, we should enhance 
it.

If we are trying to add the jar to the classpath of the remote driver, I don't see 
how the patch is doing that. The patch seems to add the jar to the classpath of 
HiveServer2 instead. I could be misunderstanding, though.
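
For context, here is a generic sketch (not the HIVE-9410 patch itself) of what making an added jar visible to the remote driver boils down to: the jar URL has to reach a class loader that the deserializing code actually consults, for example the thread context class loader. The path and class name below are placeholders taken from the report:

{noformat}
import java.net.URL;
import java.net.URLClassLoader;

// Illustrative only: make a dynamically added jar visible to code that resolves
// classes through the thread context class loader (as many deserializers do).
public class AddedJarLoaderExample {
  public static void main(String[] args) throws Exception {
    // Placeholders for whatever 'add jar' registered.
    URL jarUrl = new URL("file:///tmp/bigbenchqueriesmr.jar");
    String udfClass = "de.bankmark.bigbench.queries.q10.SentimentUDF";

    URLClassLoader loader = new URLClassLoader(
        new URL[] { jarUrl }, Thread.currentThread().getContextClassLoader());
    Thread.currentThread().setContextClassLoader(loader);

    Class<?> udf = Class.forName(udfClass, true, loader);
    System.out.println("Loaded " + udf.getName());
  }
}
{noformat}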

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch


 We have a Hive query case with a UDF defined (i.e. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode, but fails in Hive on Spark 
 mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' to the .sql file explicitly):
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 

[jira] [Created] (HIVE-9432) CBO (Calcite Return Path): Removing QB from ParseContext

2015-01-21 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-9432:
-

 Summary: CBO (Calcite Return Path): Removing QB from ParseContext
 Key: HIVE-9432
 URL: https://issues.apache.org/jira/browse/HIVE-9432
 Project: Hive
  Issue Type: Sub-task
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9208) MetaStore DB schema inconsistent for MS SQL Server in use of varchar/nvarchar

2015-01-21 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286066#comment-14286066
 ] 

Eugene Koifman commented on HIVE-9208:
--

Most of the diffs change varchar(767) to nvarchar(767), but in some cases the 
starting types are varchar(128) or varchar(4000).  Are you sure these are 
partition keys?

 MetaStore DB schema inconsistent for MS SQL Server in use of varchar/nvarchar
 -

 Key: HIVE-9208
 URL: https://issues.apache.org/jira/browse/HIVE-9208
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.14.0
Reporter: Eugene Koifman
Assignee: Xiaobing Zhou
 Attachments: HIVE-9208.1.patch, HIVE-9208.2.patch


 hive-schema-0.15.0.mssql.sql has PARTITIONS.PART_NAME as NVARCHAR but 
 COMPLETED_TXN_COMPONENTS.CTC_PARTITON, COMPACTION_QUEUE.CQ_PARTITION, 
 HIVE_LOCKS.HL_PARTITION, TXN_COMPONENTS.TC_PARTITION all use VARCHAR.  This 
 cannot be right since they all store the same value.
 The same is true of hive-schema-0.14.0.mssql.sql and the two corresponding 
 hive-txn-schema-... files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-9259) Fix ClassCastException when CBO is enabled for HOS [Spark Branch]

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-9259.

   Resolution: Fixed
Fix Version/s: 0.15.0

Fixed without patch. Thank you Chao!

 Fix ClassCastException when CBO is enabled for HOS [Spark Branch]
 -

 Key: HIVE-9259
 URL: https://issues.apache.org/jira/browse/HIVE-9259
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Brock Noland
Assignee: Chao
 Fix For: 0.15.0


 {noformat}
 2015-01-05 22:10:19,414 ERROR [HiveServer2-Handler-Pool: Thread-33]: 
 parse.SemanticAnalyzer (SemanticAnalyzer.java:analyzeInternal(10109)) - CBO 
 failed, skipping CBO.
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.ql.optimizer.calcite.HiveTypeSystemImpl cannot be cast 
 to org.eigenbase.reltype.RelDataTypeSystem
 at 
 net.hydromatic.optiq.jdbc.OptiqConnectionImpl.init(OptiqConnectionImpl.java:92)
 at 
 net.hydromatic.optiq.jdbc.OptiqJdbc41Factory$OptiqJdbc41Connection.init(OptiqJdbc41Factory.java:103)
 at 
 net.hydromatic.optiq.jdbc.OptiqJdbc41Factory.newConnection(OptiqJdbc41Factory.java:49)
 at 
 net.hydromatic.optiq.jdbc.OptiqJdbc41Factory.newConnection(OptiqJdbc41Factory.java:34)
 at 
 net.hydromatic.optiq.jdbc.OptiqFactory.newConnection(OptiqFactory.java:52)
 at 
 net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:135)
 at java.sql.DriverManager.getConnection(DriverManager.java:571)
 at java.sql.DriverManager.getConnection(DriverManager.java:187)
 at 
 org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:140)
 at 
 org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:105)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer$CalciteBasedPlanner.getOptimizedAST(SemanticAnalyzer.java:12560)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer$CalciteBasedPlanner.access$400(SemanticAnalyzer.java:12540)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10070)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:420)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:306)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1108)
 at 
 org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1102)
 at 
 org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:101)
 at 
 org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:172)
 at 
 org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
 at 
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:388)
 at 
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:375)
 at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
 at 
 org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
 at 
 org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at 
 org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
 at com.sun.proxy.$Proxy25.executeStatementAsync(Unknown Source)
 at 
 org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:259)
 at 
 org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:415)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at 

[jira] [Updated] (HIVE-9433) select DB.TABLE.* will fail

2015-01-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-9433:
--
Description: 
To reproduce, in the q-test environment, simply typing

select default.src.* from src; 

will give you

NoViableAltException(300@[])
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11322)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceFieldExpression(HiveParser_IdentifiersParser.java:6696)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnaryPrefixExpression(HiveParser_IdentifiersParser.java:7026)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnarySuffixExpression(HiveParser_IdentifiersParser.java:7086)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceBitwiseXorExpression(HiveParser_IdentifiersParser.java:7270)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceStarExpression(HiveParser_IdentifiersParser.java:7430)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedencePlusExpression(HiveParser_IdentifiersParser.java:7590)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAmpersandExpression(HiveParser_IdentifiersParser.java:7750)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceBitwiseOrExpression(HiveParser_IdentifiersParser.java:7909)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceEqualExpression(HiveParser_IdentifiersParser.java:8439)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceNotExpression(HiveParser_IdentifiersParser.java:9452)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAndExpression(HiveParser_IdentifiersParser.java:9571)

This is a Parser bug...
   Assignee: Pengcheng Xiong

 select DB.TABLE.* will fail
 ---

 Key: HIVE-9433
 URL: https://issues.apache.org/jira/browse/HIVE-9433
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong

 To reproduce, in the q-test environment, simply typing
 select default.src.* from src; 
 will give you
 NoViableAltException(300@[])
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11322)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceFieldExpression(HiveParser_IdentifiersParser.java:6696)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnaryPrefixExpression(HiveParser_IdentifiersParser.java:7026)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnarySuffixExpression(HiveParser_IdentifiersParser.java:7086)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceBitwiseXorExpression(HiveParser_IdentifiersParser.java:7270)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceStarExpression(HiveParser_IdentifiersParser.java:7430)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedencePlusExpression(HiveParser_IdentifiersParser.java:7590)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAmpersandExpression(HiveParser_IdentifiersParser.java:7750)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceBitwiseOrExpression(HiveParser_IdentifiersParser.java:7909)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceEqualExpression(HiveParser_IdentifiersParser.java:8439)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceNotExpression(HiveParser_IdentifiersParser.java:9452)
   at 
 org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAndExpression(HiveParser_IdentifiersParser.java:9571)
 This is a Parser bug...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9259) Fix ClassCastException when CBO is enabled for HOS [Spark Branch]

2015-01-21 Thread Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285920#comment-14285920
 ] 

Chao commented on HIVE-9259:


[~brocknoland] Can we close this one now?

 Fix ClassCastException when CBO is enabled for HOS [Spark Branch]
 -

 Key: HIVE-9259
 URL: https://issues.apache.org/jira/browse/HIVE-9259
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Brock Noland
Assignee: Chao

 {noformat}
 2015-01-05 22:10:19,414 ERROR [HiveServer2-Handler-Pool: Thread-33]: 
 parse.SemanticAnalyzer (SemanticAnalyzer.java:analyzeInternal(10109)) - CBO 
 failed, skipping CBO.
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.ql.optimizer.calcite.HiveTypeSystemImpl cannot be cast 
 to org.eigenbase.reltype.RelDataTypeSystem
 at 
 net.hydromatic.optiq.jdbc.OptiqConnectionImpl.init(OptiqConnectionImpl.java:92)
 at 
 net.hydromatic.optiq.jdbc.OptiqJdbc41Factory$OptiqJdbc41Connection.init(OptiqJdbc41Factory.java:103)
 at 
 net.hydromatic.optiq.jdbc.OptiqJdbc41Factory.newConnection(OptiqJdbc41Factory.java:49)
 at 
 net.hydromatic.optiq.jdbc.OptiqJdbc41Factory.newConnection(OptiqJdbc41Factory.java:34)
 at 
 net.hydromatic.optiq.jdbc.OptiqFactory.newConnection(OptiqFactory.java:52)
 at 
 net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:135)
 at java.sql.DriverManager.getConnection(DriverManager.java:571)
 at java.sql.DriverManager.getConnection(DriverManager.java:187)
 at 
 org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:140)
 at 
 org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:105)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer$CalciteBasedPlanner.getOptimizedAST(SemanticAnalyzer.java:12560)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer$CalciteBasedPlanner.access$400(SemanticAnalyzer.java:12540)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10070)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:420)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:306)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1108)
 at 
 org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1102)
 at 
 org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:101)
 at 
 org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:172)
 at 
 org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
 at 
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:388)
 at 
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:375)
 at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
 at 
 org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
 at 
 org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at 
 org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
 at com.sun.proxy.$Proxy25.executeStatementAsync(Unknown Source)
 at 
 org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:259)
 at 
 org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:415)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
  

[jira] [Issue Comment Deleted] (HIVE-9409) Avoid ser/de loggers as logging framework can be incompatible on driver and workers

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9409:
---
Comment: was deleted

(was: 

{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693649/HIVE-9409.1.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2463/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2463/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2463/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2463/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
svn: Can't open file 
'itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/api/.svn/lock': 
Permission denied
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693649 - PreCommit-HIVE-TRUNK-Build)

 Avoid ser/de loggers as logging framework can be incompatible on driver and 
 workers
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9409.1.patch, HIVE-9409.1.patch, HIVE-9409.1.patch


 When we use the current [Spark Branch] to build the Hive package, deploy it on our 
 cluster and execute Hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27) in the 
 default mode (i.e. plain Hive on MR, not Hive on Spark), the error 
 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 For other released Apache or CDH Hive versions (e.g. Apache Hive 0.14), this 
 issue does not occur.
 By the way, if we run 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 executing the Hive query, the issue can be worked around. 
 The detailed diagnostic messages are below:
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
 

[jira] [Work started] (HIVE-9432) CBO (Calcite Return Path): Removing QB from ParseContext

2015-01-21 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-9432 started by Jesus Camacho Rodriguez.
-
 CBO (Calcite Return Path): Removing QB from ParseContext
 

 Key: HIVE-9432
 URL: https://issues.apache.org/jira/browse/HIVE-9432
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9424) SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark Branch]

2015-01-21 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285956#comment-14285956
 ] 

Xuefu Zhang commented on HIVE-9424:
---

[~csun], to clarify, w/o this patch, there is no problem. If this is the case, I 
don't think this is a blocker (for 0.15). On the contrary, I'm a little 
concerned about including this in 0.15. Please share your thoughts.

 SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark 
 Branch]
 ---

 Key: HIVE-9424
 URL: https://issues.apache.org/jira/browse/HIVE-9424
 Project: Hive
  Issue Type: Bug
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
Priority: Blocker
 Attachments: HIVE-9424.1-spark.patch


 Sometimes, in SparkMapJoinResolver, after a new SparkTask is generated and 
 added to the context, it may be processed by the {{dispatch}} method in this 
 class. This could produce a wrong plan, since {{generateLocalWork}} will 
 overwrite the existing local work in the SparkTask.
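
Not the actual patch, just a sketch of the kind of guard the description suggests: remember the SparkTasks that the resolver itself generated and have dispatch skip them, so generateLocalWork is not run a second time on them. All names below are made up for illustration:

{noformat}
import java.util.HashSet;
import java.util.Set;

// Illustrative skeleton; SparkTask here is a stand-in type, not the Hive class.
public class MapJoinResolverSketch {

  static class SparkTask { }

  // Tasks created by this resolver itself; dispatch must not touch them again.
  private final Set<SparkTask> generatedTasks = new HashSet<SparkTask>();

  SparkTask generateTask() {
    SparkTask newTask = new SparkTask();
    generatedTasks.add(newTask);
    return newTask;
  }

  void dispatch(SparkTask task) {
    if (generatedTasks.contains(task)) {
      // A newly generated task already carries the right local work;
      // skip it so generateLocalWork does not overwrite it.
      return;
    }
    generateLocalWork(task);
  }

  private void generateLocalWork(SparkTask task) {
    // The real logic lives in SparkMapJoinResolver; omitted here.
  }
}
{noformat}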



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9433) select DB.TABLE.* will fail

2015-01-21 Thread Pengcheng Xiong (JIRA)
Pengcheng Xiong created HIVE-9433:
-

 Summary: select DB.TABLE.* will fail
 Key: HIVE-9433
 URL: https://issues.apache.org/jira/browse/HIVE-9433
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9253) MetaStore server should support timeout for long running requests

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9253:
---
Attachment: HIVE-9253.2.patch

 MetaStore server should support timeout for long running requests
 -

 Key: HIVE-9253
 URL: https://issues.apache.org/jira/browse/HIVE-9253
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Dong Chen
Assignee: Dong Chen
 Attachments: HIVE-9253.1.patch, HIVE-9253.2.patch, HIVE-9253.2.patch, 
 HIVE-9253.patch


 In the description of HIVE-7195, one issue is that the MetaStore client timeout 
 is quite dumb. The client will time out and the server has no idea the client 
 is gone.
 The server should support a timeout for requests from the client that run for a 
 long time.
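
A generic sketch (assumptions only, not the HIVE-9253 patch) of a server-side timeout for long running requests: run the request body in an executor and cancel it once it exceeds a deadline. All names are illustrative:

{noformat}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative only: bound the time a single request may run on the server.
public class TimedRequestRunner {
  private final ExecutorService pool = Executors.newCachedThreadPool();

  public <T> T runWithTimeout(Callable<T> request, long timeoutMs) throws Exception {
    Future<T> future = pool.submit(request);
    try {
      return future.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      // Give up on the request; the client has likely timed out already.
      future.cancel(true);
      throw e;
    }
  }
}
{noformat}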



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-01-21 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-9277:

Attachment: High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf

Uploaded design doc version 1.

 Hybrid Hybrid Grace Hash Join
 -

 Key: HIVE-9277
 URL: https://issues.apache.org/jira/browse/HIVE-9277
 Project: Hive
  Issue Type: New Feature
  Components: Physical Optimizer
Reporter: Wei Zheng
Assignee: Wei Zheng
  Labels: join
 Attachments: High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf


 We are proposing an enhanced hash join algorithm called “hybrid hybrid grace 
 hash join”. We can benefit from this feature as illustrated below:
 o The query will not fail even if the estimated memory requirement is 
 slightly wrong.
 o Expensive garbage collection overhead can be avoided when the hash table grows.
 o Join execution can use a map join operator even though the small table 
 doesn't fit in memory, as spilling some data from the build and probe sides 
 will still be cheaper than having to shuffle the large fact table.
 The design is based on Hadoop’s parallel processing capability and the 
 significant amount of memory available.
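
To make the idea concrete, here is a toy sketch (not Hive code, and not taken from the attached design doc) of the partition-and-spill step behind a grace-style hash join: build rows are hashed into partitions, and when the memory budget is exceeded whole partitions are spilled and joined later, so the query does not fail just because the memory estimate was slightly off. Everything below is illustrative:

{noformat}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of partitioning the build side and spilling partitions.
public class GraceHashPartitioner {
  private final int numPartitions;
  private final long memoryBudgetRows;          // crude stand-in for a byte budget
  private final Map<Integer, List<String>> inMemory = new HashMap<>();
  private final Map<Integer, List<String>> spilled = new HashMap<>();
  private long inMemoryRows = 0;

  GraceHashPartitioner(int numPartitions, long memoryBudgetRows) {
    this.numPartitions = numPartitions;
    this.memoryBudgetRows = memoryBudgetRows;
  }

  void addBuildRow(String key) {
    int p = Math.floorMod(key.hashCode(), numPartitions);
    if (spilled.containsKey(p)) {                // partition already spilled
      spilled.get(p).add(key);
      return;
    }
    inMemory.computeIfAbsent(p, k -> new ArrayList<>()).add(key);
    if (++inMemoryRows > memoryBudgetRows) {
      spillLargestPartition();
    }
  }

  private void spillLargestPartition() {
    int victim = -1;
    for (Map.Entry<Integer, List<String>> e : inMemory.entrySet()) {
      if (victim < 0 || e.getValue().size() > inMemory.get(victim).size()) {
        victim = e.getKey();
      }
    }
    if (victim >= 0) {
      // In a real implementation this would go to disk; here we just move it.
      List<String> rows = inMemory.remove(victim);
      inMemoryRows -= rows.size();
      spilled.put(victim, rows);
    }
  }
}
{noformat}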



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9431) CBO (Calcite Return Path): Removing AST from ParseContext

2015-01-21 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-9431:
-

 Summary: CBO (Calcite Return Path): Removing AST from ParseContext
 Key: HIVE-9431
 URL: https://issues.apache.org/jira/browse/HIVE-9431
 Project: Hive
  Issue Type: Sub-task
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-9431) CBO (Calcite Return Path): Removing AST from ParseContext

2015-01-21 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-9431 started by Jesus Camacho Rodriguez.
-
 CBO (Calcite Return Path): Removing AST from ParseContext
 -

 Key: HIVE-9431
 URL: https://issues.apache.org/jira/browse/HIVE-9431
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9410) ClassNotFoundException occurs during hive query case execution with UDF defined [Spark Branch]

2015-01-21 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9410:
--
Summary: ClassNotFoundException occurs during hive query case execution 
with UDF defined [Spark Branch]  (was: Spark branch, ClassNotFoundException 
occurs during hive query case execution with UDF defined [Spark Branch])

 ClassNotFoundException occurs during hive query case execution with UDF 
 defined [Spark Branch]
 --

 Key: HIVE-9410
 URL: https://issues.apache.org/jira/browse/HIVE-9410
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS 6.5
 JDK1.7
Reporter: Xin Hao
Assignee: Chengxiang Li
 Attachments: HIVE-9410.1-spark.patch


 We have a Hive query case with a UDF defined (i.e. BigBench cases Q10, Q18, 
 etc.). It passes in default Hive (on MR) mode, but fails in Hive on Spark 
 mode (both Standalone and Yarn-Client). 
 Although we use 'add jar .jar;' to add the UDF jar explicitly, the issue 
 still exists. 
 BTW, if we put the UDF jar into the $HIVE_HOME/lib dir, the case passes.
 The detailed error message is below (NOTE: 
 de.bankmark.bigbench.queries.q10.SentimentUDF is the UDF contained in the 
 jar bigbenchqueriesmr.jar, and we have added a command like 'add jar 
 /location/to/bigbenchqueriesmr.jar;' to the .sql file explicitly):
 INFO  [pool-1-thread-1]: client.RemoteDriver (RemoteDriver.java:call(316)) - 
 Failed to run job 8dd120cb-1a4d-4d1c-ba31-61eac648c27d
 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: de.bankmark.bigbench.queries.q10.SentimentUDF
 Serialization trace:
 genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc)
 conf (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.FilterOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 right (org.apache.commons.lang3.tuple.ImmutablePair)
 edgeProperties (org.apache.hadoop.hive.ql.plan.SparkWork)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:99)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
 at 
 org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
 at 
 org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:112)
 ...
 Caused by: java.lang.ClassNotFoundException: 
 de.bankmark.bigbench.queries.q10.SentimentUDF
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at 
 org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
 ... 55 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30125: HIVE-9431

2015-01-21 Thread Jesús Camacho Rodríguez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30125/
---

Review request for hive.


Bugs: HIVE-9431
https://issues.apache.org/jira/browse/HIVE-9431


Repository: hive-git


Description
---

CBO (Calcite Return Path): Removing AST from ParseContext


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java 
b838bff598bdc6c8d4c2728967d2c2bf0ee63e9e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
4364f2830d35b89266bf79948263dd64998fe5cc 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java 
f2eb4d27fca6472a9d4b777a54fcce8a729b3cd6 

Diff: https://reviews.apache.org/r/30125/diff/


Testing
---

Existing tests.


Thanks,

Jesús Camacho Rodríguez



Re: Precommit test error with utf8

2015-01-21 Thread Szehon Ho
Not really sure.  I checked the build box's client-side locales and they are
pretty standard (en_US.UTF8).  I think if it were a client-side issue, it would
be saying "Error converting UTF-8 to native", but it's the other way around,
saying "Error converting native to UTF-8".  But I'm not an expert on this.

The build machine was able to apply the patch fine, but it can't check out
those files from svn, which explains why it passed pre-commit but not
afterwards.

On Wed, Jan 21, 2015 at 11:10 AM, Eugene Koifman ekoif...@hortonworks.com
wrote:

 this was added in https://issues.apache.org/jira/browse/HIVE-9272.  The
 build bot ran fine with the patch.  I also ran mvn clean package install
 -Phadoop-2,dist -DskipTests.

 Does anyone have ideas on why this is an issue?

 On Wed, Jan 21, 2015 at 10:52 AM, Brock Noland br...@cloudera.com wrote:

  Thank you Szehon! I also could not reproduce manually. I went and did
 
 rm -rf $(svn status --no-ignore)
 svn status --no-ignore
 
  manually. I hope that fixes it.
 
  On Tue, Jan 20, 2015 at 11:23 PM, Szehon Ho sze...@cloudera.com wrote:
   The builds have been hitting some strange error in trying to update the
   source in svn due to the new test files added in hcatalog in different
   locales.
  
   I wasn't able to reproduce the issue manually and don't know the exact
   cause, but just FYI that the builds are broken now.
  
   Thanks
   Szehon
  
   ++ svn status --no-ignore
   svn: Error converting entry in directory
   'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
   svn: Can't convert string from native encoding to 'UTF-8':
   svn:
 
 artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
   + rm -rf
   + svn update
   svn: Error converting entry in directory
   'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
   svn: Can't convert string from native encoding to 'UTF-8':
   svn:
 
 artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
 



 --

 Thanks,
 Eugene




[jira] [Updated] (HIVE-9289) TODO : Store user name in session [Spark Branch]

2015-01-21 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-9289:
---
Attachment: HIVE-9289.2-spark.patch

 TODO : Store user name in session [Spark Branch]
 

 Key: HIVE-9289
 URL: https://issues.apache.org/jira/browse/HIVE-9289
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-9289.1-spark.patch, HIVE-9289.2-spark.patch


 TODO  : this we need to store the session username somewhere else as 
 getUGIForConf never used the conf SparkSessionManagerImpl.java 
 /hive-exec/src/java/org/apache/hadoop/hive/ql/exec/spark/session line 145 
 Java Task



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9435) Fix auto_join21.q for Tez

2015-01-21 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9435:
--
Status: Patch Available  (was: Open)

 Fix auto_join21.q for Tez
 -

 Key: HIVE-9435
 URL: https://issues.apache.org/jira/browse/HIVE-9435
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: HIVE-9435.1.patch


 Somehow, the golden file is updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9435) Fix auto_join21.q for Tez

2015-01-21 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9435:
--
Attachment: HIVE-9435.1.patch

 Fix auto_join21.q for Tez
 -

 Key: HIVE-9435
 URL: https://issues.apache.org/jira/browse/HIVE-9435
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: HIVE-9435.1.patch


 Somehow, the golden file is updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


ptest was down (was: Re: svn: Error converting entry in directory 'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8)

2015-01-21 Thread Brock Noland
Hi,

Quite strange. I guess svn could not update them in the
background. I have now manually run those same commands, without
issue, and I think ptest is ready to run again.

If you have issues, let me know.

Brock

On Tue, Jan 20, 2015 at 10:01 PM, Alexander Pivovarov
apivova...@gmail.com wrote:
 last recent build failed with error

 svn: Error converting entry in directory
 'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8

 http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2444/console


 svn: Error converting entry in directory
 'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
 svn: Can't convert string from native encoding to 'UTF-8':
 svn: 
 artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
 + rm -rf target datanucleus.log ant/target shims/target
 shims/0.20S/target shims/0.23/target shims/aggregator/target
 shims/common/target shims/scheduler/target packaging/target
 hbase-handler/target testutils/target jdbc/target metastore/target
 itests/target itests/thirdparty itests/hcatalog-unit/target
 itests/test-serde/target itests/qtest/target
 itests/hive-unit-hadoop2/target itests/hive-minikdc/target
 itests/hive-unit/target itests/custom-serde/target itests/util/target
 itests/qtest-spark/target hcatalog/target
 hcatalog/src/test/e2e/templeton/tests/utf8.conf
 + svn update
 svn: Error converting entry in directory
 'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
 svn: Can't convert string from native encoding to 'UTF-8':
 svn: 
 artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
 + exit 1
 '
 at org.apache.hive.ptest.execution.Phase.execLocally(Phase.java:67)
 at 
 org.apache.hive.ptest.execution.PrepPhase.execute(PrepPhase.java:66)
 at org.apache.hive.ptest.execution.PTest.run(PTest.java:164)
 at 
 org.apache.hive.ptest.api.server.TestExecutor.run(TestExecutor.java:120)

 2015-01-20 17:08:59,861  INFO PTest.run:200 0 failed tests
 2015-01-20 17:08:59,861  INFO PTest.run:207 Executed 0 tests
 2015-01-20 17:08:59,861  INFO PTest.run:209 PERF: Phase PrepPhase took 0 
 minutes
 2015-01-20 17:08:59,862  INFO JIRAService.postComment:141 Comment:

 {color:red}Overall{color}: -1 no tests executed


[jira] [Updated] (HIVE-9139) Clean up GenSparkProcContext.clonedReduceSinks and related code [Spark Branch]

2015-01-21 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao updated HIVE-9139:
---
Fix Version/s: spark-branch
Affects Version/s: spark-branch
   Status: Patch Available  (was: Open)

 Clean up GenSparkProcContext.clonedReduceSinks and related code [Spark Branch]
 --

 Key: HIVE-9139
 URL: https://issues.apache.org/jira/browse/HIVE-9139
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Xuefu Zhang
Assignee: Chao
 Fix For: spark-branch

 Attachments: HIVE-9139.1-spark.patch


 While reviewing HIVE-9041, I noticed this field does not seem applicable to 
 Spark. It was inherited from Tez. We should remove it and the related code to 
 reduce noise in the code if it's not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-21 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9327:
--
Status: Open  (was: Patch Available)

 CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
 ---

 Key: HIVE-9327
 URL: https://issues.apache.org/jira/browse/HIVE-9327
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0

 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, 
 HIVE-9327.03.patch, HIVE-9327.patch


 ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
 would be ideal to remove this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Precommit test error with utf8

2015-01-21 Thread Eugene Koifman
this was added in https://issues.apache.org/jira/browse/HIVE-9272.  The
build bot ran fine with the patch.  I also ran mvn clean package install
-Phadoop-2,dist -DskipTests.

Does anyone have ideas on why this is an issue?

On Wed, Jan 21, 2015 at 10:52 AM, Brock Noland br...@cloudera.com wrote:

 Thank you Szehon! I also could not reproduce manually. I went and did

rm -rf $(svn status --no-ignore)
svn status --no-ignore

 manually. I hope that fixes it.

 On Tue, Jan 20, 2015 at 11:23 PM, Szehon Ho sze...@cloudera.com wrote:
  The builds have been hitting some strange error in trying to update the
  source in svn due to the new test files added in hcatalog in different
  locales.
 
  I wasn't able to reproduce the issue manually and don't know the exact
  cause, but just FYI that the builds are broken now.
 
  Thanks
  Szehon
 
  ++ svn status --no-ignore
  svn: Error converting entry in directory
  'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
  svn: Can't convert string from native encoding to 'UTF-8':
  svn:
 artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
  + rm -rf
  + svn update
  svn: Error converting entry in directory
  'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
  svn: Can't convert string from native encoding to 'UTF-8':
  svn:
 artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt




-- 

Thanks,
Eugene

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


[jira] [Updated] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-21 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9327:
--
Status: Patch Available  (was: Open)

 CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
 ---

 Key: HIVE-9327
 URL: https://issues.apache.org/jira/browse/HIVE-9327
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0

 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, 
 HIVE-9327.03.patch, HIVE-9327.patch


 ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
 would be ideal to remove this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-21 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9327:
--
Attachment: (was: HIVE-9327.04.patch)

 CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
 ---

 Key: HIVE-9327
 URL: https://issues.apache.org/jira/browse/HIVE-9327
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Fix For: 0.15.0

 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, 
 HIVE-9327.03.patch, HIVE-9327.patch


 ParseContext includes a map of Operator to RowResolver (OpParseContext). It 
 would be ideal to remove this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-21 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-9402:
-
Status: Patch Available  (was: Open)

 Create GREATEST and LEAST udf
 -

 Key: HIVE-9402
 URL: https://issues.apache.org/jira/browse/HIVE-9402
 Project: Hive
  Issue Type: Task
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9402.1.patch, HIVE-9402.2.patch, HIVE-9402.3.patch, 
 HIVE-9402.4.patch, HIVE-9402.4.patch, HIVE-9402.5.patch, HIVE-9402.5.patch, 
 HIVE-9402.6.patch


 GREATEST function returns the greatest value in a list of values
 Signature: T greatest(T v1, T v2, ...)
 all values should be the same type (like in COALESCE)
 LEAST returns the least value in a list of values
 Signature: T least(T v1, T v2, ...)
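 For reference, a simplified Java sketch of the intended semantics only (this is 
 not Hive's actual GenericUDF implementation, which compares values through 
 ObjectInspectors; class and method names here are illustrative):
 {code}
import java.util.Arrays;
import java.util.Collections;

public class GreatestLeastSketch {
  // greatest(): largest of the arguments; least(): smallest.
  // All arguments must share one comparable type, as in COALESCE.
  @SafeVarargs
  public static <T extends Comparable<T>> T greatest(T... values) {
    return Collections.max(Arrays.asList(values));
  }

  @SafeVarargs
  public static <T extends Comparable<T>> T least(T... values) {
    return Collections.min(Arrays.asList(values));
  }

  public static void main(String[] args) {
    System.out.println(greatest(3, 7, 5));    // 7
    System.out.println(least("b", "a", "c")); // a
  }
}
 {code}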



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-21 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-9402:
-
Status: Open  (was: Patch Available)

 Create GREATEST and LEAST udf
 -

 Key: HIVE-9402
 URL: https://issues.apache.org/jira/browse/HIVE-9402
 Project: Hive
  Issue Type: Task
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9402.1.patch, HIVE-9402.2.patch, HIVE-9402.3.patch, 
 HIVE-9402.4.patch, HIVE-9402.4.patch, HIVE-9402.5.patch, HIVE-9402.5.patch, 
 HIVE-9402.6.patch


 GREATEST function returns the greatest value in a list of values
 Signature: T greatest(T v1, T v2, ...)
 all values should be the same type (like in COALESCE)
 LEAST returns the least value in a list of values
 Signature: T least(T v1, T v2, ...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Precommit test error with utf8

2015-01-21 Thread Brock Noland
Thank you Szehon! I also could not reproduce manually. I went and did

   rm -rf $(svn status --no-ignore)
   svn status --no-ignore

manually. I hope that fixes it.

On Tue, Jan 20, 2015 at 11:23 PM, Szehon Ho sze...@cloudera.com wrote:
 The builds have been hitting some strange error in trying to update the
 source in svn due to the new test files added in hcatalog in different
 locales.

 I wasn't able to reproduce the issue manually and don't know the exact
 cause, but just FYI that the builds are broken now.

 Thanks
 Szehon

 ++ svn status --no-ignore
 svn: Error converting entry in directory
 'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
 svn: Can't convert string from native encoding to 'UTF-8':
 svn: 
 artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt
 + rm -rf
 + svn update
 svn: Error converting entry in directory
 'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
 svn: Can't convert string from native encoding to 'UTF-8':
 svn: 
 artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt


[jira] [Commented] (HIVE-9424) SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark Branch]

2015-01-21 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286137#comment-14286137
 ] 

Xuefu Zhang commented on HIVE-9424:
---

I see. Please feel free to include this in your prototyping. We will visit this 
when we review that work. Thanks.

 SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark 
 Branch]
 ---

 Key: HIVE-9424
 URL: https://issues.apache.org/jira/browse/HIVE-9424
 Project: Hive
  Issue Type: Bug
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9424.1-spark.patch


 Sometimes, in SparkMapJoinResolver, after a new SparkTask is generated and 
 added to context, it may be processed by the {{dispatch}} method in this 
 class. This could introduce a wrong plan since {{generateLocalWork}} will 
 overwrite existing local work in the SparkTask.
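 A hypothetical sketch of the guard being proposed, as a way to picture the fix 
 (class and method names are illustrative, not Hive's actual ones): remember 
 which SparkTasks the resolver itself generated and have dispatch() skip them, 
 so {{generateLocalWork}} never runs a second time over the same task.
 {code}
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

public class ResolverGuardSketch {
  // Identity set of tasks this resolver created while resolving others.
  private final Set<Object> generatedTasks =
      Collections.newSetFromMap(new IdentityHashMap<Object, Boolean>());

  public void registerGenerated(Object newSparkTask) {
    generatedTasks.add(newSparkTask);
  }

  public void dispatch(Object task) {
    if (generatedTasks.contains(task)) {
      return; // created by this resolver; its local work is already correct
    }
    // ... resolve map joins and generate local work for pre-existing tasks ...
  }
}
 {code}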



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9434) Shim the method Path.getPathWithoutSchemeAndAuthority

2015-01-21 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9434:
--

 Summary: Shim the method Path.getPathWithoutSchemeAndAuthority
 Key: HIVE-9434
 URL: https://issues.apache.org/jira/browse/HIVE-9434
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.15.0
Reporter: Brock Noland


Since Hadoop 1 does not have the method {{Path. 
getPathWithoutSchemeAndAuthority}} we need to shim it out.
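 A minimal sketch of such a shim, assuming it mirrors the Hadoop 2 behaviour by 
 rebuilding the Path from only its URI path component (the actual shim class name 
 and placement in Hive may differ):
 {code}
import org.apache.hadoop.fs.Path;

public final class PathShimSketch {
  private PathShimSketch() {}

  // Returns the path with any scheme (e.g. hdfs://) and authority stripped,
  // leaving relative paths untouched.
  public static Path getPathWithoutSchemeAndAuthority(Path path) {
    return path.isUriPathAbsolute()
        ? new Path(null, null, path.toUri().getPath())
        : path;
  }
}
 {code}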



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9409) Avoid ser/de loggers as logging framework can be incompatible on driver and workers

2015-01-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286002#comment-14286002
 ] 

Hive QA commented on HIVE-9409:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693649/HIVE-9409.1.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2463/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2463/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2463/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2463/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
svn: Can't open file 
'itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/api/.svn/lock': 
Permission denied
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693649 - PreCommit-HIVE-TRUNK-Build

 Avoid ser/de loggers as logging framework can be incompatible on driver and 
 workers
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9409.1.patch, HIVE-9409.1.patch


 When we use the current [Spark Branch] to build the hive package, deploy it on our 
 cluster, and execute hive queries (e.g. BigBench cases Q10, Q18, Q19, Q27) in the 
 default mode (i.e. just Hive on MR, not Hive on Spark), the error 
 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' occurs.
 Other released Apache or CDH Hive versions (e.g. Apache Hive 0.14) do not have 
 this issue.
 By the way, if we run 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before the 
 hive query execution, the issue is worked around. 
 The detailed diagnostic messages are below: 
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
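 For illustration only (the actual HIVE-9409 change may differ): one common way to 
 keep a logger out of Kryo-serialized plan objects is to make it a static field, since 
 Kryo's FieldSerializer copies only instance fields; the worker then creates its own 
 logger instead of deserializing the driver's logging implementation.
 {code}
import java.util.logging.Logger;

public class OperatorLikeSketch {
  // Static: never written into the serialized plan.
  private static final Logger LOG =
      Logger.getLogger(OperatorLikeSketch.class.getName());

  private String alias; // instance state that does get serialized

  public void process(Object row) {
    LOG.fine("processing row for alias " + alias);
  }
}
 {code}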
  

[jira] [Commented] (HIVE-9264) Merge encryption branch to trunk

2015-01-21 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286129#comment-14286129
 ] 

Prasanth Jayachandran commented on HIVE-9264:
-

[~brocknoland] This commit broke the hadoop-1 build. 
The Path.getPathWithoutSchemeAndAuthority() method does not seem to exist in hadoop 
1.x. 
https://github.com/apache/hive/blame/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2402

 Merge encryption branch to trunk
 

 Key: HIVE-9264
 URL: https://issues.apache.org/jira/browse/HIVE-9264
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-9264.1.patch, HIVE-9264.2.patch, HIVE-9264.2.patch, 
 HIVE-9264.2.patch, HIVE-9264.3.patch, HIVE-9264.3.patch, HIVE-9264.3.patch, 
 HIVE-9264.addendum.patch


 The team working on the encryption branch would like to merge their work to 
 trunk. This jira will track that effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9264) Merge encryption branch to trunk

2015-01-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286141#comment-14286141
 ] 

Brock Noland commented on HIVE-9264:


Thank you [~prasanth_j] for pointing that out. I created HIVE-9434 to fix that. 
[~Ferd] or [~dongc] - would you have a chance to pick up HIVE-9434?

 Merge encryption branch to trunk
 

 Key: HIVE-9264
 URL: https://issues.apache.org/jira/browse/HIVE-9264
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-9264.1.patch, HIVE-9264.2.patch, HIVE-9264.2.patch, 
 HIVE-9264.2.patch, HIVE-9264.3.patch, HIVE-9264.3.patch, HIVE-9264.3.patch, 
 HIVE-9264.addendum.patch


 The team working on the encryption branch would like to merge their work to 
 trunk. This jira will track that effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9434) Shim the method Path.getPathWithoutSchemeAndAuthority

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9434:
---
Fix Version/s: 0.15.0

 Shim the method Path.getPathWithoutSchemeAndAuthority
 -

 Key: HIVE-9434
 URL: https://issues.apache.org/jira/browse/HIVE-9434
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.15.0
Reporter: Brock Noland
 Fix For: 0.15.0


 Since Hadoop 1 does not have the method 
 {{Path.getPathWithoutSchemeAndAuthority}} we need to shim it out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9434) Shim the method Path.getPathWithoutSchemeAndAuthority

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9434:
---
Description: Since Hadoop 1 does not have the method 
{{Path.getPathWithoutSchemeAndAuthority}} we need to shim it out.  (was: Since 
Hadoop 1 does not have the method {{Path. getPathWithoutSchemeAndAuthority}} we 
need to shim it out.)

 Shim the method Path.getPathWithoutSchemeAndAuthority
 -

 Key: HIVE-9434
 URL: https://issues.apache.org/jira/browse/HIVE-9434
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.15.0
Reporter: Brock Noland

 Since Hadoop 1 does not have the method 
 {{Path.getPathWithoutSchemeAndAuthority}} we need to shim it out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9289) TODO : Store user name in session [Spark Branch]

2015-01-21 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-9289:
---
Status: Open  (was: Patch Available)

 TODO : Store user name in session [Spark Branch]
 

 Key: HIVE-9289
 URL: https://issues.apache.org/jira/browse/HIVE-9289
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-9289.1-spark.patch


 TODO  : this we need to store the session username somewhere else as 
 getUGIForConf never used the conf SparkSessionManagerImpl.java 
 /hive-exec/src/java/org/apache/hadoop/hive/ql/exec/spark/session line 145 
 Java Task



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9424) SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark Branch]

2015-01-21 Thread Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286124#comment-14286124
 ] 

Chao commented on HIVE-9424:


[~xuefuz] I think the issue will not happen with the current impl - it could 
only occur when the main SparkTask is not in the root task set, due to the 
post-order traversal of the TaskGraphWalker. In my dynamic partition 
implementation this case could happen, since the main task could be a child of 
the pruning task.

Yes, I think it's OK not to include it in 0.15. It will not affect anything 
that we have right now.



 SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark 
 Branch]
 ---

 Key: HIVE-9424
 URL: https://issues.apache.org/jira/browse/HIVE-9424
 Project: Hive
  Issue Type: Bug
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
Priority: Blocker
 Attachments: HIVE-9424.1-spark.patch


 Sometimes, in SparkMapJoinResolver, after a new SparkTask is generated and 
 added to context, it may be processed by the {{dispatch}} method in this 
 class. This could introduce a wrong plan since {{generateLocalWork}} will 
 overwrite existing local work in the SparkTask.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9424) SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark Branch]

2015-01-21 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9424:
--
Priority: Major  (was: Blocker)

 SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark 
 Branch]
 ---

 Key: HIVE-9424
 URL: https://issues.apache.org/jira/browse/HIVE-9424
 Project: Hive
  Issue Type: Bug
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
 Attachments: HIVE-9424.1-spark.patch


 Sometimes, in SparkMapJoinResolver, after a new SparkTask is generated and 
 added to context, it may be processed by the {{dispatch}} method in this 
 class. This could introduce a wrong plan since {{generateLocalWork}} will 
 overwrite existing local work in the SparkTask.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9289) TODO : Store user name in session [Spark Branch]

2015-01-21 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-9289:
---
Status: Patch Available  (was: Open)

 TODO : Store user name in session [Spark Branch]
 

 Key: HIVE-9289
 URL: https://issues.apache.org/jira/browse/HIVE-9289
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-9289.1-spark.patch, HIVE-9289.2-spark.patch


 TODO  : this we need to store the session username somewhere else as 
 getUGIForConf never used the conf SparkSessionManagerImpl.java 
 /hive-exec/src/java/org/apache/hadoop/hive/ql/exec/spark/session line 145 
 Java Task



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9289) TODO : Store user name in session [Spark Branch]

2015-01-21 Thread Chinna Rao Lalam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286161#comment-14286161
 ] 

Chinna Rao Lalam commented on HIVE-9289:


I have verified this code; reusing the session is not happening because, as 
[~chengxiang li] explained, the linear mapping 
Hive Client -> SessionHandler (session id inside) -> 
HiveSessionImpl -> SessionState -> SparkSession is maintained.
Updated the patch by removing that code.

 TODO : Store user name in session [Spark Branch]
 

 Key: HIVE-9289
 URL: https://issues.apache.org/jira/browse/HIVE-9289
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-9289.1-spark.patch, HIVE-9289.2-spark.patch


 TODO  : this we need to store the session username somewhere else as 
 getUGIForConf never used the conf SparkSessionManagerImpl.java 
 /hive-exec/src/java/org/apache/hadoop/hive/ql/exec/spark/session line 145 
 Java Task



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9402) Create GREATEST and LEAST udf

2015-01-21 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-9402:
-
Attachment: HIVE-9402.7.patch

uploading patch (same as v6) to trigger precommit tests.

 Create GREATEST and LEAST udf
 -

 Key: HIVE-9402
 URL: https://issues.apache.org/jira/browse/HIVE-9402
 Project: Hive
  Issue Type: Task
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-9402.1.patch, HIVE-9402.2.patch, HIVE-9402.3.patch, 
 HIVE-9402.4.patch, HIVE-9402.4.patch, HIVE-9402.5.patch, HIVE-9402.5.patch, 
 HIVE-9402.6.patch, HIVE-9402.7.patch


 GREATEST function returns the greatest value in a list of values
 Signature: T greatest(T v1, T v2, ...)
 all values should be the same type (like in COALESCE)
 LEAST returns the least value in a list of values
 Signature: T least(T v1, T v2, ...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9424) SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark Branch]

2015-01-21 Thread Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14285849#comment-14285849
 ] 

Chao commented on HIVE-9424:


[~brocknoland] I'm fine making it a blocker - at least it blocks dynamic 
partition pruning.
Also, I don't think the test failures are related.

 SparkMapJoinResolver shouldn't process newly generated SparkTask [Spark 
 Branch]
 ---

 Key: HIVE-9424
 URL: https://issues.apache.org/jira/browse/HIVE-9424
 Project: Hive
  Issue Type: Bug
Affects Versions: spark-branch
Reporter: Chao
Assignee: Chao
Priority: Blocker
 Attachments: HIVE-9424.1-spark.patch


 Sometimes, in SparkMapJoinResolver, after a new SparkTask is generated and 
 added to context, it may be processed by the {{dispatch}} method in this 
 class. This could introduce a wrong plan since {{generateLocalWork}} will 
 overwrite existing local work in the SparkTask.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8966) Delta files created by hive hcatalog streaming cannot be compacted

2015-01-21 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286267#comment-14286267
 ] 

Vikram Dixit K commented on HIVE-8966:
--

+1 for a branch 1.0.

 Delta files created by hive hcatalog streaming cannot be compacted
 --

 Key: HIVE-8966
 URL: https://issues.apache.org/jira/browse/HIVE-8966
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.14.0
 Environment: hive
Reporter: Jihong Liu
Assignee: Alan Gates
Priority: Critical
 Fix For: 0.14.1

 Attachments: HIVE-8966.2.patch, HIVE-8966.3.patch, HIVE-8966.4.patch, 
 HIVE-8966.5.patch, HIVE-8966.patch


 hive hcatalog streaming will also create a file like bucket_n_flush_length in 
 each delta directory, where n is the bucket number. But compactor.CompactorMR 
 thinks this file also needs to be compacted. However, this file of course cannot 
 be compacted, so compactor.CompactorMR will not continue with the compaction. 
 Did a test: after removing the bucket_n_flush_length file, the alter 
 table partition compact finished successfully. If that file is not deleted, 
 nothing will be compacted. 
 This is probably a very severe bug. Both 0.13 and 0.14 have this issue
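 A hypothetical illustration of the kind of filtering involved (not the actual 
 HIVE-8966 patch): skip the bucket_n_flush_length side files when listing a delta 
 directory so that only real bucket files reach the compactor.
 {code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class DeltaFileListerSketch {
  // Hide the streaming side files from the directory listing.
  private static final PathFilter SKIP_FLUSH_LENGTH = new PathFilter() {
    @Override
    public boolean accept(Path p) {
      return !p.getName().endsWith("_flush_length");
    }
  };

  public static FileStatus[] listBucketFiles(FileSystem fs, Path deltaDir)
      throws IOException {
    return fs.listStatus(deltaDir, SKIP_FLUSH_LENGTH);
  }
}
 {code}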



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9438) The standalone-jdbc jar does missing some jars

2015-01-21 Thread Ashish Kumar Singh (JIRA)
Ashish Kumar Singh created HIVE-9438:


 Summary: The standalone-jdbc jar does missing some jars
 Key: HIVE-9438
 URL: https://issues.apache.org/jira/browse/HIVE-9438
 Project: Hive
  Issue Type: Bug
Reporter: Ashish Kumar Singh
Priority: Blocker
 Fix For: 0.15.0


The standalone-jdbc jar does not contain all the jars required for secure 
connections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9438) The standalone-jdbc jar missing some jars

2015-01-21 Thread Ashish Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Kumar Singh updated HIVE-9438:
-
Summary: The standalone-jdbc jar missing some jars  (was: The 
standalone-jdbc jar does missing some jars)

 The standalone-jdbc jar missing some jars
 -

 Key: HIVE-9438
 URL: https://issues.apache.org/jira/browse/HIVE-9438
 Project: Hive
  Issue Type: Bug
Reporter: Ashish Kumar Singh
Priority: Blocker
 Fix For: 0.15.0


 The standalone-jdbc jar does not contain all the jars required for secure 
 connections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9053) select constant in union all followed by group by gives wrong result

2015-01-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-9053:
--
Attachment: HIVE-9053.patch-branch-1.0

 select constant in union all followed by group by gives wrong result
 

 Key: HIVE-9053
 URL: https://issues.apache.org/jira/browse/HIVE-9053
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0, 0.14.0
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Fix For: 0.15.0

 Attachments: HIVE-9053.01.patch, HIVE-9053.02.patch, 
 HIVE-9053.03.patch, HIVE-9053.04.patch, HIVE-9053.patch-branch-1.0


 Here is the the way to reproduce with q test:
 select key from (select '1' as key from src union all select key from src)tab 
 group by key;
 will give
 OK
 NULL
 1
 This is not correct as src contains many other keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-9103) Support backup task for join related optimization [Spark Branch]

2015-01-21 Thread Chao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao reassigned HIVE-9103:
--

Assignee: Chao  (was: Szehon Ho)

 Support backup task for join related optimization [Spark Branch]
 

 Key: HIVE-9103
 URL: https://issues.apache.org/jira/browse/HIVE-9103
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chao

 In MR, backup task can be executed if the original task, which probably 
 contains certain (join) optimization fails. This JIRA is to track this topic 
 for Spark. We need to determine if we need this and implement if necessary.
 This is a followup of HIVE-9099.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8485) HMS on Oracle incompatibility

2015-01-21 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286251#comment-14286251
 ] 

Vikram Dixit K commented on HIVE-8485:
--

[~sushanth] can this be committed to branch 1.0 when ready instead of branch 
0.14?

 HMS on Oracle incompatibility
 -

 Key: HIVE-8485
 URL: https://issues.apache.org/jira/browse/HIVE-8485
 Project: Hive
  Issue Type: Bug
  Components: Metastore
 Environment: Oracle as metastore DB
Reporter: Ryan Pridgeon
Assignee: Chaoyu Tang
 Attachments: HIVE-8485.2.patch, HIVE-8485.patch


 Oracle does not distinguish between empty strings and NULL, which proves 
 problematic for DataNucleus.
 In the event a user creates a table with some property stored as an empty 
 string, the table will no longer be accessible.
 i.e. TBLPROPERTIES ('serialization.null.format'='')
 If they try to select, describe, drop, etc. the table, the client prints the 
 following exception:
 ERROR ql.Driver: FAILED: SemanticException [Error 10001]: Table not found 
 table name
 The workaround for this was to go into the hive metastore on the Oracle 
 database and replace NULL with some other string. Users could then drop the 
 tables or alter their data to use the new null format they just set.
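 A small JDBC illustration of the Oracle behaviour described above (the connection 
 details and the props table are placeholders; this assumes an Oracle instance and 
 its JDBC driver are available):
 {code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OracleEmptyStringDemo {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@//host:1521/service", "user", "password")) {
      try (PreparedStatement ps = conn.prepareStatement(
          "INSERT INTO props (param_key, param_value) VALUES (?, ?)")) {
        ps.setString(1, "serialization.null.format");
        ps.setString(2, "");          // store an empty string
        ps.executeUpdate();
      }
      try (PreparedStatement ps = conn.prepareStatement(
          "SELECT param_value FROM props WHERE param_key = ?")) {
        ps.setString(1, "serialization.null.format");
        try (ResultSet rs = ps.executeQuery()) {
          rs.next();
          // On Oracle the empty string comes back as NULL, which is what
          // confuses DataNucleus when a table property is ''.
          System.out.println(rs.getString(1) == null);
        }
      }
    }
  }
}
 {code}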



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Created branch 1.0

2015-01-21 Thread Vikram Dixit K
Hi Folks,

I have created branch 1.0 as discussed earlier. All the jiras that have
0.14 as the fix version should be committed to 1.0 branch instead. The list
of jiras that are being tracked for 1.0 are as follows:

HIVE-8485
HIVE-9053
HIVE-8996.

Please let me know if you want to include more jiras here. I am working on
generating javadocs for this. I hope to have an RC out once these jiras get
in.

Regards
Vikram.

On Tue, Jan 20, 2015 at 1:00 PM, Vaibhav Gumashta vgumas...@hortonworks.com
 wrote:

 Hi Vikram,

 I'd like to get this in: HIVE-8890
 https://issues.apache.org/jira/browse/HIVE-8890 [HiveServer2 dynamic
 service discovery: use persistent ephemeral nodes curator recipe].

 Thanks,
 --Vaibhav

 On Mon, Jan 19, 2015 at 9:29 PM, Alan Gates ga...@hortonworks.com wrote:

  I'd really like to get HIVE-8966 in there, since it breaks streaming
  ingest.  The patch is ready to go, it's just waiting on a review, which
  Owen has promised to do soon.
 
  Alan.
 
Vikram Dixit K vikram.di...@gmail.com
   January 19, 2015 at 18:53
  Hi All,
 
  I am going to be creating the branch 1.0 as mentioned earlier, tomorrow.
 I
  have the following list of jiras that I want to get committed to the
 branch
  before creating an RC.
 
  HIVE-9112
  HIVE-6997 : Delete hive server 1
  HIVE-8485
  HIVE-9053
 
  Please let me know if you would like me to include any other jiras.
 
  Thanks
  Vikram.
 
 
  On Fri, Jan 16, 2015 at 1:35 PM, Vikram Dixit K vikram.di...@gmail.com
 
 
 
Thejas Nair the...@hortonworks.com
   January 1, 2015 at 10:23
  Yes, 1.0 is a good opportunity to remove some of the deprecated
  components. The change to remove HiveServer1 is already there in trunk
  , we should include that.
  We can also use 1.0 release to clarify the public vs private status of
  some of the APIs.
 
  Thanks for the reminder about the documentation status of 1.0. I will
  look at some of them.
 
 
  On Wed, Dec 31, 2014 at 12:12 AM, Lefty Leverenz
 
Lefty Leverenz leftylever...@gmail.com
   December 31, 2014 at 0:12
  Oh, now I get it. The 1.0.0 *branch* of Hive. Okay.
 
  -- Lefty
 
  On Tue, Dec 30, 2014 at 11:43 PM, Lefty Leverenz 
 leftylever...@gmail.com
 
Lefty Leverenz leftylever...@gmail.com
   December 30, 2014 at 23:43
  I thought x.x.# releases were just for fixups, x.#.x could include new
  features, and #.x.x were major releases that might have some
  backward-incompatible changes. But I guess we haven't agreed on that.
 
  As for documentation, we still have 84 jiras with TODOC14 labels
  (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14).
  Not to mention 25 TODOC13 labels
  (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13),
  eleven TODOC12
  (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12),
  seven TODOC11
  (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11),
  and seven TODOC10
  (https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10).
 
  That's 134 doc tasks to finish for a Hive 1.0.0 release -- preferably by
  the release date, not after. Because expectations are higher for 1.0.0
  releases.
 
 
  -- Lefty
 
  On Tue, Dec 30, 2014 at 5:23 PM, Vikram Dixit K vikram.di...@gmail.com
 
Vikram Dixit K vikram.di...@gmail.com
   December 30, 2014 at 17:23
  Hi Folks,
 
  Given that there have been a number of fixes that have gone into branch
  0.14 in the past 8 weeks, I would like to make a release of 0.14.1 soon.
 I
  would like to fix some of the release issues as well this time around. I
 am
  thinking of some time around 15th January for getting a RC out. Please
 let
  me know if you have any concerns. Also, from a previous thread, I would
  like to make this release the 1.0 branch of hive. The process for getting
  jiras into this release is going to be the same as the previous one viz.:
 
  1. Mark the jira with fix version 0.14.1 and update the status to
  blocker/critical.
  2. If a committer +1s the patch for 0.14.1, it is good to go in. Please
  mention me in the jira in case you are not sure if the jira should make
 it
  for 0.14.1.
 
  Thanks
  Vikram.
 
 

[jira] [Commented] (HIVE-9289) TODO : Store user name in session [Spark Branch]

2015-01-21 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286176#comment-14286176
 ] 

Xuefu Zhang commented on HIVE-9289:
---

+1 pending on test.

 TODO : Store user name in session [Spark Branch]
 

 Key: HIVE-9289
 URL: https://issues.apache.org/jira/browse/HIVE-9289
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-9289.1-spark.patch, HIVE-9289.2-spark.patch


 TODO  : this we need to store the session username somewhere else as 
 getUGIForConf never used the conf SparkSessionManagerImpl.java 
 /hive-exec/src/java/org/apache/hadoop/hive/ql/exec/spark/session line 145 
 Java Task



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9253) MetaStore server should support timeout for long running requests

2015-01-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286233#comment-14286233
 ] 

Hive QA commented on HIVE-9253:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693662/HIVE-9253.2.patch

{color:red}ERROR:{color} -1 due to 160 failed/errored test(s), 7346 tests 
executed
*Failed tests:*
{noformat}
TestCustomAuthentication - did not produce a TEST-*.xml file
TestPigHBaseStorageHandler - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_protect_mode
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_or_replace_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_database_drop
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dbtxnmgr_ddl1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_index_removes_partition_dirs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_multi_partitions
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_partition_with_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_partitions_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_partitions_filter2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_partitions_filter3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_drop_partitions_ignore_protection
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_escape1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_11_managed_external
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby7_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_neg_float
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auth
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_unused
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_bitmap
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_bitmap_auto_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_bitmap_rc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_compact
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_compact_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_stale_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_testxpath2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_inputddl6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join18
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_empty
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_map_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_reorder2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_reorder4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lateral_view_cp
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lineage1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_file_with_space_in_the_name
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_nonpart_authsuccess
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lock4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapreduce4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapreduce5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multi_insert_move_tasks_share_dependencies
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_newline
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_noalias_subq1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nomore_ambiguous_table_col
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nullgroup5

[jira] [Updated] (HIVE-9103) Support backup task for join related optimization [Spark Branch]

2015-01-21 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9103:
--
Priority: Blocker  (was: Major)

 Support backup task for join related optimization [Spark Branch]
 

 Key: HIVE-9103
 URL: https://issues.apache.org/jira/browse/HIVE-9103
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chao
Priority: Blocker

 In MR, backup task can be executed if the original task, which probably 
 contains certain (join) optimization fails. This JIRA is to track this topic 
 for Spark. We need to determine if we need this and implement if necessary.
 This is a followup of HIVE-9099.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9436) RetryingMetaStoreClient does not retry JDOExceptions

2015-01-21 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-9436:
---
Affects Version/s: 0.13.1

 RetryingMetaStoreClient does not retry JDOExceptions
 

 Key: HIVE-9436
 URL: https://issues.apache.org/jira/browse/HIVE-9436
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 0.13.1
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-9436.patch


 RetryingMetaStoreClient has a bug in the following bit of code:
 {code}
 } else if ((e.getCause() instanceof MetaException) &&
 e.getCause().getMessage().matches("JDO[a-zA-Z]*Exception")) {
   caughtException = (MetaException) e.getCause();
 } else {
   throw e.getCause();
 }
 {code}
 The bug here is that java String.matches matches the entire string to the 
 regex, and thus, that match will fail if the message contains anything before 
 or after JDO[a-zA-Z]\*Exception. The solution, however, is very simple, we 
 should match .\*JDO[a-zA-Z]\*Exception.\*
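 A quick standalone demonstration of the String.matches behaviour described above 
 (the sample message is an arbitrary example, not an actual metastore error):
 {code}
public class MatchesDemo {
  public static void main(String[] args) {
    String msg = "javax.jdo.JDODataStoreException: lock wait timeout"; // arbitrary example
    // String.matches anchors the pattern to the whole string, so the bare
    // pattern fails as soon as there is any surrounding text:
    System.out.println(msg.matches("JDO[a-zA-Z]*Exception"));      // false
    // Wrapping it in .* on both sides, as suggested, matches:
    System.out.println(msg.matches(".*JDO[a-zA-Z]*Exception.*"));  // true
  }
}
 {code}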



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9436) RetryingMetaStoreClient does not retry JDOExceptions

2015-01-21 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-9436:
---
Affects Version/s: (was: 0.13.1)
   (was: 0.12.0)
   (was: 0.11.0)
   (was: 0.10.0)

 RetryingMetaStoreClient does not retry JDOExceptions
 

 Key: HIVE-9436
 URL: https://issues.apache.org/jira/browse/HIVE-9436
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-9436.patch


 RetryingMetaStoreClient has a bug in the following bit of code:
 {code}
 } else if ((e.getCause() instanceof MetaException) &&
 e.getCause().getMessage().matches("JDO[a-zA-Z]*Exception")) {
   caughtException = (MetaException) e.getCause();
 } else {
   throw e.getCause();
 }
 {code}
 The bug here is that java String.matches matches the entire string to the 
 regex, and thus, that match will fail if the message contains anything before 
 or after JDO[a-zA-Z]\*Exception. The solution, however, is very simple, we 
 should match .\*JDO[a-zA-Z]\*Exception.\*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9436) RetryingMetaStoreClient does not retry JDOExceptions

2015-01-21 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-9436:
---
Attachment: HIVE-9436.patch

Attaching patch. [~thejas], could you please review?

 RetryingMetaStoreClient does not retry JDOExceptions
 

 Key: HIVE-9436
 URL: https://issues.apache.org/jira/browse/HIVE-9436
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-9436.patch


 RetryingMetaStoreClient has a bug in the following bit of code:
 {code}
 } else if ((e.getCause() instanceof MetaException) &&
 e.getCause().getMessage().matches("JDO[a-zA-Z]*Exception")) {
   caughtException = (MetaException) e.getCause();
 } else {
   throw e.getCause();
 }
 {code}
 The bug here is that java String.matches matches the entire string to the 
 regex, and thus, that match will fail if the message contains anything before 
 or after JDO[a-zA-Z]\*Exception. The solution, however, is very simple, we 
 should match .\*JDO[a-zA-Z]\*Exception.\*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9436) RetryingMetaStoreClient does not retry JDOExceptions

2015-01-21 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-9436:
---
Affects Version/s: 0.10.0
   0.11.0
   0.12.0
   0.14.0
   0.13.1
   Status: Patch Available  (was: Open)

 RetryingMetaStoreClient does not retry JDOExceptions
 

 Key: HIVE-9436
 URL: https://issues.apache.org/jira/browse/HIVE-9436
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1, 0.14.0, 0.12.0, 0.11.0, 0.10.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-9436.patch


 RetryingMetaStoreClient has a bug in the following bit of code:
 {code}
 } else if ((e.getCause() instanceof MetaException) &&
 e.getCause().getMessage().matches("JDO[a-zA-Z]*Exception")) {
   caughtException = (MetaException) e.getCause();
 } else {
   throw e.getCause();
 }
 {code}
 The bug here is that java String.matches matches the entire string to the 
 regex, and thus, that match will fail if the message contains anything before 
 or after JDO[a-zA-Z]\*Exception. The solution, however, is very simple, we 
 should match .\*JDO[a-zA-Z]\*Exception.\*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9310) CLI JLine does not flush history back to ~/.hivehistory

2015-01-21 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286424#comment-14286424
 ] 

Prasanth Jayachandran commented on HIVE-9310:
-

[~gopalv] The flushing of the history does not seem to work; history.flush() is 
not getting called. Can we flush the history before processing each command, or 
in some other place that is reliable? 

 CLI JLine does not flush history back to ~/.hivehistory
 ---

 Key: HIVE-9310
 URL: https://issues.apache.org/jira/browse/HIVE-9310
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.15.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Minor
 Fix For: 0.15.0

 Attachments: HIVE-9310.1.patch


 Hive CLI does not seem to be saving history anymore.
 In JLine with the PersistentHistory class, to keep history across sessions, 
 you need to do {{reader.getHistory().flush()}}.
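 A minimal jline2 sketch of that idea (assuming jline 2.x is on the classpath; it 
 flushes after every command as well as at exit, as the comment above suggests; 
 the prompt and loop body are placeholders):
 {code}
import java.io.File;
import java.io.IOException;
import jline.console.ConsoleReader;
import jline.console.history.FileHistory;

public class HistoryFlushSketch {
  public static void main(String[] args) throws IOException {
    ConsoleReader reader = new ConsoleReader();
    final FileHistory history = new FileHistory(
        new File(System.getProperty("user.home"), ".hivehistory"));
    reader.setHistory(history);

    // Also flush on shutdown so a normal exit persists everything.
    Runtime.getRuntime().addShutdownHook(new Thread() {
      @Override
      public void run() {
        try {
          history.flush();
        } catch (IOException ignored) {
        }
      }
    });

    String line;
    while ((line = reader.readLine("sketch> ")) != null) {
      history.flush(); // persist after each command
      // ... process the command here ...
    }
  }
}
 {code}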



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9435) Fix auto_join21.q for Tez

2015-01-21 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286180#comment-14286180
 ] 

Szehon Ho commented on HIVE-9435:
-

+1

 Fix auto_join21.q for Tez
 -

 Key: HIVE-9435
 URL: https://issues.apache.org/jira/browse/HIVE-9435
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: HIVE-9435.1.patch


 Somehow, the golden file is updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9436) RetryingMetaStoreClient does not retry JDOExceptions

2015-01-21 Thread Sushanth Sowmyan (JIRA)
Sushanth Sowmyan created HIVE-9436:
--

 Summary: RetryingMetaStoreClient does not retry JDOExceptions
 Key: HIVE-9436
 URL: https://issues.apache.org/jira/browse/HIVE-9436
 Project: Hive
  Issue Type: Bug
Reporter: Sushanth Sowmyan


RetryingMetaStoreClient has a bug in the following bit of code:

{code}
} else if ((e.getCause() instanceof MetaException) &&
e.getCause().getMessage().matches("JDO[a-zA-Z]*Exception")) {
  caughtException = (MetaException) e.getCause();
} else {
  throw e.getCause();
}
{code}


The bug here is that java String.matches matches the entire string to the 
regex, and thus, that match will fail if the message contains anything before 
or after JDO[a-zA-Z]\*Exception. The solution, however, is very simple: we 
should match .\*JDO[a-zA-Z]\*Exception.\*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-9436) RetryingMetaStoreClient does not retry JDOExceptions

2015-01-21 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan reassigned HIVE-9436:
--

Assignee: Sushanth Sowmyan

 RetryingMetaStoreClient does not retry JDOExceptions
 

 Key: HIVE-9436
 URL: https://issues.apache.org/jira/browse/HIVE-9436
 Project: Hive
  Issue Type: Bug
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan

 RetryingMetaStoreClient has a bug in the following bit of code:
 {code}
 } else if ((e.getCause() instanceof MetaException) &&
 e.getCause().getMessage().matches("JDO[a-zA-Z]*Exception")) {
   caughtException = (MetaException) e.getCause();
 } else {
   throw e.getCause();
 }
 {code}
 The bug here is that java String.matches matches the entire string to the 
 regex, and thus, that match will fail if the message contains anything before 
 or after JDO[a-zA-Z]\*Exception. The solution, however, is very simple: we 
 should match .\*JDO[a-zA-Z]\*Exception.\*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9053) select constant in union all followed by group by gives wrong result

2015-01-21 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286369#comment-14286369
 ] 

Pengcheng Xiong commented on HIVE-9053:
---

[~vikram.dixit], as per your request, the patch is attached. Please let me know 
if there is any problem. Thanks. 

 select constant in union all followed by group by gives wrong result
 

 Key: HIVE-9053
 URL: https://issues.apache.org/jira/browse/HIVE-9053
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0, 0.14.0
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Fix For: 0.15.0

 Attachments: HIVE-9053.01.patch, HIVE-9053.02.patch, 
 HIVE-9053.03.patch, HIVE-9053.04.patch, HIVE-9053.patch-branch-1.0


 Here is the way to reproduce with a q test:
 select key from (select '1' as key from src union all select key from src)tab 
 group by key;
 will give
 OK
 NULL
 1
 This is not correct as src contains many other keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9053) select constant in union all followed by group by gives wrong result

2015-01-21 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286257#comment-14286257
 ] 

Vikram Dixit K commented on HIVE-9053:
--

[~pxiong] Can you create a patch based on branch 1.0 instead of branch 0.14?

 select constant in union all followed by group by gives wrong result
 

 Key: HIVE-9053
 URL: https://issues.apache.org/jira/browse/HIVE-9053
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0, 0.14.0
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Fix For: 0.15.0

 Attachments: HIVE-9053.01.patch, HIVE-9053.02.patch, 
 HIVE-9053.03.patch, HIVE-9053.04.patch


 Here is the way to reproduce with a q test:
 select key from (select '1' as key from src union all select key from src)tab 
 group by key;
 will give
 OK
 NULL
 1
 This is not correct as src contains many other keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9437) Beeline does not add any existing HADOOP_CLASSPATH

2015-01-21 Thread Ashish Kumar Singh (JIRA)
Ashish Kumar Singh created HIVE-9437:


 Summary: Beeline does not add any existing HADOOP_CLASSPATH
 Key: HIVE-9437
 URL: https://issues.apache.org/jira/browse/HIVE-9437
 Project: Hive
  Issue Type: Bug
Reporter: Ashish Kumar Singh
Priority: Blocker
 Fix For: 0.15.0


Beeline does not add any existing HADOOP_CLASSPATH in the environment to 
HADOOP_CLASSPATH here: 
https://github.com/apache/hive/blob/trunk/bin/ext/beeline.sh#L28



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-3798) Can't escape reserved keywords used as table names

2015-01-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3798:
---
Component/s: Parser

 Can't escape reserved keywords used as table names
 --

 Key: HIVE-3798
 URL: https://issues.apache.org/jira/browse/HIVE-3798
 Project: Hive
  Issue Type: Bug
  Components: Parser
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.14.0


 {noformat}hive (some_table)> show tables;
 OK
 ...
 comment
 ...
 Time taken: 0.076 seconds
 hive (some_table)> describe comment;
 FAILED: Parse Error: line 1:0 cannot recognize input near 'describe' 
 'comment' '<EOF>' in describe statement
 hive (some_table)> describe `comment`; 
 OK
 Table `comment` does not exist 
 Time taken: 0.042 seconds
 {noformat}
 Describe should honor character escaping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-3798) Can't escape reserved keywords used as table names

2015-01-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-3798.

   Resolution: Fixed
Fix Version/s: 0.14.0

 Can't escape reserved keywords used as table names
 --

 Key: HIVE-3798
 URL: https://issues.apache.org/jira/browse/HIVE-3798
 Project: Hive
  Issue Type: Bug
  Components: Parser
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.14.0


 {noformat}hive (some_table)> show tables;
 OK
 ...
 comment
 ...
 Time taken: 0.076 seconds
 hive (some_table)> describe comment;
 FAILED: Parse Error: line 1:0 cannot recognize input near 'describe' 
 'comment' '<EOF>' in describe statement
 hive (some_table)> describe `comment`; 
 OK
 Table `comment` does not exist 
 Time taken: 0.042 seconds
 {noformat}
 Describe should honor character escaping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3798) Can't escape reserved keywords used as table names

2015-01-21 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286335#comment-14286335
 ] 

Jakob Homan commented on HIVE-3798:
---

I don't currently have a Hive 0.14 install to test this, but happy it's fixed. 
 Thanks.

 Can't escape reserved keywords used as table names
 --

 Key: HIVE-3798
 URL: https://issues.apache.org/jira/browse/HIVE-3798
 Project: Hive
  Issue Type: Bug
  Components: Parser
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.14.0


 {noformat}hive (some_table)> show tables;
 OK
 ...
 comment
 ...
 Time taken: 0.076 seconds
 hive (some_table)> describe comment;
 FAILED: Parse Error: line 1:0 cannot recognize input near 'describe' 
 'comment' '<EOF>' in describe statement
 hive (some_table)> describe `comment`; 
 OK
 Table `comment` does not exist 
 Time taken: 0.042 seconds
 {noformat}
 Describe should honor character escaping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6617) Reduce ambiguity in grammar

2015-01-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-6617:
--
Attachment: HIVE-6617.06.patch

 Reduce ambiguity in grammar
 ---

 Key: HIVE-6617
 URL: https://issues.apache.org/jira/browse/HIVE-6617
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Pengcheng Xiong
 Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
 HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, HIVE-6617.06.patch


 As of today, antlr reports 214 warnings. Need to bring down this number, 
 ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8485) HMS on Oracle incompatibility

2015-01-21 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286304#comment-14286304
 ] 

Chaoyu Tang commented on HIVE-8485:
---

[~sushanth] Thanks for taking care of this. The patch looks good, and I thought 
of exactly the same implementation as yours, except that I also considered 
enforcing hive.metastore.orm.retrieveMapNullsAsEmptyStrings to only take effect 
when the backend database is Oracle. I was also wondering whether there is a 
case where the user would really like to use null or a string of spaces 
(instead of the empty string) as the parameter value, and whether such cases 
are valid.

 HMS on Oracle incompatibility
 -

 Key: HIVE-8485
 URL: https://issues.apache.org/jira/browse/HIVE-8485
 Project: Hive
  Issue Type: Bug
  Components: Metastore
 Environment: Oracle as metastore DB
Reporter: Ryan Pridgeon
Assignee: Chaoyu Tang
 Attachments: HIVE-8485.2.patch, HIVE-8485.patch


 Oracle does not distinguish between empty strings and NULL, which proves 
 problematic for DataNucleus.
 In the event a user creates a table with some property stored as an empty 
 string, the table will no longer be accessible,
 e.g. TBLPROPERTIES ('serialization.null.format'='').
 If they try to select, describe, drop, etc., the client prints the following 
 exception:
 ERROR ql.Driver: FAILED: SemanticException [Error 10001]: Table not found 
 <table name>
 The workaround for this was to go into the hive metastore on the Oracle 
 database and replace NULL with some other string. Users could then drop the 
 tables or alter their data to use the new null format they just set.
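
A minimal sketch of the normalization idea suggested by the hive.metastore.orm.retrieveMapNullsAsEmptyStrings name discussed above; the helper class and method here are illustrative assumptions, not the actual patch:
{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative helper, not Hive's actual ObjectStore code.
public class ParamNulls {
  /**
   * Oracle stores '' as NULL, so table parameters written as empty strings
   * (e.g. 'serialization.null.format'='') come back as null. Mapping them
   * back to "" on read keeps such tables usable.
   */
  public static Map<String, String> nullsAsEmptyStrings(Map<String, String> params) {
    Map<String, String> out = new LinkedHashMap<String, String>();
    if (params != null) {
      for (Map.Entry<String, String> e : params.entrySet()) {
        out.put(e.getKey(), e.getValue() == null ? "" : e.getValue());
      }
    }
    return out;
  }
}
{code}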



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9409) Avoid ser/de loggers as logging framework can be incompatible on driver and workers

2015-01-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286406#comment-14286406
 ] 

Hive QA commented on HIVE-9409:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693660/HIVE-9409.1.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 7346 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2465/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2465/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2465/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693660 - PreCommit-HIVE-TRUNK-Build

 Avoid ser/de loggers as logging framework can be incompatible on driver and 
 workers
 ---

 Key: HIVE-9409
 URL: https://issues.apache.org/jira/browse/HIVE-9409
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
 Environment: CentOS6.5  
 Java version: 1.7.0_67
Reporter: Xin Hao
Assignee: Rui Li
 Attachments: HIVE-9409.1.patch, HIVE-9409.1.patch, HIVE-9409.1.patch


 When we use the current [Spark Branch] to build the hive package, deploy it on our 
 cluster and execute hive queries (e.g. BigBench case Q10, Q18, Q19, Q27) in 
 default mode (i.e. just Hive on MR, not HiveOnSpark), the error 
 'java.lang.ClassNotFoundException: 
 org.apache.commons.logging.impl.SLF4JLocationAwareLog' will occur.
 For other released apache or CDH hive versions (e.g. apache hive 0.14), this 
 issue does not occur.
 By the way, if we use 'add jar /location/to/jcl-over-slf4j-1.7.5.jar' before 
 hive query execution, the issue is worked around. 
 The detail diagnostic messages are as below:
 ==
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: Failed to load plan: 
 hdfs://bhx1:8020/tmp/hive/root/4a4cbeb2-cf42-4eb7-a78a-7ecea6af2aff/hive_2015-01-17_10-45-51_360_5581900288096206774-1/-mr-10004/1c6c4667-8b81-41ed-a42e-fe099ae3379f/map.xml:
  org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find 
 class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:431)
 at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:287)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
 at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:657)
 at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:169)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find cl
 Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to 
 find class: org.apache.commons.logging.impl.SLF4JLocationAwareLog
 Serialization trace:
 LOG (org.apache.hadoop.hive.ql.exec.UDTFOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork 
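
The trace above shows Kryo failing to resolve the operator's serialized LOG field on the worker side. A minimal sketch of the approach the issue title points at — keep logger instances out of the serialized operator tree — using an illustrative class rather than Hive's actual UDTFOperator:
{code}
import java.io.Serializable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative operator, not the real Hive class.
public class ExampleOperator implements Serializable {
  private static final long serialVersionUID = 1L;

  // 'static transient' keeps the logger out of the serialized plan, so each JVM
  // resolves its own commons-logging binding at class-load time instead of
  // deserializing the driver's Log implementation on the workers.
  private static final transient Log LOG = LogFactory.getLog(ExampleOperator.class);

  public void process(Object row) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("processing row " + row);
    }
  }
}
{code}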

[jira] [Updated] (HIVE-9439) merge ORC disk ranges as we go when reading RGs

2015-01-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9439:
---
Attachment: HIVE-9439.patch

Attaching the patch. We expect subsequent RGs to be close... not sure if it 
makes sense to populate and check lastRange between different streams; would 
they be expected to be close together?

 merge ORC disk ranges as we go when reading RGs
 ---

 Key: HIVE-9439
 URL: https://issues.apache.org/jira/browse/HIVE-9439
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor
 Attachments: HIVE-9439.patch


 Currently we get ranges for all the RGs individually, then merge them. We can 
 do some (probably most) of the merging as we go.
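
A minimal sketch of merge-as-you-go for ranges produced in increasing offset order; DiskRange and RangeBuilder are illustrative names assumed here, not the ORC reader's actual classes:
{code}
import java.util.ArrayList;
import java.util.List;

class DiskRange {
  long offset, end;                       // half-open [offset, end)
  DiskRange(long offset, long end) { this.offset = offset; this.end = end; }
}

class RangeBuilder {
  private final List<DiskRange> ranges = new ArrayList<DiskRange>();
  private DiskRange last;                 // last range appended, for cheap adjacency checks

  /** Append a range, extending the previous one in place when they touch or overlap. */
  void add(long offset, long end) {
    if (last != null && offset <= last.end) {
      last.end = Math.max(last.end, end); // merge instead of adding a new entry
    } else {
      last = new DiskRange(offset, end);
      ranges.add(last);
    }
  }

  List<DiskRange> build() { return ranges; }
}
{code}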



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Created branch 1.0

2015-01-21 Thread Eugene Koifman
could we include HIVE-9390 & HIVE-9404?  This has been committed to trunk.
They add useful retry logic to support insert/update/delete functionality.

On Wed, Jan 21, 2015 at 1:06 PM, Vikram Dixit K vikram.di...@gmail.com
wrote:

 Hi Folks,

 I have created branch 1.0 as discussed earlier. All the jiras that have
 0.14 as the fix version should be committed to 1.0 branch instead. The list
 of jiras that are being tracked for 1.0 are as follows:

 HIVE-8485
 HIVE-9053
 HIVE-8996.

 Please let me know if you want to include more jiras here. I am working on
 generating javadocs for this. I hope to have an RC out once these jiras get
 in.

 Regards
 Vikram.

 On Tue, Jan 20, 2015 at 1:00 PM, Vaibhav Gumashta 
 vgumas...@hortonworks.com
  wrote:

  Hi Vikram,
 
  I'd like to get this in: HIVE-8890
  https://issues.apache.org/jira/browse/HIVE-8890 [HiveServer2 dynamic
  service discovery: use persistent ephemeral nodes curator recipe].
 
  Thanks,
  --Vaibhav
 
  On Mon, Jan 19, 2015 at 9:29 PM, Alan Gates ga...@hortonworks.com
 wrote:
 
   I'd really like to get HIVE-8966 in there, since it breaks streaming
   ingest.  The patch is ready to go, it's just waiting on a review, which
   Owen has promised to do soon.
  
   Alan.
  
 Vikram Dixit K vikram.di...@gmail.com
January 19, 2015 at 18:53
   Hi All,
  
   I am going to be creating the branch 1.0 as mentioned earlier,
 tomorrow.
  I
   have the following list of jiras that I want to get committed to the
  branch
   before creating an RC.
  
   HIVE-9112
   HIVE-6997 : Delete hive server 1
   HIVE-8485
   HIVE-9053
  
   Please let me know if you would like me to include any other jiras.
  
   Thanks
   Vikram.
  
  
   On Fri, Jan 16, 2015 at 1:35 PM, Vikram Dixit K 
 vikram.di...@gmail.com
   vikram.di...@gmail.com
  
  
  
 Thejas Nair the...@hortonworks.com
January 1, 2015 at 10:23
   Yes, 1.0 is a good opportunity to remove some of the deprecated
   components. The change to remove HiveServer1 is already there in trunk
   , we should include that.
   We can also use 1.0 release to clarify the public vs private status of
   some of the APIs.
  
   Thanks for the reminder about the documentation status of 1.0. I will
   look at some of them.
  
  
   On Wed, Dec 31, 2014 at 12:12 AM, Lefty Leverenz
  
 Lefty Leverenz leftylever...@gmail.com
December 31, 2014 at 0:12
   Oh, now I get it. The 1.0.0 *branch* of Hive. Okay.
  
   -- Lefty
  
   On Tue, Dec 30, 2014 at 11:43 PM, Lefty Leverenz 
  leftylever...@gmail.com
   leftylever...@gmail.com
  
 Lefty Leverenz leftylever...@gmail.com
December 30, 2014 at 23:43
   I thought x.x.# releases were just for fixups, x.#.x could include new
   features, and #.x.x were major releases that might have some
   backward-incompatible changes. But I guess we haven't agreed on that.
  
   As for documentation, we still have 84 jiras with TODOC14 labels
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
  
   .
   Not to mention 25 TODOC13 labels
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
  
   ,
   eleven TODOC12
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12
  
   ,
   seven TODOC11
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11
  
   ,
   and seven TODOC10
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10
  
   .
  
   That's 134 doc tasks to finish for a Hive 1.0.0 release -- preferably
 by
   the release date, not after. Because expectations are higher for 1.0.0
   releases.
  
  
   -- Lefty
  
   On Tue, Dec 30, 2014 at 5:23 PM, Vikram Dixit K 
 vikram.di...@gmail.com
   vikram.di...@gmail.com
  
 Vikram Dixit K vikram.di...@gmail.com
December 30, 2014 at 17:23
   Hi Folks,
  
   Given that there have been a number of fixes that have gone into branch
   0.14 in the past 8 weeks, I would like to make a release of 0.14.1
 soon.
  I
   would like to fix some of the release issues as well this time around.
 I
  am
   thinking of some time around 15th January for getting a RC out. Please
  let
   me know if you have any concerns. Also, from a previous thread, I would
   like to make this release the 1.0 branch of hive. The process for
 getting
   jiras into this release is going to be 

Re: Created branch 1.0

2015-01-21 Thread Lefty Leverenz
So my initial impression was correct -- instead of calling it release
0.14.1, we're calling it 1.0.0.  Or am I hopelessly confused?

Will 0.15.0 be 1.1.0?  (If so, I'll need to edit a dozen wikidocs.)

Will release numbers get changed in JIRA issues?  Presumably that's not
possible in old comments, so we should document the equivalences
somewhere.  A JIRA issue for that with a well-phrased summary could help
future searchers.


-- Lefty

On Wed, Jan 21, 2015 at 2:47 PM, Eugene Koifman ekoif...@hortonworks.com
wrote:

 could we include HIVE-9390 & HIVE-9404?  This has been committed to trunk.
 They add useful retry logic to support insert/update/delete functionality.

 On Wed, Jan 21, 2015 at 1:06 PM, Vikram Dixit K vikram.di...@gmail.com
 wrote:

  Hi Folks,
 
  I have created branch 1.0 as discussed earlier. All the jiras that have
  0.14 as the fix version should be committed to 1.0 branch instead. The
 list
  of jiras that are being tracked for 1.0 are as follows:
 
  HIVE-8485
  HIVE-9053
  HIVE-8996.
 
  Please let me know if you want to include more jiras here. I am working
 on
  generating javadocs for this. I hope to have an RC out once these jiras
 get
  in.
 
  Regards
  Vikram.
 
  On Tue, Jan 20, 2015 at 1:00 PM, Vaibhav Gumashta 
  vgumas...@hortonworks.com
   wrote:
 
   Hi Vikram,
  
   I'd like to get this in: HIVE-8890
   https://issues.apache.org/jira/browse/HIVE-8890 [HiveServer2 dynamic
   service discovery: use persistent ephemeral nodes curator recipe].
  
   Thanks,
   --Vaibhav
  
   On Mon, Jan 19, 2015 at 9:29 PM, Alan Gates ga...@hortonworks.com
  wrote:
  
I'd really like to get HIVE-8966 in there, since it breaks streaming
ingest.  The patch is ready to go, it's just waiting on a review,
 which
Owen has promised to do soon.
   
Alan.
   
  Vikram Dixit K vikram.di...@gmail.com
 January 19, 2015 at 18:53
Hi All,
   
I am going to be creating the branch 1.0 as mentioned earlier,
  tomorrow.
   I
have the following list of jiras that I want to get committed to the
   branch
before creating an RC.
   
HIVE-9112
HIVE-6997 : Delete hive server 1
HIVE-8485
HIVE-9053
   
Please let me know if you would like me to include any other jiras.
   
Thanks
Vikram.
   
   
On Fri, Jan 16, 2015 at 1:35 PM, Vikram Dixit K 
  vikram.di...@gmail.com
vikram.di...@gmail.com
   
   
   
  Thejas Nair the...@hortonworks.com
 January 1, 2015 at 10:23
Yes, 1.0 is a good opportunity to remove some of the deprecated
components. The change to remove HiveServer1 is already there in
 trunk
, we should include that.
We can also use 1.0 release to clarify the public vs private status
 of
some of the APIs.
   
Thanks for the reminder about the documentation status of 1.0. I will
look at some of them.
   
   
On Wed, Dec 31, 2014 at 12:12 AM, Lefty Leverenz
   
  Lefty Leverenz leftylever...@gmail.com
 December 31, 2014 at 0:12
Oh, now I get it. The 1.0.0 *branch* of Hive. Okay.
   
-- Lefty
   
On Tue, Dec 30, 2014 at 11:43 PM, Lefty Leverenz 
   leftylever...@gmail.com
leftylever...@gmail.com
   
  Lefty Leverenz leftylever...@gmail.com
 December 30, 2014 at 23:43
I thought x.x.# releases were just for fixups, x.#.x could include
 new
features, and #.x.x were major releases that might have some
backward-incompatible changes. But I guess we haven't agreed on that.
   
As for documentation, we still have 84 jiras with TODOC14 labels
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
   
.
Not to mention 25 TODOC13 labels
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
   
,
eleven TODOC12
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12
   
,
seven TODOC11
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11
   
,
and seven TODOC10
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10
   
.
   
That's 134 doc tasks to finish for a Hive 1.0.0 release -- preferably
  by
the release date, not after. Because expectations are higher for
 1.0.0
releases.
   
   
-- Lefty
   

Re: adding public domain Java files to Hive source

2015-01-21 Thread Sergey Shelukhin
Because there's no binary library as far as I can tell (see original
message)

On Wed, Jan 21, 2015 at 3:18 PM, Nick Dimiduk ndimi...@gmail.com wrote:

 I guess the PMC should be responsive to this kind of question.

 Can you not depend on a library containing these files? Why include the
 source directly?

 On Wed, Jan 21, 2015 at 3:16 PM, Sergey Shelukhin ser...@hortonworks.com
 wrote:

  Ping? Where do I write about such matters if not here.
 
  On Wed, Jan 14, 2015 at 11:43 AM, Sergey Shelukhin 
 ser...@hortonworks.com
  
  wrote:
 
   Suppose I want to use a Java source within Hive that has this header (I
   don't now, but I was considering it and may want it later ;)):
  
   /*
* Written by Doug Lea with assistance from members of JCP JSR-166
* Expert Group and released to the public domain, as explained at
* http://creativecommons.org/licenses/publicdomain
*/
  
   As far as I see the class is not available in binary distribution, and
   there are projects on github that use it as is and add their license on
  top.
   Can I add it to Apache (Hive) codebase?
   Should Apache license header be added? Should the original header be
   retained?
  
  
  
 


Re: Created branch 1.0

2015-01-21 Thread Brock Noland
Hi,

I should have just waited to check where this branch came from, but I
was quite surprised when I received this mail so I too quickly fired
off a response. I apologize to everyone for the spam. The reason I was
surprised is that I don't feel there was a consensus coming out of the
1.0 discussion: http://s.apache.org/hive-1.0-discuss

As mentioned in that thread, I am not in favor of creating 1.0 from 0.14.
That will be horribly confusing. To avoid confusion amongst our
users, and indeed our developers, the 1.0 branch should be created
from trunk and be a superset of the 0.15 release (minus anything we
delete due to deprecation).

Additionally, as Bill expressed on that thread, I see defining our
public API as a big aspect of moving to a 1.0.

Cheers.
Brock

On Wed, Jan 21, 2015 at 3:28 PM, Lefty Leverenz leftylever...@gmail.com wrote:
 So my initial impression was correct -- instead of calling it release
 0.14.1, we're calling it 1.0.0.  Or am I hopelessly confused?

 Will 0.15.0 be 1.1.0?  (If so, I'll need to edit a dozen wikidocs.)

 Will release numbers get changed in JIRA issues?  Presumably that's not
 possible in old comments, so we should document the equivalences
 somewhere.  A JIRA issue for that with a well-phrased summary could help
 future searchers.


 -- Lefty

 On Wed, Jan 21, 2015 at 2:47 PM, Eugene Koifman ekoif...@hortonworks.com
 wrote:

 could we include HIVE-9390 & HIVE-9404?  This has been committed to trunk.
 They add useful retry logic to support insert/update/delete functionality.

 On Wed, Jan 21, 2015 at 1:06 PM, Vikram Dixit K vikram.di...@gmail.com
 wrote:

  Hi Folks,
 
  I have created branch 1.0 as discussed earlier. All the jiras that have
  0.14 as the fix version should be committed to 1.0 branch instead. The
 list
  of jiras that are being tracked for 1.0 are as follows:
 
  HIVE-8485
  HIVE-9053
  HIVE-8996.
 
  Please let me know if you want to include more jiras here. I am working
 on
  generating javadocs for this. I hope to have an RC out once these jiras
 get
  in.
 
  Regards
  Vikram.
 
  On Tue, Jan 20, 2015 at 1:00 PM, Vaibhav Gumashta 
  vgumas...@hortonworks.com
   wrote:
 
   Hi Vikram,
  
   I'd like to get this in: HIVE-8890
   https://issues.apache.org/jira/browse/HIVE-8890 [HiveServer2 dynamic
   service discovery: use persistent ephemeral nodes curator recipe].
  
   Thanks,
   --Vaibhav
  
   On Mon, Jan 19, 2015 at 9:29 PM, Alan Gates ga...@hortonworks.com
  wrote:
  
I'd really like to get HIVE-8966 in there, since it breaks streaming
ingest.  The patch is ready to go, it's just waiting on a review,
 which
Owen has promised to do soon.
   
Alan.
   
  Vikram Dixit K vikram.di...@gmail.com
 January 19, 2015 at 18:53
Hi All,
   
I am going to be creating the branch 1.0 as mentioned earlier,
  tomorrow.
   I
have the following list of jiras that I want to get committed to the
   branch
before creating an RC.
   
HIVE-9112
HIVE-6997 : Delete hive server 1
HIVE-8485
HIVE-9053
   
Please let me know if you would like me to include any other jiras.
   
Thanks
Vikram.
   
   
On Fri, Jan 16, 2015 at 1:35 PM, Vikram Dixit K 
  vikram.di...@gmail.com
vikram.di...@gmail.com
   
   
   
  Thejas Nair the...@hortonworks.com
 January 1, 2015 at 10:23
Yes, 1.0 is a good opportunity to remove some of the deprecated
components. The change to remove HiveServer1 is already there in
 trunk
, we should include that.
We can also use 1.0 release to clarify the public vs private status
 of
some of the APIs.
   
Thanks for the reminder about the documentation status of 1.0. I will
look at some of them.
   
   
On Wed, Dec 31, 2014 at 12:12 AM, Lefty Leverenz
   
  Lefty Leverenz leftylever...@gmail.com
 December 31, 2014 at 0:12
Oh, now I get it. The 1.0.0 *branch* of Hive. Okay.
   
-- Lefty
   
On Tue, Dec 30, 2014 at 11:43 PM, Lefty Leverenz 
   leftylever...@gmail.com
leftylever...@gmail.com
   
  Lefty Leverenz leftylever...@gmail.com
 December 30, 2014 at 23:43
I thought x.x.# releases were just for fixups, x.#.x could include
 new
features, and #.x.x were major releases that might have some
backward-incompatible changes. But I guess we haven't agreed on that.
   
As for documentation, we still have 84 jiras with TODOC14 labels
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
   
.
Not to mention 25 TODOC13 labels
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
   
,
eleven TODOC12
   

  
 
 

[jira] [Created] (HIVE-9440) Folders may not be pruned for Hadoop 2

2015-01-21 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HIVE-9440:
-

 Summary: Folders may not be pruned for Hadoop 2
 Key: HIVE-9440
 URL: https://issues.apache.org/jira/browse/HIVE-9440
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang


HIVE-9367 is not a complete fix: it fixed the issue for Hadoop 1. For Hadoop 2, this 
method is not invoked.
{noformat}
protected FileStatus[] listStatus(JobConf job) throws IOException;
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Undeliverable mail: Re: adding public domain Java files to Hive source

2015-01-21 Thread Ashutosh Chauhan
Done. I have removed the two offending ids from the list.

On Wed, Jan 21, 2015 at 3:22 PM, Nick Dimiduk ndimi...@gmail.com wrote:

 Seriously, these guys are still spamming this list? Why hasn't the dev-list
 admin booted these receivers yet? It's been *months*.

 On Wed, Jan 21, 2015 at 3:19 PM, mailer-dae...@mail.mailbrush.com wrote:

  Failed to deliver to 'bsc...@ebuddy.com'
  SMTP module(domain mail-in.ebuddy.com:25) reports:
   host mail-in.ebuddy.com:25 says:
   550 5.1.1 User unknown
 
 
  Original-Recipient: rfc822;bsc...@ebuddy.com
  Final-Recipient: rfc822;bsc...@ebuddy.com
  Action: failed
  Status: 5.0.0
 
 



Fwd: [jira] [Commented] (HCATALOG-541) The meta store client throws TimeOut exception if ~1000 clients are trying to call listPartition on the server

2015-01-21 Thread Lefty Leverenz
Now that HCatalog is part of the Hive project, messages about HCATALOG-###
issues should go to dev@hive.apache.org.

-- Lefty

-- Forwarded message --
From: Manish Malhotra (JIRA) j...@apache.org
Date: Wed, Jan 21, 2015 at 9:27 AM
Subject: [jira] [Commented] (HCATALOG-541) The meta store client throws
TimeOut exception if ~1000 clients are trying to call listPartition on the
server
To: hcatalog-...@incubator.apache.org



[
https://issues.apache.org/jira/browse/HCATALOG-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14285924#comment-14285924
]

Manish Malhotra commented on HCATALOG-541:
--

Hi Travis and Arup,

I'm also facing a similar problem while using the Hive Thrift server, but
without HCatalog. However, I didn't see an OOM error in the thrift server logs.

The pattern is that when the load on the Hive thrift server is high (mostly
when most of the Hive ETL jobs are running), it sometimes starts getting into
a mode where it doesn't respond in time and throws a socket timeout.

And this happens for different operations, not only for list partitions.

Please update this ticket if there is any progress; that might help my
situation as well.

Regards,
Manish

Stack Trace:

 at
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at
org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_database(ThriftHiveMetastore.java:412)
at
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_database(ThriftHiveMetastore.java:399)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:736)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
at $Proxy7.getDatabase(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1110)
at
org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1099)
at org.apache.hadoop.hive.ql.exec.DDLTask.showTables(DDLTask.java:2206)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:334)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:138)
at
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1336)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1122)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:935)
at
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:347)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:706)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 34 more
2015-01-20 22:44:12,978 ERROR exec.Task (SessionState.java:printError(401))
- FAILED: Error in metadata:
org.apache.thrift.transport.TTransportException:
java.net.SocketTimeoutException: Read timed out
org.apache.hadoop.hive.ql.metadata.HiveException:
org.apache.thrift.transport.TTransportException:
java.net.SocketTimeoutException: Read timed out
at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1114)
at
org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1099)
at 

[jira] [Commented] (HIVE-9440) Folders may not be pruned for Hadoop 2

2015-01-21 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286622#comment-14286622
 ] 

Xuefu Zhang commented on HIVE-9440:
---

+1 pending on tests.

 Folders may not be pruned for Hadoop 2
 --

 Key: HIVE-9440
 URL: https://issues.apache.org/jira/browse/HIVE-9440
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: HIVE-9440.1.patch


 HIVE-9367 is not a complete fix: it fixed the issue for Hadoop 1. For Hadoop 2, this 
 method is not invoked.
 {noformat}
 protected FileStatus[] listStatus(JobConf job) throws IOException;
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Created branch 1.0

2015-01-21 Thread Thejas Nair
Hi Lefty,
Yes, you are right. Anything that is not fixed in 0.14 and is fixed in
1.0 would have 1.0 as the fixed version.
Yes, 0.15.0 would then become 1.1.0.

Yes, it is a good idea to document this translation somewhere.


On Wed, Jan 21, 2015 at 3:28 PM, Lefty Leverenz leftylever...@gmail.com wrote:
 So my initial impression was correct -- instead of calling it release
 0.14.1, we're calling it 1.0.0.  Or am I hopelessly confused?

 Will 0.15.0 be 1.1.0?  (If so, I'll need to edit a dozen wikidocs.)

 Will release numbers get changed in JIRA issues?  Presumably that's not
 possible in old comments, so we should document the equivalences
 somewhere.  A JIRA issue for that with a well-phrased summary could help
 future searchers.


 -- Lefty

 On Wed, Jan 21, 2015 at 2:47 PM, Eugene Koifman ekoif...@hortonworks.com
 wrote:

 could we include HIVE-9390 & HIVE-9404?  This has been committed to trunk.
 They add useful retry logic to support insert/update/delete functionality.

 On Wed, Jan 21, 2015 at 1:06 PM, Vikram Dixit K vikram.di...@gmail.com
 wrote:

  Hi Folks,
 
  I have created branch 1.0 as discussed earlier. All the jiras that have
  0.14 as the fix version should be committed to 1.0 branch instead. The
 list
  of jiras that are being tracked for 1.0 are as follows:
 
  HIVE-8485
  HIVE-9053
  HIVE-8996.
 
  Please let me know if you want to include more jiras here. I am working
 on
  generating javadocs for this. I hope to have an RC out once these jiras
 get
  in.
 
  Regards
  Vikram.
 
  On Tue, Jan 20, 2015 at 1:00 PM, Vaibhav Gumashta 
  vgumas...@hortonworks.com
   wrote:
 
   Hi Vikram,
  
   I'd like to get this in: HIVE-8890
   https://issues.apache.org/jira/browse/HIVE-8890 [HiveServer2 dynamic
   service discovery: use persistent ephemeral nodes curator recipe].
  
   Thanks,
   --Vaibhav
  
   On Mon, Jan 19, 2015 at 9:29 PM, Alan Gates ga...@hortonworks.com
  wrote:
  
I'd really like to get HIVE-8966 in there, since it breaks streaming
ingest.  The patch is ready to go, it's just waiting on a review,
 which
Owen has promised to do soon.
   
Alan.
   
  Vikram Dixit K vikram.di...@gmail.com
 January 19, 2015 at 18:53
Hi All,
   
I am going to be creating the branch 1.0 as mentioned earlier,
  tomorrow.
   I
have the following list of jiras that I want to get committed to the
   branch
before creating an RC.
   
HIVE-9112
HIVE-6997 : Delete hive server 1
HIVE-8485
HIVE-9053
   
Please let me know if you would like me to include any other jiras.
   
Thanks
Vikram.
   
   
On Fri, Jan 16, 2015 at 1:35 PM, Vikram Dixit K 
  vikram.di...@gmail.com
vikram.di...@gmail.com
   
   
   
  Thejas Nair the...@hortonworks.com
 January 1, 2015 at 10:23
Yes, 1.0 is a good opportunity to remove some of the deprecated
components. The change to remove HiveServer1 is already there in
 trunk
, we should include that.
We can also use 1.0 release to clarify the public vs private status
 of
some of the APIs.
   
Thanks for the reminder about the documentation status of 1.0. I will
look at some of them.
   
   
On Wed, Dec 31, 2014 at 12:12 AM, Lefty Leverenz
   
  Lefty Leverenz leftylever...@gmail.com
 December 31, 2014 at 0:12
Oh, now I get it. The 1.0.0 *branch* of Hive. Okay.
   
-- Lefty
   
On Tue, Dec 30, 2014 at 11:43 PM, Lefty Leverenz 
   leftylever...@gmail.com
leftylever...@gmail.com
   
  Lefty Leverenz leftylever...@gmail.com
 December 30, 2014 at 23:43
I thought x.x.# releases were just for fixups, x.#.x could include
 new
features, and #.x.x were major releases that might have some
backward-incompatible changes. But I guess we haven't agreed on that.
   
As for documentation, we still have 84 jiras with TODOC14 labels
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
   
.
Not to mention 25 TODOC13 labels
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
   
,
eleven TODOC12
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12
   
,
seven TODOC11
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11
   
,
and seven TODOC10
   

  
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10
   
   

[jira] [Commented] (HIVE-9359) Export of a large table causes OOM in Metastore and Client

2015-01-21 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286719#comment-14286719
 ] 

Alan Gates commented on HIVE-9359:
--

+1.

 Export of a large table causes OOM in Metastore and Client
 --

 Key: HIVE-9359
 URL: https://issues.apache.org/jira/browse/HIVE-9359
 Project: Hive
  Issue Type: Bug
  Components: Import/Export, Metastore
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-9359.2.patch, HIVE-9359.patch


 Running hive export on a table with a large number of partitions winds up 
 making the metastore and client run out of memory. The places where we 
 wind up having a copy of the entire list of partition objects are as 
 follows:
 Metastore
 * (temporarily) Metastore MPartition objects
 * List<Partition> that gets persisted before sending to thrift
 * thrift copy of all of those partitions
 Client side
 * thrift copy of partitions
 * deepcopy of above to create List<Partition> objects
 * JSONObject that contains all of those above partition objects
 * List<ReadEntity>, each of which encapsulates the aforesaid partition objects.
 This memory usage needs to be drastically reduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9439) merge ORC disk ranges as we go when reading RGs

2015-01-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9439:
---
Status: Patch Available  (was: Open)

 merge ORC disk ranges as we go when reading RGs
 ---

 Key: HIVE-9439
 URL: https://issues.apache.org/jira/browse/HIVE-9439
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor
 Attachments: HIVE-9439.patch


 Currently we get ranges for all the RGs individually, then merge them. We can 
 do some (probably most) of the merging as we go.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Created branch 1.0

2015-01-21 Thread Brock Noland
Vikram,

Which branch was the 1.0 branch created from?

Brock
On Jan 21, 2015 1:07 PM, Vikram Dixit K vikram.di...@gmail.com wrote:

 Hi Folks,

 I have created branch 1.0 as discussed earlier. All the jiras that have
 0.14 as the fix version should be committed to 1.0 branch instead. The list
 of jiras that are being tracked for 1.0 are as follows:

 HIVE-8485
 HIVE-9053
 HIVE-8996.

 Please let me know if you want to include more jiras here. I am working on
 generating javadocs for this. I hope to have an RC out once these jiras get
 in.

 Regards
 Vikram.

 On Tue, Jan 20, 2015 at 1:00 PM, Vaibhav Gumashta 
 vgumas...@hortonworks.com
  wrote:

  Hi Vikram,
 
  I'd like to get this in: HIVE-8890
  https://issues.apache.org/jira/browse/HIVE-8890 [HiveServer2 dynamic
  service discovery: use persistent ephemeral nodes curator recipe].
 
  Thanks,
  --Vaibhav
 
  On Mon, Jan 19, 2015 at 9:29 PM, Alan Gates ga...@hortonworks.com
 wrote:
 
   I'd really like to get HIVE-8966 in there, since it breaks streaming
   ingest.  The patch is ready to go, it's just waiting on a review, which
   Owen has promised to do soon.
  
   Alan.
  
 Vikram Dixit K vikram.di...@gmail.com
January 19, 2015 at 18:53
   Hi All,
  
   I am going to be creating the branch 1.0 as mentioned earlier,
 tomorrow.
  I
   have the following list of jiras that I want to get committed to the
  branch
   before creating an RC.
  
   HIVE-9112
   HIVE-6997 : Delete hive server 1
   HIVE-8485
   HIVE-9053
  
   Please let me know if you would like me to include any other jiras.
  
   Thanks
   Vikram.
  
  
   On Fri, Jan 16, 2015 at 1:35 PM, Vikram Dixit K 
 vikram.di...@gmail.com
   vikram.di...@gmail.com
  
  
  
 Thejas Nair the...@hortonworks.com
January 1, 2015 at 10:23
   Yes, 1.0 is a good opportunity to remove some of the deprecated
   components. The change to remove HiveServer1 is already there in trunk
   , we should include that.
   We can also use 1.0 release to clarify the public vs private status of
   some of the APIs.
  
   Thanks for the reminder about the documentation status of 1.0. I will
   look at some of them.
  
  
   On Wed, Dec 31, 2014 at 12:12 AM, Lefty Leverenz
  
 Lefty Leverenz leftylever...@gmail.com
December 31, 2014 at 0:12
   Oh, now I get it. The 1.0.0 *branch* of Hive. Okay.
  
   -- Lefty
  
   On Tue, Dec 30, 2014 at 11:43 PM, Lefty Leverenz 
  leftylever...@gmail.com
   leftylever...@gmail.com
  
 Lefty Leverenz leftylever...@gmail.com
December 30, 2014 at 23:43
   I thought x.x.# releases were just for fixups, x.#.x could include new
   features, and #.x.x were major releases that might have some
   backward-incompatible changes. But I guess we haven't agreed on that.
  
   As for documentation, we still have 84 jiras with TODOC14 labels
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC14
  
   .
   Not to mention 25 TODOC13 labels
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC13
  
   ,
   eleven TODOC12
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC12
  
   ,
   seven TODOC11
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC11
  
   ,
   and seven TODOC10
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10
  
   
 
 https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20labels%20%3D%20TODOC10
  
   .
  
   That's 134 doc tasks to finish for a Hive 1.0.0 release -- preferably
 by
   the release date, not after. Because expectations are higher for 1.0.0
   releases.
  
  
   -- Lefty
  
   On Tue, Dec 30, 2014 at 5:23 PM, Vikram Dixit K 
 vikram.di...@gmail.com
   vikram.di...@gmail.com
  
 Vikram Dixit K vikram.di...@gmail.com
December 30, 2014 at 17:23
   Hi Folks,
  
   Given that there have been a number of fixes that have gone into branch
   0.14 in the past 8 weeks, I would like to make a release of 0.14.1
 soon.
  I
   would like to fix some of the release issues as well this time around.
 I
  am
   thinking of some time around 15th January for getting a RC out. Please
  let
   me know if you have any concerns. Also, from a previous thread, I would
   like to make this release the 1.0 branch of hive. The process for
 getting
   jiras into this release is going to be the same as the previous one
 viz.:
  
   1. Mark the jira with fix version 0.14.1 and update the 

[jira] [Commented] (HIVE-9436) RetryingMetaStoreClient does not retry JDOExceptions

2015-01-21 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286475#comment-14286475
 ] 

Thejas M Nair commented on HIVE-9436:
-

+1

 RetryingMetaStoreClient does not retry JDOExceptions
 

 Key: HIVE-9436
 URL: https://issues.apache.org/jira/browse/HIVE-9436
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 0.13.1
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-9436.patch


 RetryingMetaStoreClient has a bug in the following bit of code:
 {code}
 } else if ((e.getCause() instanceof MetaException) &&
 e.getCause().getMessage().matches("JDO[a-zA-Z]*Exception")) {
   caughtException = (MetaException) e.getCause();
 } else {
   throw e.getCause();
 }
 {code}
 The bug here is that java String.matches matches the entire string to the 
 regex, and thus, that match will fail if the message contains anything before 
 or after JDO[a-zA-Z]\*Exception. The solution, however, is very simple: we 
 should match .\*JDO[a-zA-Z]\*Exception.\*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: adding public domain Java files to Hive source

2015-01-21 Thread Nick Dimiduk
I guess the PMC should be responsive to this kind of question.

Can you not depend on a library containing these files? Why include the
source directly?

On Wed, Jan 21, 2015 at 3:16 PM, Sergey Shelukhin ser...@hortonworks.com
wrote:

 Ping? Where do I write about such matters if not here.

 On Wed, Jan 14, 2015 at 11:43 AM, Sergey Shelukhin ser...@hortonworks.com
 
 wrote:

  Suppose I want to use a Java source within Hive that has this header (I
  don't now, but I was considering it and may want it later ;)):
 
  /*
   * Written by Doug Lea with assistance from members of JCP JSR-166
   * Expert Group and released to the public domain, as explained at
   * http://creativecommons.org/licenses/publicdomain
   */
 
  As far as I see the class is not available in binary distribution, and
  there are projects on github that use it as is and add their license on
 top.
  Can I add it to Apache (Hive) codebase?
  Should Apache license header be added? Should the original header be
  retained?
 
 
 




[jira] [Commented] (HIVE-9235) Turn off Parquet Vectorization until all data types work: DECIMAL, DATE, TIMESTAMP, CHAR, and VARCHAR

2015-01-21 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286534#comment-14286534
 ] 

Vikram Dixit K commented on HIVE-9235:
--

+1 for branch 1.0 as well.

 Turn off Parquet Vectorization until all data types work: DECIMAL, DATE, 
 TIMESTAMP, CHAR, and VARCHAR
 -

 Key: HIVE-9235
 URL: https://issues.apache.org/jira/browse/HIVE-9235
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Critical
 Attachments: HIVE-9235.01.patch, HIVE-9235.02.patch


 Title was: Make Parquet Vectorization of these data types work: DECIMAL, 
 DATE, TIMESTAMP, CHAR, and VARCHAR
 Support for doing vector column assign is missing for some data types.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext

2015-01-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286561#comment-14286561
 ] 

Hive QA commented on HIVE-9327:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693672/HIVE-9327.04.patch

{color:red}ERROR:{color} -1 due to 242 failed/errored test(s), 7346 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join27
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_column_access_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constantPropagateForSubQuery
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_precision
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_display_colstats_tbllvl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_distinct_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_explain_logical
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_fetch_aggregation
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_gby_star
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_distinct_samekey
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_resolution
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadata_only_queries
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadata_only_queries_with_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadataonly1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_join_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_union_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_vc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_only_null
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_in
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_in_having
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_notin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_notin_having
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_views
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_table_access_keys_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union24
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_remove_6_subq
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_serde_typedbytes2
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_serde_typedbytes3
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_serde_typedbytes4
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_udaf_example_avg
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_udaf_example_group_concat
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_udaf_example_max
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_udaf_example_max_n
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_udaf_example_min
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver_udaf_example_min_n
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_1

[jira] [Updated] (HIVE-9408) Add hook interface so queries can be redacted before being placed in job.xml

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9408:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Thank you Xuefu! I have committed this to trunk.

 Add hook interface so queries can be redacted before being placed in job.xml
 

 Key: HIVE-9408
 URL: https://issues.apache.org/jira/browse/HIVE-9408
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.15.0

 Attachments: HIVE-9408.1.patch, HIVE-9408.2.patch, HIVE-9408.3.patch


 Today we take a query and place it in the job.xml file, which is pushed to all 
 nodes the query runs on. However, it's possible that the query contains sensitive 
 information that should not be shown directly to users.
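
For illustration only (the actual interface added by this patch may differ), a hypothetical sketch of what such a redaction hook could look like; the interface name, method signature, and regex below are assumptions:
{noformat}
import java.util.regex.Pattern;

// Hypothetical sketch; not the API committed in HIVE-9408.
interface QueryRedactor {
  /** Returns a copy of the query with sensitive literals masked. */
  String redactQuery(String query);
}

// Example: mask anything that looks like a 16-digit credit card number
// before the query string is written into job.xml.
class CreditCardRedactor implements QueryRedactor {
  private static final Pattern CC =
      Pattern.compile("\\b\\d{4}([ -]?)\\d{4}\\1\\d{4}\\1\\d{4}\\b");

  @Override
  public String redactQuery(String query) {
    return CC.matcher(query).replaceAll("[REDACTED]");
  }
}
{noformat}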



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-1869) TestMTQueries failing on jenkins

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-1869:
---
Fix Version/s: 0.15.0

 TestMTQueries failing on jenkins
 

 Key: HIVE-1869
 URL: https://issues.apache.org/jira/browse/HIVE-1869
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Testing Infrastructure
Affects Versions: 0.15.0
Reporter: Carl Steinbach
Assignee: Brock Noland
 Fix For: 0.15.0

 Attachments: HIVE-1869.1.patch, HIVE-1869.1.patch, TestMTQueries.log


 TestMTQueries has been failing intermittently on Hudson. The first failure I 
 can find
 a record of on Hudson is from svn rev 1052414 on December 24th, but it's 
 likely that the failures actually started earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-1869) TestMTQueries failing on jenkins

2015-01-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-1869:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you Xuefu! I have committed this to trunk!

 TestMTQueries failing on jenkins
 

 Key: HIVE-1869
 URL: https://issues.apache.org/jira/browse/HIVE-1869
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Testing Infrastructure
Affects Versions: 0.15.0
Reporter: Carl Steinbach
Assignee: Brock Noland
 Attachments: HIVE-1869.1.patch, HIVE-1869.1.patch, TestMTQueries.log


 TestMTQueries has been failing intermittently on Hudson. The first failure I 
 can find
 a record of on Hudson is from svn rev 1052414 on December 24th, but it's 
 likely that the failures actually started earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9341) Apply ColumnPrunning for noop PTFs

2015-01-21 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-9341:

Attachment: HIVE-9341.4.patch.txt

 Apply ColumnPrunning for noop PTFs
 --

 Key: HIVE-9341
 URL: https://issues.apache.org/jira/browse/HIVE-9341
 Project: Hive
  Issue Type: Improvement
  Components: PTF-Windowing
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-9341.1.patch.txt, HIVE-9341.2.patch.txt, 
 HIVE-9341.3.patch.txt, HIVE-9341.4.patch.txt


 Currently, PTF disables CP (column pruning) optimization, which can impose a 
 huge burden. For example,
 {noformat}
 select p_mfgr, p_name, p_size,
 rank() over (partition by p_mfgr order by p_name) as r,
 dense_rank() over (partition by p_mfgr order by p_name) as dr,
 sum(p_retailprice) over (partition by p_mfgr order by p_name rows between 
 unbounded preceding and current row) as s1
 from noop(on part 
   partition by p_mfgr
   order by p_name
   );
 STAGE PLANS:
   Stage: Stage-1
 Map Reduce
   Map Operator Tree:
   TableScan
 alias: part
 Statistics: Num rows: 26 Data size: 3147 Basic stats: COMPLETE 
 Column stats: NONE
 Reduce Output Operator
   key expressions: p_mfgr (type: string), p_name (type: string)
   sort order: ++
   Map-reduce partition columns: p_mfgr (type: string)
   Statistics: Num rows: 26 Data size: 3147 Basic stats: COMPLETE 
 Column stats: NONE
   value expressions: p_partkey (type: int), p_name (type: 
 string), p_mfgr (type: string), p_brand (type: string), p_type (type: 
 string), p_size (type: int), p_container (type: string), p_retailprice (type: 
 double), p_comment (type: string), BLOCK__OFFSET__INSIDE__FILE (type: 
 bigint), INPUT__FILE__NAME (type: string), ROW__ID (type: 
 structtransactionid:bigint,bucketid:int,rowid:bigint)
 ...
 {noformat}
 There should be a generic way to discern referenced columns, but until that 
 exists, we know CP can be safely applied to noop functions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: adding public domain Java files to Hive source

2015-01-21 Thread Thejas Nair
The Apache website talks about it. It looks like attribution is required,
which means you will need to add a reference to this inclusion in the
NOTICE file.
http://www.apache.org/legal/resolved.html#can-works-placed-in-the-public-domain-be-included-in-apache-products


On Wed, Jan 21, 2015 at 3:27 PM, Sergey Shelukhin
ser...@hortonworks.com wrote:
 Because there's no binary library as far as I can tell (see original
 message)

 On Wed, Jan 21, 2015 at 3:18 PM, Nick Dimiduk ndimi...@gmail.com wrote:

 I guess the PMC should be responsive to this kind of question.

 Can you not depend on a library containing these files? Why include the
 source directly?

 On Wed, Jan 21, 2015 at 3:16 PM, Sergey Shelukhin ser...@hortonworks.com
 wrote:

   Ping? Where do I write about such matters, if not here?
 
  On Wed, Jan 14, 2015 at 11:43 AM, Sergey Shelukhin 
 ser...@hortonworks.com
  
  wrote:
 
   Suppose I want to use a Java source within Hive that has this header (I
    don't right now, but I was considering it and may want it later ;)):
  
   /*
* Written by Doug Lea with assistance from members of JCP JSR-166
* Expert Group and released to the public domain, as explained at
* http://creativecommons.org/licenses/publicdomain
*/
  
   As far as I can see, the class is not available in a binary distribution, and
   there are projects on GitHub that use it as-is and add their license on
  top.
   Can I add it to Apache (Hive) codebase?
   Should Apache license header be added? Should the original header be
   retained?
  
  
  
 