[jira] [Commented] (HIVE-8920) IOContext problem with multiple MapWorks cloned for multi-insert [Spark Branch]

2014-12-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14260905#comment-14260905
 ] 

Hive QA commented on HIVE-8920:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689440/HIVE-8920.3-spark.patch

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 7281 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_join4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_windowing
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/598/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/598/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-598/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12689440 - PreCommit-HIVE-SPARK-Build

 IOContext problem with multiple MapWorks cloned for multi-insert [Spark 
 Branch]
 ---

 Key: HIVE-8920
 URL: https://issues.apache.org/jira/browse/HIVE-8920
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Chao
Assignee: Xuefu Zhang
 Attachments: HIVE-8920.1-spark.patch, HIVE-8920.2-spark.patch, 
 HIVE-8920.3-spark.patch


 The following query will not work:
 {code}
 from (select * from table0 union all select * from table1) s
 insert overwrite table table3 select s.x, count(1) group by s.x
 insert overwrite table table4 select s.y, count(1) group by s.y;
 {code}
 Currently, the plan for this query, before SplitSparkWorkResolver, looks like 
 below:
 {noformat}
M1M2
  \  / \
   U3   R5
   |
   R4
 {noformat}
 In {{SplitSparkWorkResolver#splitBaseWork}}, it assumes that the 
 {{childWork}} is a ReduceWork, but for this case, you can see that for M2 the 
 childWork could be UnionWork U3. Thus, the code will fail.
 HIVE-9041 partially addressed the problem by removing the union task. 
 However, it's still necessary to clone M1 and M2 to support multi-insert. 
 Because M1 and M2 can run in a single JVM, the original solution of storing a 
 global IOContext will not work: M1 and M2 have different IOContexts, and both 
 need to be stored.
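 A minimal sketch of one possible direction, with hypothetical class and method 
 names rather than the actual Hive API: keep one context per cloned MapWork 
 instead of a single global one, keyed by the work's name.
 {code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: instead of one global IOContext, keep one context per
// cloned MapWork (e.g. M1, M2) so both can coexist inside a single JVM/executor.
public final class IOContextRegistry {
  // Keyed by the MapWork name (or input path) that identifies the cloned work.
  private static final Map<String, Object> CONTEXTS = new ConcurrentHashMap<>();

  private IOContextRegistry() {}

  // Returns the context registered for this MapWork, creating it on first use.
  public static Object getOrCreate(String mapWorkName) {
    return CONTEXTS.computeIfAbsent(mapWorkName, k -> new Object() /* placeholder for the real IOContext */);
  }

  // Clears the context once the work finishes, so executors can be reused safely.
  public static void clear(String mapWorkName) {
    CONTEXTS.remove(mapWorkName);
  }
}
 {code}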



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8181) Upgrade JavaEWAH version to allow for unsorted bitset creation

2014-12-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14260907#comment-14260907
 ] 

Hive QA commented on HIVE-8181:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689432/HIVE-8181.2.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6723 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2219/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2219/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2219/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12689432 - PreCommit-HIVE-TRUNK-Build

 Upgrade JavaEWAH version to allow for unsorted bitset creation
 --

 Key: HIVE-8181
 URL: https://issues.apache.org/jira/browse/HIVE-8181
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.14.0, 0.13.1
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-8181.1.patch, HIVE-8181.2.patch.txt


 JavaEWAH's latest release has removed the restriction that bits can only be 
 set in order. 
 Currently, the use of the {{ewah_bitmap}} UDAF requires a {{SORT BY}} (see the 
 sketch after the stack trace below).
 {code}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.RuntimeException: Can't set bits out of order with 
 EWAHCompressedBitmap
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:824)
 at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
 at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
 at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
 at 
 org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
 at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
 at 
 org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)
 ... 7 more
 Caused by: java.lang.RuntimeException: Can't set bits out of order with 
 EWAHCompressedBitmap
 at 
 {code}
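 For illustration, a small sketch of the out-of-order set that older JavaEWAH 
 releases reject (assuming the com.googlecode.javaewah artifact is on the 
 classpath):
 {code}
import com.googlecode.javaewah.EWAHCompressedBitmap;

public class EwahOutOfOrderExample {
  public static void main(String[] args) {
    EWAHCompressedBitmap bitmap = new EWAHCompressedBitmap();
    bitmap.set(10);
    // Older JavaEWAH releases throw "Can't set bits out of order" here,
    // which is why the ewah_bitmap UDAF currently needs a SORT BY;
    // per this issue, the latest release accepts unsorted set() calls.
    bitmap.set(3);
    System.out.println(bitmap);
  }
}
 {code}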



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9226) Beeline interweaves the query result and query log sometimes

2014-12-30 Thread Dong Chen (JIRA)
Dong Chen created HIVE-9226:
---

 Summary: Beeline interweaves the query result and query log 
sometimes
 Key: HIVE-9226
 URL: https://issues.apache.org/jira/browse/HIVE-9226
 Project: Hive
  Issue Type: Improvement
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Minor


In most cases, Beeline outputs the query log during execution and outputs the 
result at the end. However, sometimes log lines are printed after the result, even 
though the query has already finished. This might confuse users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9226) Beeline interweaves the query result and query log sometimes

2014-12-30 Thread Dong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Chen updated HIVE-9226:

Status: Patch Available  (was: Open)

 Beeline interweaves the query result and query log sometimes
 

 Key: HIVE-9226
 URL: https://issues.apache.org/jira/browse/HIVE-9226
 Project: Hive
  Issue Type: Improvement
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Minor

 In most cases, Beeline outputs the query log during execution and outputs the 
 result at the end. However, sometimes log lines are printed after the result, 
 even though the query has already finished. This might confuse users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8155) In select statement after * any random characters are allowed in hive but in RDBMS its not allowed

2014-12-30 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-8155:
-
Labels: TODOC15  (was: )

  In select statement after * any random characters are allowed in hive but in 
 RDBMS its not allowed
 ---

 Key: HIVE-8155
 URL: https://issues.apache.org/jira/browse/HIVE-8155
 Project: Hive
  Issue Type: Improvement
Reporter: Ferdinand Xu
Assignee: Dong Chen
Priority: Critical
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-8155.1.patch, HIVE-8155.patch


 In a select statement, Hive allows arbitrary characters after *, but an RDBMS 
 does not allow this. 
 Steps:
 In the query below, abcdef is an arbitrary character sequence.
 In an RDBMS (Oracle): 
 select *abcdef from mytable;
 Output: 
 ERROR prepare() failed with: ORA-00923: FROM keyword not found where expected
 In Hive:
 select *abcdef from mytable;
 Output: 
 The query works fine and displays all the records of mytable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9226) Beeline interweaves the query result and query log sometimes

2014-12-30 Thread Dong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Chen updated HIVE-9226:

Attachment: HIVE-9226.patch

Hi [~brocknoland], 

I uploaded a patch to fix this issue. Could you take a look when you have time? 
Thanks!

In Beeline, query execution and result fetching happen in one thread, and query 
log fetching happens in another thread. The original idea was to interrupt the 
log thread and return the result as soon as possible, then fetch any remaining 
log afterwards.

Compared with the long query execution time, it might be acceptable to show all 
the logs before outputting the result.
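
A rough, self-contained sketch of that pattern with placeholder method names 
(not Beeline's actual code): a log-polling thread runs alongside query 
execution, and once the query finishes the remaining log is drained before the 
result is printed, so the two never interleave.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicBoolean;

public class QueryWithLogDraining {
  private final AtomicBoolean queryDone = new AtomicBoolean(false);

  public void run() throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    // Log thread: keeps polling while the query runs.
    Future<?> logTask = pool.submit(() -> {
      while (!queryDone.get()) {
        printNewLogLines();
        sleepQuietly(500);
      }
    });

    String result = executeQuery();   // blocks until the query finishes
    queryDone.set(true);
    logTask.get();                    // wait for the log thread to stop
    printNewLogLines();               // drain any remaining log *before* the result
    System.out.println(result);       // result is printed last, so output never interleaves
    pool.shutdown();
  }

  // --- placeholders for the real Beeline/HiveStatement calls ---
  private String executeQuery() { return "<rows>"; }
  private void printNewLogLines() { /* fetch and print incremental logs */ }
  private static void sleepQuietly(long ms) {
    try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
  }

  public static void main(String[] args) throws Exception {
    new QueryWithLogDraining().run();
  }
}
{code}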

 Beeline interweaves the query result and query log sometimes
 

 Key: HIVE-9226
 URL: https://issues.apache.org/jira/browse/HIVE-9226
 Project: Hive
  Issue Type: Improvement
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Minor
 Attachments: HIVE-9226.patch


 In most cases, Beeline outputs the query log during execution and outputs the 
 result at the end. However, sometimes log lines are printed after the result, 
 even though the query has already finished. This might confuse users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8155) In select statement after * any random characters are allowed in hive but in RDBMS its not allowed

2014-12-30 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14260924#comment-14260924
 ] 

Lefty Leverenz commented on HIVE-8155:
--

Doc note:  This can be documented (with release information) in the Simple 
query bullet after the SELECT syntax.

* [Select Syntax | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select#LanguageManualSelect-SelectSyntax]

  In select statement after * any random characters are allowed in hive but in 
 RDBMS its not allowed
 ---

 Key: HIVE-8155
 URL: https://issues.apache.org/jira/browse/HIVE-8155
 Project: Hive
  Issue Type: Improvement
Reporter: Ferdinand Xu
Assignee: Dong Chen
Priority: Critical
  Labels: TODOC15
 Fix For: 0.15.0

 Attachments: HIVE-8155.1.patch, HIVE-8155.patch


 In a select statement, Hive allows arbitrary characters after *, but an RDBMS 
 does not allow this. 
 Steps:
 In the query below, abcdef is an arbitrary character sequence.
 In an RDBMS (Oracle): 
 select *abcdef from mytable;
 Output: 
 ERROR prepare() failed with: ORA-00923: FROM keyword not found where expected
 In Hive:
 select *abcdef from mytable;
 Output: 
 The query works fine and displays all the records of mytable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9226) Beeline interweaves the query result and query log sometimes

2014-12-30 Thread Dong Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14260928#comment-14260928
 ] 

Dong Chen commented on HIVE-9226:
-

cc [~chengxiang li]

 Beeline interweaves the query result and query log sometimes
 

 Key: HIVE-9226
 URL: https://issues.apache.org/jira/browse/HIVE-9226
 Project: Hive
  Issue Type: Improvement
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Minor
 Attachments: HIVE-9226.patch


 In most cases, Beeline outputs the query log during execution and outputs the 
 result at the end. However, sometimes log lines are printed after the result, 
 even though the query has already finished. This might confuse users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7685) Parquet memory manager

2014-12-30 Thread Dong Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14260929#comment-14260929
 ] 

Dong Chen commented on HIVE-7685:
-

I verified that the value is correctly passed down.

 Parquet memory manager
 --

 Key: HIVE-7685
 URL: https://issues.apache.org/jira/browse/HIVE-7685
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Reporter: Brock Noland
Assignee: Dong Chen
 Attachments: HIVE-7685.1.patch, HIVE-7685.1.patch.ready, 
 HIVE-7685.patch, HIVE-7685.patch.ready


 Similar to HIVE-4248, Parquet tries to write very large row groups. 
 This causes Hive to run out of memory during dynamic partitioning when a 
 reducer may have many Parquet files open at a given time.
 As such, we should implement a memory manager which ensures that we don't run 
 out of memory due to writing too many row groups within a single JVM.
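 A minimal sketch of the kind of memory manager meant here, with hypothetical 
 names rather than the actual Parquet/Hive classes: writers register with a 
 shared per-JVM pool and the row-group budget shrinks as more writers are open.
 {code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a per-JVM memory manager for Parquet writers.
public final class RowGroupMemoryManager {
  private final long totalPoolBytes;                   // memory budget shared by all writers
  private final long requestedRowGroupBytes;           // configured row group size
  private final Map<Object, Long> writers = new ConcurrentHashMap<>();

  public RowGroupMemoryManager(long totalPoolBytes, long requestedRowGroupBytes) {
    this.totalPoolBytes = totalPoolBytes;
    this.requestedRowGroupBytes = requestedRowGroupBytes;
  }

  // Called when a writer (e.g. one per dynamic partition) is opened.
  public synchronized long register(Object writer) {
    writers.put(writer, requestedRowGroupBytes);
    return allocationPerWriter();
  }

  // Called when a writer is closed, freeing its share of the pool.
  public synchronized void unregister(Object writer) {
    writers.remove(writer);
  }

  // Scale the row group size down when many writers are open at once.
  private long allocationPerWriter() {
    long fairShare = totalPoolBytes / Math.max(1, writers.size());
    return Math.min(requestedRowGroupBytes, fairShare);
  }
}
 {code}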



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9119) ZooKeeperHiveLockManager does not use zookeeper in the proper way

2014-12-30 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14260937#comment-14260937
 ] 

Lefty Leverenz commented on HIVE-9119:
--

Thanks for the changes, [~nyang].

One new question:  When you changed the default of 
*hive.zookeeper.session.timeout* to use a TimeValidator, did an extra zero slip 
into the value or was the original not in milliseconds?  (600*1000 - 
600ms.)

Also, you might want to split the description of 
*hive.zookeeper.connection.basesleeptime* into two lines.

 ZooKeeperHiveLockManager does not use zookeeper in the proper way
 -

 Key: HIVE-9119
 URL: https://issues.apache.org/jira/browse/HIVE-9119
 Project: Hive
  Issue Type: Improvement
  Components: Locking
Affects Versions: 0.13.0, 0.14.0, 0.13.1
Reporter: Na Yang
Assignee: Na Yang
 Attachments: HIVE-9119.1.patch, HIVE-9119.2.patch


 ZooKeeperHiveLockManager does not use ZooKeeper in the proper way. 
 Currently, a new ZooKeeper client instance is created for each 
 getlock/releaselock request, which sometimes causes the number of open 
 connections between HiveServer2 and ZooKeeper to exceed the maximum number of 
 connections that the ZooKeeper server allows. 
 To use ZooKeeper as a distributed lock, there is no need to create a new 
 ZooKeeper instance for every getlock attempt. A single ZooKeeper instance could 
 be reused and shared by all ZooKeeperHiveLockManagers.
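 A bare-bones sketch of that sharing, using the plain ZooKeeper client API; the 
 connection handling is simplified and the class is illustrative, not the actual 
 patch:
 {code}
import java.io.IOException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical sketch: one ZooKeeper connection shared by all lock managers in the JVM.
public final class SharedZooKeeperClient {
  private static ZooKeeper instance;

  private SharedZooKeeperClient() {}

  public static synchronized ZooKeeper get(String connectString, int sessionTimeoutMs)
      throws IOException {
    if (instance == null || !instance.getState().isAlive()) {
      instance = new ZooKeeper(connectString, sessionTimeoutMs, new Watcher() {
        @Override
        public void process(WatchedEvent event) {
          // Connection/session events would be handled here.
        }
      });
    }
    return instance;
  }

  public static synchronized void close() throws InterruptedException {
    if (instance != null) {
      instance.close();
      instance = null;
    }
  }
}
 {code}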



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7613) Research optimization of auto convert join to map join [Spark branch]

2014-12-30 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-7613:

Attachment: Hive on Spark Join Master Design.pdf

Attaching the master design doc that describes all the Hive on Spark join 
optimizations, not just mapjoin but all the optimized joins. It is now updated 
to match the latest codebase, so it can be useful for future code maintenance.

 Research optimization of auto convert join to map join [Spark branch]
 -

 Key: HIVE-7613
 URL: https://issues.apache.org/jira/browse/HIVE-7613
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Suhas Satish
Priority: Minor
 Fix For: spark-branch

 Attachments: HIve on Spark Map join background.docx, Hive on Spark 
 Join Master Design.pdf, small_table_broadcasting.pdf


 ConvertJoinMapJoin is an optimization that replaces a common join (aka shuffle 
 join) with a map join (aka broadcast or fragment replicate join) when 
 possible. We need to research how to make it work with Hive on Spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9227) Make HiveInputSplit support InputSplitWithLocationInfo

2014-12-30 Thread Rui Li (JIRA)
Rui Li created HIVE-9227:


 Summary: Make HiveInputSplit support InputSplitWithLocationInfo
 Key: HIVE-9227
 URL: https://issues.apache.org/jira/browse/HIVE-9227
 Project: Hive
  Issue Type: Improvement
Reporter: Rui Li
Assignee: Rui Li


This feature was introduced in MAPREDUCE-5896. We should support it in Hive.
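
As a rough illustration (hypothetical wrapper class, not the actual 
HiveInputSplit change), a split wrapper could expose getLocationInfo() by 
delegating to the wrapped split when it implements InputSplitWithLocationInfo:
{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.InputSplitWithLocationInfo;
import org.apache.hadoop.mapred.SplitLocationInfo;

// Illustrative wrapper (not the actual HiveInputSplit code): expose location info
// when the wrapped split provides it, otherwise fall back to plain getLocations().
public class LocationAwareSplitWrapper implements InputSplitWithLocationInfo {
  private final InputSplit wrapped;

  public LocationAwareSplitWrapper(InputSplit wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public SplitLocationInfo[] getLocationInfo() throws IOException {
    if (wrapped instanceof InputSplitWithLocationInfo) {
      return ((InputSplitWithLocationInfo) wrapped).getLocationInfo();
    }
    return null; // no memory/disk locality hints available for this split type
  }

  @Override
  public long getLength() throws IOException {
    return wrapped.getLength();
  }

  @Override
  public String[] getLocations() throws IOException {
    return wrapped.getLocations();
  }

  @Override
  public void write(DataOutput out) throws IOException {
    wrapped.write(out);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    wrapped.readFields(in);
  }
}
{code}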



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Building Hive-0.14 is failing because artifact pentaho-aggdesigner-algorithm-5.1.3-jhyde could not be resolved

2014-12-30 Thread Lefty Leverenz
Should this issue be documented in a release box in Getting Started:
Building Hive from Source
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-BuildingHivefromSource
?

-- Lefty

On Mon, Dec 29, 2014 at 2:04 PM, Alan Gates ga...@hortonworks.com wrote:


 There was an issue with Hive 0.14 when it was released.  We missed the
 fact that it still had two SNAPSHOT dependencies in the pom.  If you apply
 the patches on HIVE-8845 (for Tez) and HIVE-8873 (for Calcite) that should
 address your issue.  This will be fixed in Hive 0.14.1.

 Alan.

   Ravi Prakash ravi...@ymail.com
  December 22, 2014 at 14:14
 Hi!
 Has anyone tried building Hive-0.14 from source? I'm using the tag for
 release-0.14.0 https://github.com/apache/hive/releases/tag/release-0.14.0

 The command I use is: mvn install -DskipTests -Phadoop-2
 -DcreateChecksum=true -Dtez.version=0.5.3 -Dcalcite.version=0.9.2-incubating

 The build fails for me with the following error: [ERROR] Failed to execute
 goal on project hive-exec: Could not resolve dependencies for project
 org.apache.hive:hive-exec:jar:0.14.0: The following artifacts could not be
 resolved: org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.3-jhyde,
 net.hydromatic:linq4j:jar:0.4, net.hydromatic:quidem:jar:0.1.1: Could not
 find artifact org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.3-jhyde in
 nexus (http://localhost:8081/nexus/content/groups/public) - [Help 1]

 This is a transitive dependency via the calcite-0.9.2-incubating
 artifact. Is there a JIRA which someone can please point me to? It seems
 wrong that an artifact with version 5.1.3-jhyde is required to build
 Apache Hive, no disrespect to Julian. Am I missing something?
 Thanks,
 Ravi






[jira] [Commented] (HIVE-7613) Research optimization of auto convert join to map join [Spark branch]

2014-12-30 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14260943#comment-14260943
 ] 

Lefty Leverenz commented on HIVE-7613:
--

Should this join design doc be added to the wiki?  Or if not, should the 
existing Hive on Spark: Getting Started include a link to it?

* [Hive on Spark: Getting Started | 
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started]

 Research optimization of auto convert join to map join [Spark branch]
 -

 Key: HIVE-7613
 URL: https://issues.apache.org/jira/browse/HIVE-7613
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Suhas Satish
Priority: Minor
 Fix For: spark-branch

 Attachments: HIve on Spark Map join background.docx, Hive on Spark 
 Join Master Design.pdf, small_table_broadcasting.pdf


 ConvertJoinMapJoin is an optimization that replaces a common join (aka shuffle 
 join) with a map join (aka broadcast or fragment replicate join) when 
 possible. We need to research how to make it work with Hive on Spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9226) Beeline interweaves the query result and query log sometimes

2014-12-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261010#comment-14261010
 ] 

Hive QA commented on HIVE-9226:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689457/HIVE-9226.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6723 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2220/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2220/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2220/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12689457 - PreCommit-HIVE-TRUNK-Build

 Beeline interweaves the query result and query log sometimes
 

 Key: HIVE-9226
 URL: https://issues.apache.org/jira/browse/HIVE-9226
 Project: Hive
  Issue Type: Improvement
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Minor
 Attachments: HIVE-9226.patch


 In most cases, Beeline outputs the query log during execution and outputs the 
 result at the end. However, sometimes log lines are printed after the result, 
 even though the query has already finished. This might confuse users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9228) Problem with subquery using windowing functions

2014-12-30 Thread Aihua Xu (JIRA)
Aihua Xu created HIVE-9228:
--

 Summary: Problem with subquery using windowing functions
 Key: HIVE-9228
 URL: https://issues.apache.org/jira/browse/HIVE-9228
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Aihua Xu
Assignee: Aihua Xu


The following query with window functions failed. The inner query works fine.

select st_fips_cd, zip_cd_5, hh_surr_key
from
(
select st_fips_cd, zip_cd_5, hh_surr_key,
count( case when advtg_len_rsdnc_cd = '1' then 1 end ) over (partition by 
st_fips_cd, zip_cd_5) as CNT_ADVTG_LEN_RSDNC_CD_1,
row_number() over (partition by st_fips_cd, zip_cd_5 order by hh_surr_key asc) 
as analytic_row_number3
from hh_agg
where analytic_row_number2 = 1
) t;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9228) Problem with subquery using windowing functions

2014-12-30 Thread Mariano Dominguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mariano Dominguez updated HIVE-9228:

Affects Version/s: 0.13.1

 Problem with subquery using windowing functions
 ---

 Key: HIVE-9228
 URL: https://issues.apache.org/jira/browse/HIVE-9228
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1
Reporter: Aihua Xu
Assignee: Aihua Xu
   Original Estimate: 96h
  Remaining Estimate: 96h

 The following query with window functions failed. The inner query works 
 fine.
 select st_fips_cd, zip_cd_5, hh_surr_key
 from
 (
 select st_fips_cd, zip_cd_5, hh_surr_key,
 count( case when advtg_len_rsdnc_cd = '1' then 1 end ) over (partition by 
 st_fips_cd, zip_cd_5) as CNT_ADVTG_LEN_RSDNC_CD_1,
 row_number() over (partition by st_fips_cd, zip_cd_5 order by hh_surr_key 
 asc) as analytic_row_number3
 from hh_agg
 where analytic_row_number2 = 1
 ) t;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Backup Stage in query plan

2014-12-30 Thread Edson Ramiro
Hi all,

I found a backup stage in the explain output, but I didn't find any docs
explaining what it is.

What is a backup stage? Do you have any doc about it?

I got this from the `explain formatted' of TPC-H query 16.

 STAGE DEPENDENCIES: {
Stage-9: {
  DEPENDENT STAGES: Stage-3, Stage-6
},
Stage-8: {
  ROOT STAGE: TRUE,
  CONDITIONAL CHILD TASKS: Stage-10, Stage-3
},
Stage-2: {
  DEPENDENT STAGES: Stage-0
},
Stage-0: {
  DEPENDENT STAGES: Stage-5
},
Stage-6: {
  DEPENDENT STAGES: Stage-10
},
Stage-10: {
  BACKUP STAGE: Stage-3
},
Stage-5: {
  DEPENDENT STAGES: Stage-9
},
Stage-3: {}
  }

Thanks in advance,

  Edson Ramiro


[jira] [Updated] (HIVE-8920) IOContext problem with multiple MapWorks cloned for multi-insert [Spark Branch]

2014-12-30 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8920:
--
Attachment: HIVE-8920.4-spark.patch

 IOContext problem with multiple MapWorks cloned for multi-insert [Spark 
 Branch]
 ---

 Key: HIVE-8920
 URL: https://issues.apache.org/jira/browse/HIVE-8920
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Chao
Assignee: Xuefu Zhang
 Attachments: HIVE-8920.1-spark.patch, HIVE-8920.2-spark.patch, 
 HIVE-8920.3-spark.patch, HIVE-8920.4-spark.patch


 The following query will not work:
 {code}
 from (select * from table0 union all select * from table1) s
 insert overwrite table table3 select s.x, count(1) group by s.x
 insert overwrite table table4 select s.y, count(1) group by s.y;
 {code}
 Currently, the plan for this query, before SplitSparkWorkResolver, looks like 
 below:
 {noformat}
M1M2
  \  / \
   U3   R5
   |
   R4
 {noformat}
 In {{SplitSparkWorkResolver#splitBaseWork}}, it assumes that the 
 {{childWork}} is a ReduceWork, but for this case, you can see that for M2 the 
 childWork could be UnionWork U3. Thus, the code will fail.
 HIVE-9041 partially addressed the problem by removing the union task. 
 However, it's still necessary to clone M1 and M2 to support multi-insert. 
 Because M1 and M2 can run in a single JVM, the original solution of storing a 
 global IOContext will not work: M1 and M2 have different IOContexts, and both 
 need to be stored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9038) Join tests fail on Tez

2014-12-30 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-9038:
-
Assignee: Vikram Dixit K

 Join tests fail on Tez
 --

 Key: HIVE-9038
 URL: https://issues.apache.org/jira/browse/HIVE-9038
 Project: Hive
  Issue Type: Bug
  Components: Tests, Tez
Reporter: Ashutosh Chauhan
Assignee: Vikram Dixit K

 Tez doesn't run all tests. But if you run them, the following tests fail with 
 runtime exceptions pointing to bugs. 
 {{auto_join21.q,auto_join29.q,auto_join30.q
 ,auto_join_filters.q,auto_join_nulls.q}} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9167:
---
Labels: Kanban  (was: )

 Enhance encryption testing framework to allow create keys  zones inside .q 
 files
 -

 Key: HIVE-9167
 URL: https://issues.apache.org/jira/browse/HIVE-9167
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
  Labels: Kanban

 The current implementation of the encryption testing framework from HIVE-8900 
 initializes a couple of encrypted databases to be used in .q test files. This 
 is useful for keeping tests small, but it does not test all the details of 
 the encryption implementation, such as encrypted tables with 
 different encryption strengths in the same database.
 We need to allow this kind of encryption because that is how it will be used in 
 the real world, where a database will have a few encrypted tables (not the whole DB).
 Also, we need to make this encryption framework flexible so that we can 
 create/delete keys & zones on demand when running the .q files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9167:
---
Labels: Hive-Scrum  (was: Kanban)

 Enhance encryption testing framework to allow create keys  zones inside .q 
 files
 -

 Key: HIVE-9167
 URL: https://issues.apache.org/jira/browse/HIVE-9167
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
  Labels: Hive-Scrum

 The current implementation of the encryption testing framework from HIVE-8900 
 initializes a couple of encrypted databases to be used in .q test files. This 
 is useful for keeping tests small, but it does not test all the details of 
 the encryption implementation, such as encrypted tables with 
 different encryption strengths in the same database.
 We need to allow this kind of encryption because that is how it will be used in 
 the real world, where a database will have a few encrypted tables (not the whole DB).
 Also, we need to make this encryption framework flexible so that we can 
 create/delete keys & zones on demand when running the .q files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7898) HCatStorer should ignore namespaces generated by Pig

2014-12-30 Thread Justin Leet (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261275#comment-14261275
 ] 

Justin Leet commented on HIVE-7898:
---

This actually already happens in my patch. HCatStorer will abort with an error, 
e.g. Field named field already exists. This isn't specifically in 
HCatBaseStorer; it actually occurs during the conversion from the Pig Schema to 
the HCatSchema in convertPigSchemaToHCatSchema(). The modified getColFromSchema 
will pass the now-truncated name, so convertPigSchemaToHCatSchema() will 
attempt to add the now-duplicated column, and HCat won't allow the duplicated 
field to go through.

 HCatStorer should ignore namespaces generated by Pig
 

 Key: HIVE-7898
 URL: https://issues.apache.org/jira/browse/HIVE-7898
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Affects Versions: 0.13.1
Reporter: Justin Leet
Assignee: Justin Leet
Priority: Minor
 Attachments: HIVE-7898.1.patch


 Currently, Pig aliases must exactly match the names of HCat columns for 
 HCatStorer to be successful.  However, several Pig operations prepend a 
 namespace to the alias in order to differentiate fields (e.g. after a group 
 with field b, you might have A::b).  In this case, even if the fields are in 
 the right order and the alias without namespace matches, the store will fail 
 because it tries to match the long form of the alias, despite the namespace 
 being extraneous information in this case.   Note that multiple aliases can 
 be applied (e.g. A::B::C::d).
 A workaround is possible by doing a 
 FOREACH relation GENERATE field1 AS field1, field2 AS field2, etc.  
 This quickly becomes tedious and bloated for tables with many fields.
 Changing this would normally require care around columns named, for example, 
 `A::b` as has been introduced in Hive 13.  However, a different function call 
 only validates Pig aliases if they follow the old rules for Hive columns.  As 
 such, a direct change (rather than attempting to match either the 
 namespace::alias or just alias) maintains compatibility for now.
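 A tiny sketch of the alias normalization under discussion (hypothetical helper, 
 not the patch itself): dropping the namespace prefix so that A::B::C::d matches 
 the HCat column d.
 {code}
// Hypothetical helper: strip Pig-generated namespaces from an alias.
public final class PigAliasUtil {
  private PigAliasUtil() {}

  // "A::B::C::d" -> "d"; aliases without a namespace are returned unchanged.
  public static String stripNamespace(String alias) {
    int idx = alias.lastIndexOf("::");
    return idx < 0 ? alias : alias.substring(idx + 2);
  }

  public static void main(String[] args) {
    System.out.println(stripNamespace("A::B::C::d")); // prints "d"
    System.out.println(stripNamespace("d"));          // prints "d"
  }
}
 {code}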



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7898) HCatStorer should ignore namespaces generated by Pig

2014-12-30 Thread Justin Leet (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261274#comment-14261274
 ] 

Justin Leet commented on HIVE-7898:
---

This actually already happens in my patch. HCatStorer will abort with an error, 
e.g. Field named field already exists. This isn't specifically in 
HCatBaseStorer; it actually occurs during the conversion from the Pig Schema to 
the HCatSchema in convertPigSchemaToHCatSchema(). The modified getColFromSchema 
will pass the now-truncated name, so convertPigSchemaToHCatSchema() will 
attempt to add the now-duplicated column, and HCat won't allow the duplicated 
field to go through.

 HCatStorer should ignore namespaces generated by Pig
 

 Key: HIVE-7898
 URL: https://issues.apache.org/jira/browse/HIVE-7898
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Affects Versions: 0.13.1
Reporter: Justin Leet
Assignee: Justin Leet
Priority: Minor
 Attachments: HIVE-7898.1.patch


 Currently, Pig aliases must exactly match the names of HCat columns for 
 HCatStorer to be successful.  However, several Pig operations prepend a 
 namespace to the alias in order to differentiate fields (e.g. after a group 
 with field b, you might have A::b).  In this case, even if the fields are in 
 the right order and the alias without namespace matches, the store will fail 
 because it tries to match the long form of the alias, despite the namespace 
 being extraneous information in this case.   Note that multiple aliases can 
 be applied (e.g. A::B::C::d).
 A workaround is possible by doing a 
 FOREACH relation GENERATE field1 AS field1, field2 AS field2, etc.  
 This quickly becomes tedious and bloated for tables with many fields.
 Changing this would normally require care around columns named, for example, 
 `A::b` as has been introduced in Hive 13.  However, a different function call 
 only validates Pig aliases if they follow the old rules for Hive columns.  As 
 such, a direct change (rather than attempting to match either the 
 namespace::alias or just alias) maintains compatibility for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261279#comment-14261279
 ] 

Brock Noland commented on HIVE-9167:


Hi,

Thank you Sergio! I am going to go ahead and commit this since you will be out 
after today. We can address any remaining issues as follow-on JIRAs.

Thank you! 

 Enhance encryption testing framework to allow create keys  zones inside .q 
 files
 -

 Key: HIVE-9167
 URL: https://issues.apache.org/jira/browse/HIVE-9167
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
  Labels: Hive-Scrum
 Attachments: HIVE-9167.4.patch


 The current implementation of the encryption testing framework from HIVE-8900 
 initializes a couple of encrypted databases to be used in .q test files. This 
 is useful for keeping tests small, but it does not test all the details of 
 the encryption implementation, such as encrypted tables with 
 different encryption strengths in the same database.
 We need to allow this kind of encryption because that is how it will be used in 
 the real world, where a database will have a few encrypted tables (not the whole DB).
 Also, we need to make this encryption framework flexible so that we can 
 create/delete keys & zones on demand when running the .q files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9167:
--
Attachment: HIVE-9167.4.patch

 Enhance encryption testing framework to allow create keys  zones inside .q 
 files
 -

 Key: HIVE-9167
 URL: https://issues.apache.org/jira/browse/HIVE-9167
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
  Labels: Hive-Scrum
 Attachments: HIVE-9167.4.patch


 The current implementation of the encryption testing framework from HIVE-8900 
 initializes a couple of encrypted databases to be used in .q test files. This 
 is useful for keeping tests small, but it does not test all the details of 
 the encryption implementation, such as encrypted tables with 
 different encryption strengths in the same database.
 We need to allow this kind of encryption because that is how it will be used in 
 the real world, where a database will have a few encrypted tables (not the whole DB).
 Also, we need to make this encryption framework flexible so that we can 
 create/delete keys & zones on demand when running the .q files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9167:
--
Status: Patch Available  (was: Open)

 Enhance encryption testing framework to allow create keys  zones inside .q 
 files
 -

 Key: HIVE-9167
 URL: https://issues.apache.org/jira/browse/HIVE-9167
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
  Labels: Hive-Scrum
 Attachments: HIVE-9167.4.patch


 The current implementation of the encryption testing framework from HIVE-8900 
 initializes a couple of encrypted databases to be used in .q test files. This 
 is useful for keeping tests small, but it does not test all the details of 
 the encryption implementation, such as encrypted tables with 
 different encryption strengths in the same database.
 We need to allow this kind of encryption because that is how it will be used in 
 the real world, where a database will have a few encrypted tables (not the whole DB).
 Also, we need to make this encryption framework flexible so that we can 
 create/delete keys & zones on demand when running the .q files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9222) Fix ordering differences due to Java 8 (Part 4)

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9222:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Thank you Mohit! I have committed this to trunk!

 Fix ordering differences due to Java 8 (Part 4)
 ---

 Key: HIVE-9222
 URL: https://issues.apache.org/jira/browse/HIVE-9222
 Project: Hive
  Issue Type: Sub-task
  Components: Tests
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
 Fix For: 0.15.0

 Attachments: HIVE-9222.patch


 This patch fixes the following tests:
 (1) TestNegativeCliDriver.testNegativeCliDriver: unset_view_property.q and 
 unset_table_property.q
 {{DDLSemanticAnalyzer.analyzeAlterTableProps()}} gets table properties via 
 getProps() which must be an insert order map. 
 (2) TestCliDriver.testCliDriver_overridden_confs
 {{VerifyOverriddenConfigsHook}} emits overridden configs. Changed 
 {{SessionState.overriddenConfigurations}} to insert order map.
 (3) 
 TestNegativeCliDriver.testNegativeCliDriver_columnstats_partlvl_invalid_values
 {{ColumnStatsSemanticAnalyzer.getPartKeyValuePairsFromAST()}} gets 
 {{((ASTNode) tree.getChild(0)}} in different order between Java 7 and Java 8. 
  The order is different in {{HiveParser.statement()}} itself in 
 {{ParseDriver.parse()}} so this difference comes from antlr library. 
 Generated java version specific output.
 (4) TestMinimrCliDriver.testCliDriver_list_bucket_dml_10, TestCliDriver 
 tests: stats_list_bucket.q, list_bucket_dml_12.q and list_bucket_dml_13.q
 Looks like these need rebase after HIVE-9206? Not sure what happened here...
 (5) TestCliDriver.testCliDriver: mapjoin_hook.q, 
 auto_join_without_localtask.q, auto_join25.q, multiMapJoin2.q
 {{PrintCompletedTasksHook}} prints the completed task list, which depends on the 
 list of tasks added to the runnable task list in {{DriverContext}}.  Some of 
 these tasks may get filtered. We see that different tasks are getting 
 filtered out by the condition resolver in {{ConditionTask}} in Java 8 
 compared to Java 7.
 {{ConditionalTask.execute()}} calls 
 {{ConditionalResolverCommonJoin.resolveDriverAlias()}} via getTasks(), which 
 returns a single task based on the task-to-alias map. The next mapred task in the 
 task list gets filtered out by the resolver in 
 {{ConditionalTask.resolveTask()}}. In other words, the mapred task that 
 shows up first will be kept and the next one will be filtered. Converted the 
 task-to-alias map to an insert-order map so the order is the same on Java 8 and Java 7.
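 A small standard-library illustration of the insert-order-map change used 
 throughout this patch: HashMap iteration order is unspecified and differs 
 between Java 7 and Java 8, while LinkedHashMap preserves insertion order and 
 keeps golden-file output stable.
 {code}
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertOrderMapExample {
  public static void main(String[] args) {
    Map<String, String> hashed = new HashMap<>();        // iteration order unspecified
    Map<String, String> ordered = new LinkedHashMap<>(); // iteration order = insertion order

    for (String key : new String[] {"transactional", "comment", "owner"}) {
      hashed.put(key, "v");
      ordered.put(key, "v");
    }

    // May print keys in different orders on different JVM versions.
    System.out.println("HashMap:       " + hashed.keySet());
    // Always prints [transactional, comment, owner].
    System.out.println("LinkedHashMap: " + ordered.keySet());
  }
}
 {code}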



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9220) HIVE-9109 missed updating result of list_bucket_dml_10

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9220:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

 HIVE-9109 missed updating result of list_bucket_dml_10
 --

 Key: HIVE-9220
 URL: https://issues.apache.org/jira/browse/HIVE-9220
 Project: Hive
  Issue Type: Sub-task
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-9109.1.patch.txt


 list_bucket_dml_10.q.java1.7.out is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9167:
---
   Resolution: Fixed
Fix Version/s: encryption-branch
   Status: Resolved  (was: Patch Available)

Thank you everyone for the reviews! I have committed this to branch!

 Enhance encryption testing framework to allow create keys  zones inside .q 
 files
 -

 Key: HIVE-9167
 URL: https://issues.apache.org/jira/browse/HIVE-9167
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
  Labels: Hive-Scrum
 Fix For: encryption-branch

 Attachments: HIVE-9167.4.patch


 The current implementation of the encryption testing framework from HIVE-8900 
 initializes a couple of encrypted databases to be used in .q test files. This 
 is useful for keeping tests small, but it does not test all the details of 
 the encryption implementation, such as encrypted tables with 
 different encryption strengths in the same database.
 We need to allow this kind of encryption because that is how it will be used in 
 the real world, where a database will have a few encrypted tables (not the whole DB).
 Also, we need to make this encryption framework flexible so that we can 
 create/delete keys & zones on demand when running the .q files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7685) Parquet memory manager

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261295#comment-14261295
 ] 

Brock Noland commented on HIVE-7685:


+1

 Parquet memory manager
 --

 Key: HIVE-7685
 URL: https://issues.apache.org/jira/browse/HIVE-7685
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Reporter: Brock Noland
Assignee: Dong Chen
 Attachments: HIVE-7685.1.patch, HIVE-7685.1.patch.ready, 
 HIVE-7685.patch, HIVE-7685.patch.ready


 Similar to HIVE-4248, Parquet tries to write very large row groups. 
 This causes Hive to run out of memory during dynamic partitioning when a 
 reducer may have many Parquet files open at a given time.
 As such, we should implement a memory manager which ensures that we don't run 
 out of memory due to writing too many row groups within a single JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9226) Beeline interweaves the query result and query log sometimes

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261300#comment-14261300
 ] 

Brock Noland commented on HIVE-9226:


+1

 Beeline interweaves the query result and query log sometimes
 

 Key: HIVE-9226
 URL: https://issues.apache.org/jira/browse/HIVE-9226
 Project: Hive
  Issue Type: Improvement
Reporter: Dong Chen
Assignee: Dong Chen
Priority: Minor
 Attachments: HIVE-9226.patch


 In most cases, Beeline outputs the query log during execution and outputs the 
 result at the end. However, sometimes log lines are printed after the result, 
 even though the query has already finished. This might confuse users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8920) IOContext problem with multiple MapWorks cloned for multi-insert [Spark Branch]

2014-12-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261303#comment-14261303
 ] 

Hive QA commented on HIVE-8920:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689523/HIVE-8920.4-spark.patch

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 7280 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_annotate_stats_join
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_join4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_windowing
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/599/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/599/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-599/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12689523 - PreCommit-HIVE-SPARK-Build

 IOContext problem with multiple MapWorks cloned for multi-insert [Spark 
 Branch]
 ---

 Key: HIVE-8920
 URL: https://issues.apache.org/jira/browse/HIVE-8920
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Chao
Assignee: Xuefu Zhang
 Attachments: HIVE-8920.1-spark.patch, HIVE-8920.2-spark.patch, 
 HIVE-8920.3-spark.patch, HIVE-8920.4-spark.patch


 The following query will not work:
 {code}
 from (select * from table0 union all select * from table1) s
 insert overwrite table table3 select s.x, count(1) group by s.x
 insert overwrite table table4 select s.y, count(1) group by s.y;
 {code}
 Currently, the plan for this query, before SplitSparkWorkResolver, looks like 
 below:
 {noformat}
M1M2
  \  / \
   U3   R5
   |
   R4
 {noformat}
 In {{SplitSparkWorkResolver#splitBaseWork}}, it assumes that the 
 {{childWork}} is a ReduceWork, but for this case, you can see that for M2 the 
 childWork could be UnionWork U3. Thus, the code will fail.
 HIVE-9041 partially addressed the problem by removing the union task. 
 However, it's still necessary to clone M1 and M2 to support multi-insert. 
 Because M1 and M2 can run in a single JVM, the original solution of storing a 
 global IOContext will not work: M1 and M2 have different IOContexts, and both 
 need to be stored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8065) Support HDFS encryption functionality on Hive

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8065:
--
Labels: Hive-Scrum  (was: )

 Support HDFS encryption functionality on Hive
 -

 Key: HIVE-8065
 URL: https://issues.apache.org/jira/browse/HIVE-8065
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.13.1
Reporter: Sergio Peña
Assignee: Sergio Peña
  Labels: Hive-Scrum

 The new encryption support on HDFS makes Hive incompatible and unusable when 
 this feature is used.
 HDFS encryption is designed so that a user can configure different 
 encryption zones (or directories) for multi-tenant environments. An 
 encryption zone has an exclusive encryption key, such as AES-128 or AES-256. 
 For security compliance, HDFS does not allow moving/renaming files 
 between encryption zones. Renames are allowed only inside the same encryption 
 zone. A copy is allowed between encryption zones.
 See HDFS-6134 for more details about the HDFS encryption design.
 Hive currently uses a scratch directory (like /tmp/$user/$random). This 
 scratch directory is used for the output of intermediate data (between MR 
 jobs) and for the final output of the Hive query, which is later moved to the 
 table directory location.
 If Hive tables are in different encryption zones than the scratch directory, 
 then Hive won't be able to rename those files/directories, which makes 
 Hive unusable.
 To handle this problem, we can change the scratch directory of the 
 query/statement to be inside the same encryption zone as the table directory 
 location. This way, the rename will succeed. 
 Also, for statements that move files between encryption zones (i.e. LOAD 
 DATA), a copy may be executed instead of a rename. This will cause 
 overhead when copying large data files, but it won't break encryption on 
 Hive.
 Another security aspect to consider is joins and selects. If Hive joins 
 tables with different encryption key strengths, then the results of 
 the select might break the security compliance of the tables. Say two 
 tables with 128-bit and 256-bit encryption are joined; the temporary 
 results might be stored in the 128-bit encryption zone, which conflicts 
 with the compliance of the table encrypted with 256 bits.
 To fix this, Hive should be able to select the scratch directory that is the 
 most secured/encrypted, in order to store the intermediate data temporarily 
 with no compliance issues (sketched after the examples below).
 For instance:
 {noformat}
 SELECT * FROM table-aes128 t1 JOIN table-aes256 t2 WHERE t1.id == t2.id;
 {noformat}
 - This should use a scratch directory (or staging directory) inside the 
 table-aes256 table location.
 {noformat}
 INSERT OVERWRITE TABLE table-unencrypted SELECT * FROM table-aes1;
 {noformat}
 - This should use a scratch directory inside the table-aes1 location.
 {noformat}
 FROM table-unencrypted
 INSERT OVERWRITE TABLE table-aes128 SELECT id, name
 INSERT OVERWRITE TABLE table-aes256 SELECT id, name
 {noformat}
 - This should use a scratch directory on each of the tables locations.
 - The first SELECT will have its scratch directory on table-aes128 directory.
 - The second SELECT will have its scratch directory on table-aes256 directory.
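 A schematic sketch of the scratch-directory selection rule described above; 
 keyBitLength is a hypothetical placeholder (a real implementation would consult 
 the encryption zone and key metadata), the point is only to pick the location 
 with the strongest key:
 {code}
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of choosing where the staging/scratch directory should live.
public final class ScratchDirSelector {
  private ScratchDirSelector() {}

  // Pick the location of the table whose encryption key is strongest,
  // so intermediate data is never written to a weaker (or unencrypted) zone.
  public static String selectScratchParent(List<String> tableLocations) {
    String best = null;
    int bestBits = -1;
    for (String location : tableLocations) {
      int bits = keyBitLength(location);
      if (bits > bestBits) {
        bestBits = bits;
        best = location;
      }
    }
    return best;
  }

  // Hypothetical placeholder: would return the key length of the encryption zone
  // containing the path, or 0 if the path is not inside an encryption zone.
  private static int keyBitLength(String location) {
    if (location.contains("aes256")) return 256;
    if (location.contains("aes128")) return 128;
    return 0;
  }

  public static void main(String[] args) {
    List<String> locations = Arrays.asList("/warehouse/table-aes128", "/warehouse/table-aes256");
    System.out.println(selectScratchParent(locations)); // prints /warehouse/table-aes256
  }
}
 {code}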



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files

2014-12-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261325#comment-14261325
 ] 

Hive QA commented on HIVE-9167:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689535/HIVE-9167.4.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2221/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2221/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2221/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2221/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'beeline/src/java/org/apache/hive/beeline/Commands.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target itests/target itests/hcatalog-unit/target 
itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target 
itests/hive-minikdc/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target hcatalog/target hcatalog/core/target 
hcatalog/streaming/target hcatalog/server-extensions/target 
hcatalog/hcatalog-pig-adapter/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target accumulo-handler/target hwi/target 
common/target common/src/gen contrib/target service/target serde/target 
beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target
+ svn update
U    ql/src/test/org/apache/hadoop/hive/ql/plan/TestConditionalResolverCommonJoin.java
U    ql/src/test/queries/clientnegative/columnstats_partlvl_invalid_values.q
D    ql/src/test/results/clientnegative/columnstats_partlvl_invalid_values.q.out
A    ql/src/test/results/clientnegative/columnstats_partlvl_invalid_values.q.java1.7.out
A    ql/src/test/results/clientnegative/columnstats_partlvl_invalid_values.q.java1.8.out
U    ql/src/test/results/clientnegative/unset_table_property.q.out
D    ql/src/test/results/clientpositive/list_bucket_dml_10.q.out
U    ql/src/test/results/clientpositive/stats_list_bucket.q.java1.8.out
U    ql/src/test/results/clientpositive/multiMapJoin2.q.out
U    ql/src/test/results/clientpositive/list_bucket_dml_12.q.java1.8.out
U    ql/src/test/results/clientpositive/auto_join_without_localtask.q.out
A    ql/src/test/results/clientpositive/list_bucket_dml_10.q.java1.7.out
A    ql/src/test/results/clientpositive/list_bucket_dml_10.q.java1.8.out
U    ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
U    ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
U    ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java
U    ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MapJoinResolver.java
U    ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SortMergeJoinTaskDispatcher.java

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1648561.

Updated to revision 1648561.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ 

[jira] [Updated] (HIVE-9167) Enhance encryption testing framework to allow create keys & zones inside .q files

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9167:
--
Labels: Kanban  (was: Hive-Scrum)

 Enhance encryption testing framework to allow create keys & zones inside .q 
 files
 -

 Key: HIVE-9167
 URL: https://issues.apache.org/jira/browse/HIVE-9167
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
  Labels: Kanban
 Fix For: encryption-branch

 Attachments: HIVE-9167.4.patch


 The current implementation of the encryption testing framework from HIVE-8900 
 initializes a couple of encrypted databases to be used in .q test files. This 
 keeps the tests small, but it does not exercise all the details of the 
 encryption implementation, such as encrypted tables with 
 different encryption strengths in the same database.
 We need to allow this kind of encryption because that is how it will be used in the 
 real world, where a database has only a few encrypted tables (not the whole DB).
 Also, we need to make this encryption framework flexible so that we can 
 create/delete keys & zones on demand while running the .q files. 
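
For readers unfamiliar with the moving parts, the sketch below shows one way such on-demand key/zone handling can be wired against the standard Hadoop APIs (KeyProvider and HdfsAdmin). It is a minimal illustration only, not the HIVE-9167 patch; the helper class and method names are invented for this example, and it assumes the test cluster already has a KMS/JKS key provider configured.

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class EncryptionTestHelper {
  private final Configuration conf;
  private final KeyProvider provider;
  private final HdfsAdmin hdfsAdmin;

  public EncryptionTestHelper(Configuration conf, URI nameNodeUri) throws Exception {
    this.conf = conf;
    // Assumes a KMS/JKS key provider is configured for the test cluster.
    this.provider = KeyProviderFactory.getProviders(conf).get(0);
    this.hdfsAdmin = new HdfsAdmin(nameNodeUri, conf);
  }

  /** Create a key and turn the given warehouse path into an encryption zone. */
  public void createKeyAndZone(String keyName, int bitLength, Path zonePath) throws Exception {
    KeyProvider.Options options = new KeyProvider.Options(conf);
    options.setBitLength(bitLength);
    provider.createKey(keyName, options);
    provider.flush(); // persist the new key to the backing keystore
    hdfsAdmin.createEncryptionZone(zonePath, keyName);
  }

  /** Delete a key once the .q file is done with it. */
  public void deleteKey(String keyName) throws Exception {
    provider.deleteKey(keyName);
    provider.flush();
  }
}
{code}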



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9205) Change default tez install directory to use /tmp instead of /user and create the directory if it does not exist

2014-12-30 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261340#comment-14261340
 ] 

Vikram Dixit K commented on HIVE-9205:
--

The test failure is unrelated. [~prasanth_j] [~hagleitn], can you take a look?

 Change default tez install directory to use /tmp instead of /user and create 
 the directory if it does not exist
 ---

 Key: HIVE-9205
 URL: https://issues.apache.org/jira/browse/HIVE-9205
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0, 0.15.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.15.0, 0.14.1

 Attachments: HIVE-9205.1.patch, HIVE-9205.2.patch


 The common deployment scenario is to install the packages and start the services. 
 Creating the /user/user directory is currently an extra step during manual 
 installation. If the user brings up the hive shell with tez enabled while that 
 directory is missing, the result is an exception. The solution is to change the 
 default install directory to /tmp (so that we have the permissions to create 
 the directory /tmp/user) and to create the /tmp/user directory if it does not 
 already exist.
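
As a rough illustration of the "create it if missing" step described above (not the actual HIVE-9205 change; the class and method names here are invented), the per-user directory under /tmp can be ensured with the standard FileSystem API:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class TezInstallDirExample {
  /** Ensure the per-user install directory exists before localizing the tez jars. */
  public static Path ensureInstallDir(Configuration conf, String userName) throws IOException {
    Path dir = new Path("/tmp", userName); // rooted under /tmp instead of /user
    FileSystem fs = dir.getFileSystem(conf);
    if (!fs.exists(dir)) {
      // /tmp is world-writable, so any user can create (and own) its own subdirectory.
      fs.mkdirs(dir, new FsPermission((short) 0700));
    }
    return dir;
  }
}
{code}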



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9229) How to calculate the Kendall coefficient of correlation?

2014-12-30 Thread JIRA
Marcin Kosiński created HIVE-9229:
-

 Summary: How to calculate the Kendall coefficient of correlation?
 Key: HIVE-9229
 URL: https://issues.apache.org/jira/browse/HIVE-9229
 Project: Hive
  Issue Type: Wish
Reporter: Marcin Kosiński
Priority: Trivial


In this [wiki 
page](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF) 
there is a function `corr()` that calculates the Pearson coefficient of 
correlation, but my question is: is there any function in Hive that can 
calculate the Kendall coefficient of correlation for a pair of numeric columns?

If anyone has any idea on how to implement it, please answer 
[this](http://stackoverflow.com/questions/27231039/hive-how-to-calculate-the-kendall-coefficient-of-correlation)
 stackoverflow question.

Thanks for help,
Marcin
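
For reference, Kendall's tau-a for n paired observations is (C - D) / (n(n-1)/2), where C and D are the numbers of concordant and discordant pairs. Hive has no built-in for it (only corr() for Pearson), but a naive O(n^2) computation is straightforward; the class below is a stand-alone sketch for illustration, not a Hive UDF.

{code}
public class KendallTau {
  /** Kendall's tau-a over two equally sized numeric columns. */
  public static double tauA(double[] x, double[] y) {
    if (x.length != y.length || x.length < 2) {
      throw new IllegalArgumentException("need two equally sized columns with at least 2 rows");
    }
    long concordant = 0, discordant = 0;
    for (int i = 0; i < x.length; i++) {
      for (int j = i + 1; j < x.length; j++) {
        double s = Math.signum(x[i] - x[j]) * Math.signum(y[i] - y[j]);
        if (s > 0) concordant++;        // pair ordered the same way in both columns
        else if (s < 0) discordant++;   // pair ordered oppositely; ties contribute nothing
      }
    }
    long pairs = (long) x.length * (x.length - 1) / 2;
    return (concordant - discordant) / (double) pairs;
  }
}
{code}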



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña reassigned HIVE-8816:
-

Assignee: Sergio Peña  (was: Ferdinand Xu)

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8816:
--
Attachment: (was: HIVE-8816.1.patch)

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8816:
--
Attachment: HIVE-8816.1.patch

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.1.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-9229) How to calculate the Kendall coefficient of correlation?

2014-12-30 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang resolved HIVE-9229.
---
Resolution: Invalid

[~mkosinski], JIRA is used for reporting a problem or requesting a feature, but 
not for asking questions, for which the user mailing list is a better place.

 How to calculate the Kendall coefficient of correlation?
 

 Key: HIVE-9229
 URL: https://issues.apache.org/jira/browse/HIVE-9229
 Project: Hive
  Issue Type: Wish
Reporter: Marcin Kosiński
Priority: Trivial

 In this [wiki 
 page](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF) 
 there is a function `corr()` that calculates the Pearson coefficient of 
 correlation, but my question is: is there any function in Hive that can 
 calculate the Kendall coefficient of correlation for a pair of numeric columns?
 If anyone has any idea on how to implement it, please answer 
 [this](http://stackoverflow.com/questions/27231039/hive-how-to-calculate-the-kendall-coefficient-of-correlation)
  stackoverflow question.
 Thanks for help,
 Marcin



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HIVE-9167) Enhance encryption testing framework to allow create keys & zones inside .q files

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9167:
---
Comment: was deleted

(was: 

{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689535/HIVE-9167.4.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2221/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2221/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2221/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2221/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'beeline/src/java/org/apache/hive/beeline/Commands.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target itests/target itests/hcatalog-unit/target 
itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target 
itests/hive-minikdc/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target hcatalog/target hcatalog/core/target 
hcatalog/streaming/target hcatalog/server-extensions/target 
hcatalog/hcatalog-pig-adapter/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target accumulo-handler/target hwi/target 
common/target common/src/gen contrib/target service/target serde/target 
beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target
+ svn update
U
ql/src/test/org/apache/hadoop/hive/ql/plan/TestConditionalResolverCommonJoin.java
Uql/src/test/queries/clientnegative/columnstats_partlvl_invalid_values.q
Dql/src/test/results/clientnegative/columnstats_partlvl_invalid_values.q.out
A
ql/src/test/results/clientnegative/columnstats_partlvl_invalid_values.q.java1.7.out
A
ql/src/test/results/clientnegative/columnstats_partlvl_invalid_values.q.java1.8.out
Uql/src/test/results/clientnegative/unset_table_property.q.out
Dql/src/test/results/clientpositive/list_bucket_dml_10.q.out
Uql/src/test/results/clientpositive/stats_list_bucket.q.java1.8.out
Uql/src/test/results/clientpositive/multiMapJoin2.q.out
Uql/src/test/results/clientpositive/list_bucket_dml_12.q.java1.8.out
Uql/src/test/results/clientpositive/auto_join_without_localtask.q.out
Aql/src/test/results/clientpositive/list_bucket_dml_10.q.java1.7.out
Aql/src/test/results/clientpositive/list_bucket_dml_10.q.java1.8.out
Uql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
Uql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
U
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java
U
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MapJoinResolver.java
U
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SortMergeJoinTaskDispatcher.java

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1648561.

Updated to revision 1648561.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ 

[jira] [Commented] (HIVE-8410) Typo in DOAP - incorrect category URL

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261449#comment-14261449
 ] 

Brock Noland commented on HIVE-8410:


+1

 Typo in DOAP - incorrect category URL
 -

 Key: HIVE-8410
 URL: https://issues.apache.org/jira/browse/HIVE-8410
 Project: Hive
  Issue Type: Bug
 Environment: http://svn.apache.org/repos/asf/hive/trunk/doap_Hive.rdf
Reporter: Sebb
Assignee: Ferdinand Xu
 Attachments: HIVE-8410.1.patch, HIVE-8410.patch, doap_Hive.rdf


 NO PRECOMMIT TESTS
 The DOAP contains the following:
 {code}
 <category rdf:resource="http://www.apache.org/category/database" />
 {code}
 However, the URL is incorrect; it must be
 {code}
 <category rdf:resource="http://projects.apache.org/category/database" />
 {code}
 Please fix this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8821) Create unit test where we insert into dynamically partitioned table

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261451#comment-14261451
 ] 

Brock Noland commented on HIVE-8821:


+1

 Create unit test where we insert into dynamically partitioned table
 ---

 Key: HIVE-8821
 URL: https://issues.apache.org/jira/browse/HIVE-8821
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Dong Chen
 Fix For: encryption-branch

 Attachments: HIVE-8821.1.patch, HIVE-8821.patch


 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9205) Change default tez install directory to use /tmp instead of /user and create the directory if it does not exist

2014-12-30 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261450#comment-14261450
 ] 

Prasanth Jayachandran commented on HIVE-9205:
-

+1

 Change default tez install directory to use /tmp instead of /user and create 
 the directory if it does not exist
 ---

 Key: HIVE-9205
 URL: https://issues.apache.org/jira/browse/HIVE-9205
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0, 0.15.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.15.0, 0.14.1

 Attachments: HIVE-9205.1.patch, HIVE-9205.2.patch


 The common deployment scenario is to install the packages and start the services. 
 Creating the /user/user directory is currently an extra step during manual 
 installation. If the user brings up the hive shell with tez enabled while that 
 directory is missing, the result is an exception. The solution is to change the 
 default install directory to /tmp (so that we have the permissions to create 
 the directory /tmp/user) and to create the /tmp/user directory if it does not 
 already exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8821) Create unit test where we insert into dynamically partitioned table

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8821:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~dongc]! I have committed this to branch.

 Create unit test where we insert into dynamically partitioned table
 ---

 Key: HIVE-8821
 URL: https://issues.apache.org/jira/browse/HIVE-8821
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Dong Chen
 Fix For: encryption-branch

 Attachments: HIVE-8821.1.patch, HIVE-8821.patch


 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9221) Remove deprecation warning for hive.metastore.local

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9221:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Thank you Ashutosh for the review! I have committed this to trunk.

 Remove deprecation warning for hive.metastore.local
 ---

 Key: HIVE-9221
 URL: https://issues.apache.org/jira/browse/HIVE-9221
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.15.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor
 Fix For: 0.15.0

 Attachments: HIVE-9221.patch


 The property {{hive.metastore.local}} has been removed for years. We can 
 remove the warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9205) Change default tez install directory to use /tmp instead of /user and create the directory if it does not exist

2014-12-30 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261486#comment-14261486
 ] 

Gunther Hagleitner commented on HIVE-9205:
--

+1

 Change default tez install directory to use /tmp instead of /user and create 
 the directory if it does not exist
 ---

 Key: HIVE-9205
 URL: https://issues.apache.org/jira/browse/HIVE-9205
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0, 0.15.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.15.0, 0.14.1

 Attachments: HIVE-9205.1.patch, HIVE-9205.2.patch


 The common deployment scenario is to install the packages and start the services. 
 Creating the /user/user directory is currently an extra step during manual 
 installation. If the user brings up the hive shell with tez enabled while that 
 directory is missing, the result is an exception. The solution is to change the 
 default install directory to /tmp (so that we have the permissions to create 
 the directory /tmp/user) and to create the /tmp/user directory if it does not 
 already exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8920) IOContext problem with multiple MapWorks cloned for multi-insert [Spark Branch]

2014-12-30 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8920:
--
   Resolution: Fixed
Fix Version/s: spark-branch
   Status: Resolved  (was: Patch Available)

Patch #3 is committed to Spark branch.

 IOContext problem with multiple MapWorks cloned for multi-insert [Spark 
 Branch]
 ---

 Key: HIVE-8920
 URL: https://issues.apache.org/jira/browse/HIVE-8920
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Chao
Assignee: Xuefu Zhang
 Fix For: spark-branch

 Attachments: HIVE-8920.1-spark.patch, HIVE-8920.2-spark.patch, 
 HIVE-8920.3-spark.patch, HIVE-8920.4-spark.patch


 The following query will not work:
 {code}
 from (select * from table0 union all select * from table1) s
 insert overwrite table table3 select s.x, count(1) group by s.x
 insert overwrite table table4 select s.y, count(1) group by s.y;
 {code}
 Currently, the plan for this query, before SplitSparkWorkResolver, looks like 
 below:
 {noformat}
M1M2
  \  / \
   U3   R5
   |
   R4
 {noformat}
 In {{SplitSparkWorkResolver#splitBaseWork}}, it assumes that the 
 {{childWork}} is a ReduceWork, but for this case, you can see that for M2 the 
 childWork could be UnionWork U3. Thus, the code will fail.
 HIVE-9041 partially addressed the problem by removing the union task. 
 However, it's still necessary to clone M1 and M2 to support multi-insert. 
 Because M1 and M2 can run in a single JVM, the original solution of storing a 
 global IOContext will not work: M1 and M2 have different IO contexts, and 
 both need to be stored.
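
One general way to let several cloned MapWorks running in the same JVM keep separate IO contexts is to replace the single global slot with a registry keyed by an identifier such as the work's input name. The sketch below only illustrates that idea in generic form; the names are hypothetical and this is not the code committed in patch #3.

{code}
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative only: a per-work context registry instead of one global context. */
public class PerWorkContextRegistry<C> {
  private final ConcurrentHashMap<String, C> contexts = new ConcurrentHashMap<String, C>();

  /** Look up, or register, the context for a given (cloned) work / input name. */
  public C getOrCreate(String workName, C freshContext) {
    C existing = contexts.putIfAbsent(workName, freshContext);
    return existing != null ? existing : freshContext;
  }

  /** Drop the context once the corresponding work has finished. */
  public void clear(String workName) {
    contexts.remove(workName);
  }
}
{code}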



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9230) Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]

2014-12-30 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-9230:
-

 Summary: Followup for HIVE-9125, update ppd_join4.q.out for Spark 
[Spark Branch]
 Key: HIVE-9230
 URL: https://issues.apache.org/jira/browse/HIVE-9230
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-9230) Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]

2014-12-30 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-9230:
-

Assignee: Xuefu Zhang

 Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]
 ---

 Key: HIVE-9230
 URL: https://issues.apache.org/jira/browse/HIVE-9230
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Follow up on HIVE-3405

2014-12-30 Thread Alexander Pivovarov
Hi Everyone

Can anyone review HIVE-3405.3.patch?
https://issues.apache.org/jira/browse/HIVE-3405

Build 2218 was successful except for 2 errors in org.apache.hadoop.hive.cli
(which were also present in previous builds)

Test results:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2218/testReport

Thank you
Alex


[jira] [Commented] (HIVE-8817) Create unit test where we insert into an encrypted table and then read from it with pig

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261516#comment-14261516
 ] 

Brock Noland commented on HIVE-8817:


I don't think we want to reuse the entire tests, but we can look at 
{{TestHCatLoader}}, {{TestHCatStorer}}, and {{TestHCatHiveCompatibility}} as 
basic examples.

 Create unit test where we insert into an encrypted table and then read from 
 it with pig
 ---

 Key: HIVE-8817
 URL: https://issues.apache.org/jira/browse/HIVE-8817
 Project: Hive
  Issue Type: Sub-task
Affects Versions: encryption-branch
Reporter: Brock Noland
Assignee: Dong Chen
 Fix For: encryption-branch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8818) Create unit test where we insert into an encrypted table and then read from it with hcatalog mapreduce

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261517#comment-14261517
 ] 

Brock Noland commented on HIVE-8818:


A good example is {{TestSequenceFileReadWrite.testSequenceTableWriteReadMR}}: 

https://github.com/apache/hive/blob/trunk/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/mapreduce/TestSequenceFileReadWrite.java#L160
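
A rough outline of such a test job, modeled loosely on the example above, could read the encrypted table through HCatInputFormat and simply count rows. This is a sketch rather than the final test code; the database/table names are placeholders.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
import org.apache.hive.hcatalog.data.HCatRecord;
import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;

public class ReadEncryptedTableMR {
  static class CountMapper
      extends Mapper<WritableComparable<?>, HCatRecord, NullWritable, NullWritable> {
    @Override
    protected void map(WritableComparable<?> key, HCatRecord value, Context ctx) {
      // Just prove the rows can be read (and decrypted) through HCatalog.
      ctx.getCounter("hcat", "rows").increment(1);
    }
  }

  public static Job buildJob(Configuration conf) throws Exception {
    Job job = Job.getInstance(conf, "read-encrypted-table");
    job.setJarByClass(ReadEncryptedTableMR.class);
    HCatInputFormat.setInput(job, "default", "encrypted_table"); // placeholder db/table
    job.setInputFormatClass(HCatInputFormat.class);
    job.setMapperClass(CountMapper.class);
    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);
    return job;
  }
}
{code}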

 Create unit test where we insert into an encrypted table and then read from 
 it with hcatalog mapreduce
 --

 Key: HIVE-8818
 URL: https://issues.apache.org/jira/browse/HIVE-8818
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Dong Chen





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: HIVE-3405 initcap UDF

2014-12-30 Thread Thejas Nair
Thanks for the patch Alexander! I will review it.


On Mon, Dec 29, 2014 at 10:54 PM, Alexander Pivovarov
apivova...@gmail.com wrote:
 Hi Everyone

 I've attached patch HIVE-3405.3.patch, which includes:
 - initcap UDF implementation GenericUDFInitCap
 - vectorized expression StringInitCap
 - initcap unit test
 - udf_initcap.q itest qfile
 - fixed show_functions.q

 https://issues.apache.org/jira/browse/HIVE-3405


 Alex



[jira] [Commented] (HIVE-3405) UDF initcap to obtain a string with the first letter of each word in uppercase other letters in lowercase

2014-12-30 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261527#comment-14261527
 ] 

Thejas M Nair commented on HIVE-3405:
-

Thanks for the patch, [~apivovarov].
Can you also upload it to http://reviews.apache.org/ ? (Instructions, if you end 
up needing them: https://cwiki.apache.org/confluence/display/Hive/Review+Board 
)
Can you also format the code to use two spaces for indentation? 
(https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-CodingConventions)


 UDF initcap to obtain a string with the first letter of each word in 
 uppercase other letters in lowercase
 -

 Key: HIVE-3405
 URL: https://issues.apache.org/jira/browse/HIVE-3405
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.11.0, 0.13.0, 0.14.0, 
 0.15.0, 0.14.1
Reporter: Archana Nair
Assignee: Alexander Pivovarov
  Labels: patch
 Attachments: HIVE-3405.1.patch.txt, HIVE-3405.2.patch, 
 HIVE-3405.3.patch


 Hive's current releases lack an INITCAP function, which would return a String 
 with the first letter of each word in uppercase. INITCAP returns a String with 
 the first letter of each word in uppercase and all other letters in lowercase. 
 Words are delimited by white space. This will be useful for report generation.
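
To make the described behavior concrete, a plain-Java rendering of those semantics (whitespace-delimited words, first letter upper-cased, the rest lower-cased) might look like the following. This is an illustration only, not the GenericUDFInitCap implementation from the patch.

{code}
public class InitCapExample {
  /** initcap("hIVE is GREAT") returns "Hive Is Great". */
  public static String initcap(String s) {
    if (s == null) {
      return null;
    }
    StringBuilder out = new StringBuilder(s.length());
    boolean startOfWord = true;
    for (char c : s.toCharArray()) {
      if (Character.isWhitespace(c)) {
        startOfWord = true;
        out.append(c);
      } else {
        out.append(startOfWord ? Character.toUpperCase(c) : Character.toLowerCase(c));
        startOfWord = false;
      }
    }
    return out.toString();
  }
}
{code}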



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9119) ZooKeeperHiveLockManager does not use zookeeper in the proper way

2014-12-30 Thread Na Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Na Yang updated HIVE-9119:
--
Attachment: HIVE-9119.3.patch

 ZooKeeperHiveLockManager does not use zookeeper in the proper way
 -

 Key: HIVE-9119
 URL: https://issues.apache.org/jira/browse/HIVE-9119
 Project: Hive
  Issue Type: Improvement
  Components: Locking
Affects Versions: 0.13.0, 0.14.0, 0.13.1
Reporter: Na Yang
Assignee: Na Yang
 Attachments: HIVE-9119.1.patch, HIVE-9119.2.patch, HIVE-9119.3.patch


 ZooKeeperHiveLockManager does not use zookeeper in the proper way. 
 Currently a new zookeeper client instance is created for each 
 getlock/releaselock query, which sometimes causes the number of open 
 connections between HiveServer2 and ZooKeeper to exceed the maximum number 
 of connections the zookeeper server allows. 
 To use zookeeper as a distributed lock, there is no need to create a new 
 zookeeper instance for every getlock attempt. A single zookeeper instance 
 could be reused and shared by all ZooKeeperHiveLockManagers.
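
As a rough sketch of the proposed direction (one shared client instead of one per lock operation), a Curator-based singleton could look like this. The class name and the way the quorum string is passed in are illustrative, not the exact CuratorFrameworkSingleton added by the patch; zkQuorum would normally come from hive.zookeeper.quorum.

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SharedZooKeeperClient {
  private static CuratorFramework client;

  /** Lazily create one Curator client and hand it to every lock manager. */
  public static synchronized CuratorFramework get(String zkQuorum) {
    if (client == null) {
      client = CuratorFrameworkFactory.newClient(zkQuorum, new ExponentialBackoffRetry(1000, 3));
      client.start();
    }
    // Every getlock/releaselock call reuses this single connection.
    return client;
  }

  /** Close the shared client, e.g. when the server shuts down. */
  public static synchronized void close() {
    if (client != null) {
      client.close();
      client = null;
    }
  }
}
{code}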



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9119) ZooKeeperHiveLockManager does not use zookeeper in the proper way

2014-12-30 Thread Na Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261561#comment-14261561
 ] 

Na Yang commented on HIVE-9119:
---

[~leftylev], thank you for reviewing the patch. I uploaded a new patch 
according to your suggestion and also changed the 
hive.zookeeper.session.timeout to 60ms.

 ZooKeeperHiveLockManager does not use zookeeper in the proper way
 -

 Key: HIVE-9119
 URL: https://issues.apache.org/jira/browse/HIVE-9119
 Project: Hive
  Issue Type: Improvement
  Components: Locking
Affects Versions: 0.13.0, 0.14.0, 0.13.1
Reporter: Na Yang
Assignee: Na Yang
 Attachments: HIVE-9119.1.patch, HIVE-9119.2.patch, HIVE-9119.3.patch


 ZooKeeperHiveLockManager does not use zookeeper in the proper way. 
 Currently a new zookeeper client instance is created for each 
 getlock/releaselock query which sometimes causes the number of open 
 connections between
 HiveServer2 and ZooKeeper exceed the max connection number that zookeeper 
 server allows. 
 To use zookeeper as a distributed lock, there is no need to create a new 
 zookeeper instance for every getlock try. A single zookeeper instance could 
 be reused and shared by ZooKeeperHiveLockManagers.   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 29494: HIVE-9119: ZooKeeperHiveLockManager does not use zookeeper in the proper way

2014-12-30 Thread Na Yang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29494/
---

Review request for hive, Brock Noland, Szehon Ho, and Xuefu Zhang.


Bugs: HIVE-9119
https://issues.apache.org/jira/browse/HIVE-9119


Repository: hive-git


Description
---

1. Use a singleton ZooKeeper client for ZooKeeperHiveLockManager
2. Use CuratorFramework to manage the ZooKeeper client 


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 2e51518 
  itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 878202a 
  ql/pom.xml 84e912e 
  
ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/CuratorFrameworkSingleton.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
 1334a91 
  
ql/src/test/org/apache/hadoop/hive/ql/lockmgr/zookeeper/TestZookeeperLockManager.java
 aacb73f 

Diff: https://reviews.apache.org/r/29494/diff/


Testing
---


Thanks,

Na Yang



[jira] [Updated] (HIVE-3405) UDF initcap to obtain a string with the first letter of each word in uppercase other letters in lowercase

2014-12-30 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-3405:
--
Status: In Progress  (was: Patch Available)

 UDF initcap to obtain a string with the first letter of each word in 
 uppercase other letters in lowercase
 -

 Key: HIVE-3405
 URL: https://issues.apache.org/jira/browse/HIVE-3405
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.14.0, 0.13.0, 0.11.0, 0.10.0, 0.9.0, 0.8.1, 0.15.0, 
 0.14.1, 0.9.1
Reporter: Archana Nair
Assignee: Alexander Pivovarov
  Labels: patch
 Attachments: HIVE-3405.1.patch.txt, HIVE-3405.2.patch, 
 HIVE-3405.3.patch


 Hive's current releases lack an INITCAP function, which would return a String 
 with the first letter of each word in uppercase. INITCAP returns a String with 
 the first letter of each word in uppercase and all other letters in lowercase. 
 Words are delimited by white space. This will be useful for report generation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-3405) UDF initcap to obtain a string with the first letter of each word in uppercase other letters in lowercase

2014-12-30 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-3405:
--
Attachment: HIVE-3405.4.patch

use 2 spaces for indent

 UDF initcap to obtain a string with the first letter of each word in 
 uppercase other letters in lowercase
 -

 Key: HIVE-3405
 URL: https://issues.apache.org/jira/browse/HIVE-3405
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.11.0, 0.13.0, 0.14.0, 
 0.15.0, 0.14.1
Reporter: Archana Nair
Assignee: Alexander Pivovarov
  Labels: patch
 Attachments: HIVE-3405.1.patch.txt, HIVE-3405.2.patch, 
 HIVE-3405.3.patch, HIVE-3405.4.patch


 Hive's current releases lack an INITCAP function, which would return a String 
 with the first letter of each word in uppercase. INITCAP returns a String with 
 the first letter of each word in uppercase and all other letters in lowercase. 
 Words are delimited by white space. This will be useful for report generation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-3405) UDF initcap to obtain a string with the first letter of each word in uppercase other letters in lowercase

2014-12-30 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-3405:
--
Status: Patch Available  (was: In Progress)

 UDF initcap to obtain a string with the first letter of each word in 
 uppercase other letters in lowercase
 -

 Key: HIVE-3405
 URL: https://issues.apache.org/jira/browse/HIVE-3405
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.14.0, 0.13.0, 0.11.0, 0.10.0, 0.9.0, 0.8.1, 0.15.0, 
 0.14.1, 0.9.1
Reporter: Archana Nair
Assignee: Alexander Pivovarov
  Labels: patch
 Attachments: HIVE-3405.1.patch.txt, HIVE-3405.2.patch, 
 HIVE-3405.3.patch, HIVE-3405.4.patch


 Hive's current releases lack an INITCAP function, which would return a String 
 with the first letter of each word in uppercase. INITCAP returns a String with 
 the first letter of each word in uppercase and all other letters in lowercase. 
 Words are delimited by white space. This will be useful for report generation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8816:
--
Status: Patch Available  (was: Open)

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.2.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8816:
--
Attachment: HIVE-8816.2.patch

Hi [~Ferd]

Here's the test updated with the new testing framework.

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.2.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9205) Change default tez install directory to use /tmp instead of /user and create the directory if it does not exist

2014-12-30 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-9205:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Change default tez install directory to use /tmp instead of /user and create 
 the directory if it does not exist
 ---

 Key: HIVE-9205
 URL: https://issues.apache.org/jira/browse/HIVE-9205
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0, 0.15.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.15.0, 0.14.1

 Attachments: HIVE-9205.1.patch, HIVE-9205.2.patch


 The common deployment scenario is to install the packages and start the services. 
 Creating the /user/user directory is currently an extra step during manual 
 installation. If the user brings up the hive shell with tez enabled while that 
 directory is missing, the result is an exception. The solution is to change the 
 default install directory to /tmp (so that we have the permissions to create 
 the directory /tmp/user) and to create the /tmp/user directory if it does not 
 already exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9205) Change default tez install directory to use /tmp instead of /user and create the directory if it does not exist

2014-12-30 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261583#comment-14261583
 ] 

Vikram Dixit K commented on HIVE-9205:
--

Committed to trunk and branch 0.14.

 Change default tez install directory to use /tmp instead of /user and create 
 the directory if it does not exist
 ---

 Key: HIVE-9205
 URL: https://issues.apache.org/jira/browse/HIVE-9205
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.14.0, 0.15.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.15.0, 0.14.1

 Attachments: HIVE-9205.1.patch, HIVE-9205.2.patch


 The common deployment scenario is to install the packages and start the services. 
 Creating the /user/user directory is currently an extra step during manual 
 installation. If the user brings up the hive shell with tez enabled while that 
 directory is missing, the result is an exception. The solution is to change the 
 default install directory to /tmp (so that we have the permissions to create 
 the directory /tmp/user) and to create the /tmp/user directory if it does not 
 already exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8816:
--
Status: Patch Available  (was: Open)

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.3.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8816:
--
Attachment: HIVE-8816.3.patch

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.3.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8816:
--
Attachment: (was: HIVE-8816.2.patch)

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.3.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3405) UDF initcap to obtain a string with the first letter of each word in uppercase other letters in lowercase

2014-12-30 Thread Alexander Pivovarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261585#comment-14261585
 ] 

Alexander Pivovarov commented on HIVE-3405:
---

review board link https://reviews.apache.org/r/29495/

 UDF initcap to obtain a string with the first letter of each word in 
 uppercase other letters in lowercase
 -

 Key: HIVE-3405
 URL: https://issues.apache.org/jira/browse/HIVE-3405
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.11.0, 0.13.0, 0.14.0, 
 0.15.0, 0.14.1
Reporter: Archana Nair
Assignee: Alexander Pivovarov
  Labels: patch
 Attachments: HIVE-3405.1.patch.txt, HIVE-3405.2.patch, 
 HIVE-3405.3.patch, HIVE-3405.4.patch


 Hive's current releases lack an INITCAP function, which would return a String 
 with the first letter of each word in uppercase. INITCAP returns a String with 
 the first letter of each word in uppercase and all other letters in lowercase. 
 Words are delimited by white space. This will be useful for report generation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-8816:
--
Status: Open  (was: Patch Available)

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.3.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9230) Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]

2014-12-30 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9230:
--
Status: Patch Available  (was: Open)

 Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]
 ---

 Key: HIVE-9230
 URL: https://issues.apache.org/jira/browse/HIVE-9230
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-9230.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9230) Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]

2014-12-30 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9230:
--
Attachment: HIVE-9230.1-spark.patch

 Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]
 ---

 Key: HIVE-9230
 URL: https://issues.apache.org/jira/browse/HIVE-9230
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-9230.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8816:
---
Assignee: Ferdinand Xu  (was: Sergio Peña)

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Ferdinand Xu
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.3.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261586#comment-14261586
 ] 

Brock Noland commented on HIVE-8816:


Hi [~spena],

Thank you for updating the patch with the new changes! Since [~Ferd] was 
already working on this, I am going to keep it assigned to him. I see there is 
an additional fix, {{keyProvider.flush();}}, in this patch. Could you make that 
fix in a separate issue?

[~Ferd] - could you review Sergio's update and let me know what you think?

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.3.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9231) Encryption keys deletion need to be flushed so that it updates the JKS file

2014-12-30 Thread JIRA
Sergio Peña created HIVE-9231:
-

 Summary: Encryption keys deletion need to be flushed so that it 
updates the JKS file
 Key: HIVE-9231
 URL: https://issues.apache.org/jira/browse/HIVE-9231
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9231) Encryption keys deletion need to be flushed so that it updates the JKS file

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9231:
--
Attachment: HIVE-9231.1.patch

 Encryption keys deletion need to be flushed so that it updates the JKS file
 ---

 Key: HIVE-9231
 URL: https://issues.apache.org/jira/browse/HIVE-9231
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-9231.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9231) Encryption keys deletion need to be flushed so that it updates the JKS file

2014-12-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9231:
--
Status: Patch Available  (was: Open)

 Encryption keys deletion need to be flushed so that it updates the JKS file
 ---

 Key: HIVE-9231
 URL: https://issues.apache.org/jira/browse/HIVE-9231
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-9231.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9231) Encryption keys deletion need to be flushed so that it updates the JKS file

2014-12-30 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9231:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch!

 Encryption keys deletion need to be flushed so that it updates the JKS file
 ---

 Key: HIVE-9231
 URL: https://issues.apache.org/jira/browse/HIVE-9231
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-9231.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261592#comment-14261592
 ] 

Brock Noland commented on HIVE-8816:


[~Ferd] FYI, I just committed HIVE-9231 to branch, so the {{keyProvider.flush}} 
call will need to be removed from this patch.

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Ferdinand Xu
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.3.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9232) We should throw an error on Hadoop23Shims.createKey() if the key already exists

2014-12-30 Thread JIRA
Sergio Peña created HIVE-9232:
-

 Summary: We should throw an error on Hadoop23Shims.createKey() if 
the key already exists
 Key: HIVE-9232
 URL: https://issues.apache.org/jira/browse/HIVE-9232
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch


We should throw an error when creating an encryption key if the key already 
exists. 

Developers might forget to delete the keys during the q-tests, and the next 
q-test that creates the same key name with a different bit-length will not 
fail, causing the test to run successfully (but not correctly).

Let's just throw an error on Hadoop23Shims.createKey()
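
A minimal sketch of that guard, using the Hadoop KeyProvider API, is shown below. The method shape is illustrative only; the real Hadoop23Shims signature may differ.

{code}
import java.io.IOException;
import java.security.NoSuchAlgorithmException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;

public class CreateKeyGuardExample {
  public static void createKey(KeyProvider provider, Configuration conf,
      String keyName, int bitLength) throws IOException, NoSuchAlgorithmException {
    if (provider.getMetadata(keyName) != null) {
      // Fail loudly instead of silently reusing a key with a possibly different bit length.
      throw new IOException("key '" + keyName + "' already exists");
    }
    KeyProvider.Options options = new KeyProvider.Options(conf);
    options.setBitLength(bitLength);
    provider.createKey(keyName, options);
    provider.flush();
  }
}
{code}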



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9233) Delete default encrypted databases created by TestEncryptedHDFSCliDriver

2014-12-30 Thread JIRA
Sergio Peña created HIVE-9233:
-

 Summary: Delete default encrypted databases created by 
TestEncryptedHDFSCliDriver
 Key: HIVE-9233
 URL: https://issues.apache.org/jira/browse/HIVE-9233
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch


The default encrypted databases created/deleted by HIVE-8900:
- q_test_init_for_encryption.sql
- q_test_cleanup_for_encrypted.sql

are not needed anymore because of the changes made by HIVE-9167.

We should delete all code related to those default databases for testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9234) HiveServer2 leaks FileSystem objects in FileSystem.CACHE

2014-12-30 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-9234:
---
Component/s: HiveServer2

 HiveServer2 leaks FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-9234
 URL: https://issues.apache.org/jira/browse/HIVE-9234
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta

 Running over an extended period (48+ hrs), we've noticed HiveServer2 leaking 
 FileSystem objects in FileSystem.CACHE. Linked jiras were previous attempts 
 to fix it, but the issue still seems to be there. A workaround is to disable 
 the caching (by setting {{fs.hdfs.impl.disable.cache}} and 
 {{fs.file.impl.disable.cache}} to {{true}}), but creating new FileSystem 
 objects is expensive.
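
For completeness, the workaround mentioned above amounts to flipping the two cache-disable flags in the Configuration used to obtain the FileSystem, at the cost of constructing a new instance on every call. A minimal sketch follows; the class and method names are invented for illustration.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UncachedFileSystemExample {
  /** Returns a FileSystem that is not placed in FileSystem.CACHE; the caller must close() it. */
  public static FileSystem openUncached(Path path, Configuration base) throws IOException {
    Configuration conf = new Configuration(base);
    conf.setBoolean("fs.hdfs.impl.disable.cache", true);
    conf.setBoolean("fs.file.impl.disable.cache", true);
    return path.getFileSystem(conf);
  }
}
{code}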



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9234) HiveServer2 leaks FileSystem objects in FileSystem.CACHE

2014-12-30 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-9234:
--

 Summary: HiveServer2 leaks FileSystem objects in FileSystem.CACHE
 Key: HIVE-9234
 URL: https://issues.apache.org/jira/browse/HIVE-9234
 Project: Hive
  Issue Type: Bug
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta


Running over an extended period (48+ hrs), we've noticed HiveServer2 leaking 
FileSystem objects in FileSystem.CACHE. Linked jiras were previous attempts to 
fix it, but the issue still seems to be there. A workaround is to disable the 
caching (by setting {{fs.hdfs.impl.disable.cache}} and 
{{fs.file.impl.disable.cache}} to {{true}}), but creating new FileSystem 
objects is expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9234) HiveServer2 leaks FileSystem objects in FileSystem.CACHE

2014-12-30 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-9234:
---
Fix Version/s: 0.14.1

 HiveServer2 leaks FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-9234
 URL: https://issues.apache.org/jira/browse/HIVE-9234
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0, 0.13.0, 0.12.1, 0.14.0, 0.13.1
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.14.1


 Running over an extended period (48+ hrs), we've noticed HiveServer2 leaking 
 FileSystem objects in FileSystem.CACHE. Linked jiras were previous attempts 
 to fix it, but the issue still seems to be there. A workaround is to disable 
 the caching (by setting {{fs.hdfs.impl.disable.cache}} and 
 {{fs.file.impl.disable.cache}} to {{true}}), but creating new FileSystem 
 objects is expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9234) HiveServer2 leaks FileSystem objects in FileSystem.CACHE

2014-12-30 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-9234:
---
Affects Version/s: 0.12.1
   0.12.0
   0.13.0
   0.14.0
   0.13.1

 HiveServer2 leaks FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-9234
 URL: https://issues.apache.org/jira/browse/HIVE-9234
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0, 0.13.0, 0.12.1, 0.14.0, 0.13.1
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.14.1


 Running over an extended period (48+ hrs), we've noticed HiveServer2 leaking 
 FileSystem objects in FileSystem.CACHE. Linked jiras were previous attempts 
 to fix it, but the issue still seems to be there. A workaround is to disable 
 the caching (by setting {{fs.hdfs.impl.disable.cache}} and 
 {{fs.file.impl.disable.cache}} to {{true}}), but creating new FileSystem 
 objects is expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9232) We should throw an error on Hadoop23Shims.createKey() if the key already exists

2014-12-30 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9232:
--
Status: Patch Available  (was: Open)

 We should throw an error on Hadoop23Shims.createKey() if the key already 
 exists
 ---

 Key: HIVE-9232
 URL: https://issues.apache.org/jira/browse/HIVE-9232
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-9232.1.patch


 We should throw an error when creating an encryption key if the key already 
 exists. 
 Developers might forget to delete the keys during the q-tests, and the next 
 q-test that creates a key with the same name but a different bit length will 
 not fail, causing the test to run successfully (but not correctly).
 Let's just throw an error in Hadoop23Shims.createKey().
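For illustration, a hedged sketch of the intended check, written against Hadoop's 
generic KeyProvider API rather than the actual shim code (the class and method shown 
below are assumptions, not the real Hadoop23Shims signature):
{code}
// Hypothetical sketch, not the real Hadoop23Shims.createKey(): fail fast when a
// key with the same name already exists, so a q-test that forgot to clean up
// (or reuses a name with a different bit length) errors out immediately.
import java.io.IOException;
import java.security.NoSuchAlgorithmException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;

public class CreateKeySketch {
  public static void createKey(KeyProvider provider, String keyName, int bitLength)
      throws IOException, NoSuchAlgorithmException {
    if (provider.getMetadata(keyName) != null) {
      throw new IOException("Key " + keyName + " already exists");
    }
    KeyProvider.Options options = new KeyProvider.Options(new Configuration());
    options.setBitLength(bitLength);
    provider.createKey(keyName, options);
    provider.flush(); // persist the new key material
  }
}
{code}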



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9232) We should throw an error on Hadoop23Shims.createKey() if the key already exists

2014-12-30 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9232:
--
Attachment: HIVE-9232.1.patch

 We should throw an error on Hadoop23Shims.createKey() if the key already 
 exists
 ---

 Key: HIVE-9232
 URL: https://issues.apache.org/jira/browse/HIVE-9232
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-9232.1.patch


 We should throw an error when creating an encryption key if the key already 
 exists. 
 Developers might forget to delete the keys during the q-tests, and the next 
 q-test that creates a key with the same name but a different bit length will 
 not fail, causing the test to run successfully (but not correctly).
 Let's just throw an error in Hadoop23Shims.createKey().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8815) Create unit test join of encrypted and unencrypted table

2014-12-30 Thread Sergio Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261661#comment-14261661
 ] 

Sergio Peña commented on HIVE-8815:
---

Hi [~Ferd]

Could you update the patch so that it uses the new encryption testing framework 
from HIVE-9167?
There's an example in HIVE-8816.

 Create unit test join of encrypted and unencrypted table
 

 Key: HIVE-8815
 URL: https://issues.apache.org/jira/browse/HIVE-8815
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Ferdinand Xu
 Fix For: encryption-branch

 Attachments: HIVE-8815.1.patch, HIVE-8815.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.
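For illustration, one possible shape of such a test driven through Hive JDBC (the 
JDBC URL and the table and key names below are assumptions for illustration, not 
part of the existing patch):
{code}
// Hypothetical sketch of what the test needs to exercise, run through Hive JDBC.
// Assumed setup: encrypted_src sits in an encryption zone keyed with key_1,
// plain_src is unencrypted, and encrypted_target sits in a zone keyed with a
// separate key_2. Requires the hive-jdbc driver on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class EncryptedJoinTestSketch {
  public static void main(String[] args) throws SQLException {
    try (Connection conn =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement()) {
      // Join an encrypted table with an unencrypted one and write the result
      // into a third table that uses its own, separate encryption key.
      stmt.execute("INSERT OVERWRITE TABLE encrypted_target "
          + "SELECT e.id, p.value "
          + "FROM encrypted_src e JOIN plain_src p ON (e.id = p.id)");
    }
  }
}
{code}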



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9233) Delete default encrypted databases created by TestEncryptedHDFSCliDriver

2014-12-30 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9233:
--
Attachment: HIVE-9233.1.patch

 Delete default encrypted databases created by TestEncryptedHDFSCliDriver
 

 Key: HIVE-9233
 URL: https://issues.apache.org/jira/browse/HIVE-9233
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-9233.1.patch


 The default encrypted databases created/deleted by HIVE-8900:
 - q_test_init_for_encryption.sql
 - q_test_cleanup_for_encrypted.sql
 are not needed anymore because of the changes made by HIVE-9167.
 We should delete all code related to those default databases for testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9233) Delete default encrypted databases created by TestEncryptedHDFSCliDriver

2014-12-30 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9233:
--
Status: Patch Available  (was: Open)

 Delete default encrypted databases created by TestEncryptedHDFSCliDriver
 

 Key: HIVE-9233
 URL: https://issues.apache.org/jira/browse/HIVE-9233
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergio Peña
Assignee: Sergio Peña
 Fix For: encryption-branch

 Attachments: HIVE-9233.1.patch


 The default encrypted databases created/deleted by HIVE-8900:
 - q_test_init_for_encryption.sql
 - q_test_cleanup_for_encrypted.sql
 are not needed anymore because of the changes made by HIVE-9167.
 We should delete all code related to those default databases for testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8821) Create unit test where we insert into dynamically partitioned table

2014-12-30 Thread Sergio Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261686#comment-14261686
 ] 

Sergio Peña commented on HIVE-8821:
---

Hi [~dongc]

Could you update the patch so that it uses the new encryption testing framework 
from HIVE-9167?
There's an example in HIVE-8816.

 Create unit test where we insert into dynamically partitioned table
 ---

 Key: HIVE-8821
 URL: https://issues.apache.org/jira/browse/HIVE-8821
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Dong Chen
 Fix For: encryption-branch

 Attachments: HIVE-8821.1.patch, HIVE-8821.patch


 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8822) Create unit test where we insert into statically partitioned table

2014-12-30 Thread Sergio Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261687#comment-14261687
 ] 

Sergio Peña commented on HIVE-8822:
---

Hi [~dongc]

Could you update the patch so that it uses the new encryption testing framework 
from HIVE-9167?
There's an example in HIVE-8816.

 Create unit test where we insert into statically partitioned table
 --

 Key: HIVE-8822
 URL: https://issues.apache.org/jira/browse/HIVE-8822
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Dong Chen
 Fix For: encryption-branch

 Attachments: HIVE-8822.patch, 
 encryption_insert_partition_static.q.out.orig






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9230) Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]

2014-12-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261722#comment-14261722
 ] 

Hive QA commented on HIVE-9230:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689582/HIVE-9230.1-spark.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 7281 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_windowing
org.apache.hive.hcatalog.streaming.TestStreaming.testAddPartition
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/600/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/600/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-600/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12689582 - PreCommit-HIVE-SPARK-Build

 Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]
 ---

 Key: HIVE-9230
 URL: https://issues.apache.org/jira/browse/HIVE-9230
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-9230.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9230) Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]

2014-12-30 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9230:
--
   Resolution: Fixed
Fix Version/s: spark-branch
   Status: Resolved  (was: Patch Available)

Patch committed to Spark branch.

 Followup for HIVE-9125, update ppd_join4.q.out for Spark [Spark Branch]
 ---

 Key: HIVE-9230
 URL: https://issues.apache.org/jira/browse/HIVE-9230
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: spark-branch

 Attachments: HIVE-9230.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8816) Create unit test join of two encrypted tables with different keys

2014-12-30 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261731#comment-14261731
 ] 

Ferdinand Xu commented on HIVE-8816:


[~brocknoland], I am working on this jira but am blocked by the encryption zone 
inconsistency issue. Specifically, the encryption zone list comes back empty when I 
try to retrieve it in Hive.java. I think it may be related to HIVE-9231. I will 
rebase my code and try again.
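For reference, a minimal sketch of pulling the encryption zone list straight from 
HDFS (plain HdfsAdmin usage, not the Hive.java code path; the NameNode URI is an 
assumption). An empty iterator here would match the symptom above:
{code}
// Minimal sketch, not the Hive.java code path: list HDFS encryption zones via
// HdfsAdmin. If this iterator comes back empty while zones actually exist, that
// is the same inconsistency described above.
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.EncryptionZone;

public class ListZonesSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // NameNode URI is an assumption for illustration.
    HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://localhost:8020"), conf);
    RemoteIterator<EncryptionZone> zones = admin.listEncryptionZones();
    while (zones.hasNext()) {
      EncryptionZone zone = zones.next();
      System.out.println(zone.getPath() + " -> " + zone.getKeyName());
    }
  }
}
{code}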

 Create unit test join of two encrypted tables with different keys
 -

 Key: HIVE-8816
 URL: https://issues.apache.org/jira/browse/HIVE-8816
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Ferdinand Xu
 Fix For: encryption-branch

 Attachments: HIVE-8816.1.patch, HIVE-8816.3.patch, HIVE-8816.patch


 NO PRECOMMIT TESTS
 The results should be inserted into a third table encrypted with a separate 
 key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9084) Investigate IOContext object initialization problem [Spark Branch]

2014-12-30 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9084:
--
Resolution: Done
Status: Resolved  (was: Patch Available)

Investigation is done. Problem is fixed via HIVE-8920.

 Investigate IOContext object initialization problem [Spark Branch]
 --

 Key: HIVE-9084
 URL: https://issues.apache.org/jira/browse/HIVE-9084
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-9084.1-spark.patch, HIVE-9084.2-spark.patch, 
 HIVE-9084.2-spark.patch, HIVE-9084.3-spark.patch, HIVE-9084.4-spark.patch, 
 HIVE-9084.4-spark.patch


 In recent ptest run (Test results: 
 http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/511/testReport),
  test groupby_multi_single_reducer.q failed w/ the following stacktrace:
 {code}
 java.lang.RuntimeException: Map operator initialization failed
   at 
 org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.init(SparkMapRecordHandler.java:136)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunction.call(HiveMapFunction.java:54)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveMapFunction.call(HiveMapFunction.java:29)
   at 
 org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:167)
   at 
 org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:167)
   at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:601)
   at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:601)
   at 
 org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
   at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
   at org.apache.spark.scheduler.Task.run(Task.scala:56)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.NullPointerException
   at org.apache.hadoop.hive.ql.io.IOContext.copy(IOContext.java:119)
   at 
 org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.init(SparkMapRecordHandler.java:97)
   ... 16 more
 {code}
 This failure is again about IOContext object, which needs further 
 investigation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 29498: Upgrade JavaEWAH version to allow for unsorted bitset creation

2014-12-30 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29498/
---

Review request for hive.


Bugs: HIVE-8181
https://issues.apache.org/jira/browse/HIVE-8181


Repository: hive-git


Description
---

In its latest release, JavaEWAH has removed the restriction that bits can only be 
set in ascending order. 

Currently, using the {{ewah_bitmap}} UDAF requires a {{SORT BY}}.

{code}
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.RuntimeException: Can't set bits out of order with 
EWAHCompressedBitmap
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:824)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
at 
org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at 
org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)
... 7 more
Caused by: java.lang.RuntimeException: Can't set bits out of order with 
EWAHCompressedBitmap
at 
{code}
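For context, a small sketch of the ordering constraint using plain JavaEWAH (not 
Hive's UDAF code):
{code}
// Small sketch of the constraint, using plain JavaEWAH rather than Hive's UDAF
// code. Bits are added in ascending order, which is exactly what the SORT BY
// currently guarantees for ewah_bitmap; feeding positions out of order is what
// triggers the "Can't set bits out of order" failure shown above.
import com.googlecode.javaewah.EWAHCompressedBitmap;

public class EwahOrderSketch {
  public static void main(String[] args) {
    EWAHCompressedBitmap a = new EWAHCompressedBitmap();
    a.set(3);
    a.set(10);   // ascending order: accepted

    EWAHCompressedBitmap b = new EWAHCompressedBitmap();
    b.set(7);

    EWAHCompressedBitmap union = a.or(b);
    System.out.println(union.cardinality()); // 3 bits set: 3, 7, 10
  }
}
{code}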


Diffs
-

  pom.xml 0e30078 
  
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/AbstractGenericUDFEWAHBitmapBop.java
 58ea3ba 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFEWAHBitmap.java 
e4b412e 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java e3fb558 
  
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapAnd.java 
7838b54 
  
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapEmpty.java
 4a14a65 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapOr.java 
d438f82 
  ql/src/test/queries/clientpositive/index_bitmap2.q 89fbe76 
  ql/src/test/queries/clientpositive/udf_bitmap_empty.q 142b248 
  ql/src/test/results/clientpositive/index_bitmap2.q.out 73c5b90 
  ql/src/test/results/clientpositive/index_bitmap3.q.out 599bf3a 
  ql/src/test/results/clientpositive/index_bitmap_auto.q.out 81c1795 
  ql/src/test/results/clientpositive/udf_bitmap_and.q.out 8c93398 
  ql/src/test/results/clientpositive/udf_bitmap_empty.q.out ca96e78 
  ql/src/test/results/clientpositive/udf_bitmap_or.q.out 43521da 

Diff: https://reviews.apache.org/r/29498/diff/


Testing
---


Thanks,

Navis Ryu



Hive 0.14.1 release

2014-12-30 Thread Vikram Dixit K
Hi Folks,

Given that there have been a number of fixes that have gone into branch
0.14 in the past 8 weeks, I would like to make a release of 0.14.1 soon. I
would like to fix some of the release issues as well this time around. I am
thinking of some time around 15th January for getting an RC out. Please let
me know if you have any concerns. Also, from a previous thread, I would
like to make this release the 1.0 branch of hive. The process for getting
jiras into this release is going to be the same as the previous one viz.:

1. Mark the jira with fix version 0.14.1 and update the status to
blocker/critical.
2. If a committer +1s the patch for 0.14.1, it is good to go in. Please
mention me in the jira in case you are not sure if the jira should make it
for 0.14.1.

Thanks
Vikram.

