[jira] [Updated] (HIVE-5430) NOT expression doesn't handle nulls correctly.

2013-10-07 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5430:
---

Attachment: HIVE-5430.2.patch

> NOT expression doesn't handle nulls correctly.
> --
>
> Key: HIVE-5430
> URL: https://issues.apache.org/jira/browse/HIVE-5430
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5430.1.patch, HIVE-5430.2.patch
>
>
> NOT expression doesn't handle nulls correctly.
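For context, SQL NOT follows three-valued logic: NOT NULL must evaluate to NULL, never TRUE or FALSE. A minimal Python sketch of the expected semantics (the helper name is illustrative, not Hive code; None models SQL NULL):

```python
def sql_not(value):
    """Three-valued-logic NOT: None models SQL NULL."""
    if value is None:
        return None  # NOT NULL stays NULL
    return not value

# Expected truth table for SQL NOT:
assert sql_not(True) is False
assert sql_not(False) is True
assert sql_not(None) is None
```

A vectorized NOT implementation has to propagate the null flag for each row rather than negating the underlying boolean value unconditionally.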



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5430) NOT expression doesn't handle nulls correctly.

2013-10-07 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5430:
---

Status: Patch Available  (was: Open)

> NOT expression doesn't handle nulls correctly.
> --
>
> Key: HIVE-5430
> URL: https://issues.apache.org/jira/browse/HIVE-5430
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5430.1.patch, HIVE-5430.2.patch
>
>
> NOT expression doesn't handle nulls correctly.





[jira] [Commented] (HIVE-4898) make vectorized math functions work end-to-end (update VectorizationContext.java)

2013-10-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788931#comment-13788931
 ] 

Hive QA commented on HIVE-4898:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12607222/HIVE-4898.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4059 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1068/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1068/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

> make vectorized math functions work end-to-end (update 
> VectorizationContext.java)
> -
>
> Key: HIVE-4898
> URL: https://issues.apache.org/jira/browse/HIVE-4898
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: vectorization-branch
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Attachments: HIVE-4898.3.patch, HIVE-4898.3.patch
>
>
> The vectorized math function VectorExpression classes were added in 
> HIVE-4822. This JIRA is to allow those to actually be used in a SQL query 
> end-to-end. This requires updating VectorizationContext to use the new 
> classes in vectorized expression creation.





Re: [VOTE] Apache Hive 0.12.0 Release Candidate 0

2013-10-07 Thread Thejas Nair
Carl pointed out some issues with the RC. I will roll out a new RC
to address those (hopefully sometime tomorrow).
If anybody finds additional issues, please let me know so that I can
address those as well in the next RC.

HIVE-5489 - NOTICE copyright dates are out of date
HIVE-5488 - some files are missing apache license headers



On Mon, Oct 7, 2013 at 4:38 PM, Thejas Nair  wrote:
> Yes, that is the correct tag. Thanks for pointing it out.
> I also updated the tag, as it was a little behind what is in the RC
> (found some issues with maven-publish).
>
> I have also updated the release vote email template on the Hive
> HowToRelease wiki page to include a note about the tag.
>
> Thanks,
> Thejas
>
>
>
> On Mon, Oct 7, 2013 at 4:26 PM, Brock Noland  wrote:
>> Hi Thejas,
>>
>> Thank you very much for the hard work!  I believe the vote email should
>> contain a link to the tag we are voting on. I assume the tag is:
>> release-0.12.0-rc0 (
>> http://svn.apache.org/viewvc/hive/tags/release-0.12.0-rc0/). Is that
>> correct?
>>
>> Brock
>>
>>
>> On Mon, Oct 7, 2013 at 6:02 PM, Thejas Nair  wrote:
>>
>>> Apache Hive 0.12.0 Release Candidate 0 is available here:
>>> http://people.apache.org/~thejas/hive-0.12.0-rc0/
>>>
>>> Maven artifacts are available here:
>>> https://repository.apache.org/content/repositories/orgapachehive-138/
>>>
>>> This release includes 406 fixed issues, among them several new features
>>> such as the date and varchar data types, optimizer improvements, ORC
>>> format improvements, and many bug fixes. HCatalog packages have now
>>> moved to org.apache.hive.hcatalog (from org.apache.hcatalog), and the
>>> Maven packages are published under org.apache.hive.hcatalog.
>>>
>>> Voting will conclude in 72 hours.
>>>
>>> Hive PMC Members: Please test and vote.
>>>
>>> Thanks,
>>> Thejas
>>>
>>> --
>>> CONFIDENTIALITY NOTICE
>>> NOTICE: This message is intended for the use of the individual or entity to
>>> which it is addressed and may contain information that is confidential,
>>> privileged and exempt from disclosure under applicable law. If the reader
>>> of this message is not the intended recipient, you are hereby notified that
>>> any printing, copying, dissemination, distribution, disclosure or
>>> forwarding of this communication is strictly prohibited. If you have
>>> received this communication in error, please contact the sender immediately
>>> and delete it from your system. Thank You.
>>>
>>
>>
>>
>> --
>> Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org



[jira] [Updated] (HIVE-5478) WebHCat e2e testsuite for hcat authorization tests needs some fixes

2013-10-07 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5478:
-

Attachment: HIVE-5478.1.patch

Thanks Thejas for the quick review. I have attached an updated patch with the 
minor change.

> WebHCat e2e testsuite for hcat authorization tests needs some fixes
> ---
>
> Key: HIVE-5478
> URL: https://issues.apache.org/jira/browse/HIVE-5478
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5478.1.patch, HIVE-5478.patch
>
>
> Here are the issues:
> 1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be 
> testdist root otherwise the ant command fails to look for 
> resource/default.res.
> 2. A few tests DB_OPS_5 and TABLE_OPS_2 were relying on default permissions 
> on the hive warehouse directory which can vary based on the environment, 
> improved the test to check what is set.
> 3. DB_OPS_18 error message is old, now we get a more specific message, 
> updated to verify the new one.
> NO PRECOMMIT TESTS





[jira] [Commented] (HIVE-5489) NOTICE copyright dates are out of date

2013-10-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1377#comment-1377
 ] 

Thejas M Nair commented on HIVE-5489:
-

Also, the README says "We have provided upgrade scripts for Derby and
MySQL databases."
We should include the additional supported databases.
The paragraph about replacing old copies of hive-default.xml can be removed,
since we ignore that file.

(Thanks to Carl for reporting these issues).

> NOTICE copyright dates are out of date
> --
>
> Key: HIVE-5489
> URL: https://issues.apache.org/jira/browse/HIVE-5489
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Blocker
>
> This needs to be updated for the 0.12 release.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5489) NOTICE copyright dates are out of date

2013-10-07 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5489:
---

 Summary: NOTICE copyright dates are out of date
 Key: HIVE-5489
 URL: https://issues.apache.org/jira/browse/HIVE-5489
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Blocker


This needs to be updated for the 0.12 release.






[jira] [Updated] (HIVE-5478) WebHCat e2e testsuite for hcat authorization tests needs some fixes

2013-10-07 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5478:
-

Description: 
Here are the issues:
1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be 
testdist root otherwise the ant command fails to look for resource/default.res.
2. A few tests DB_OPS_5 and TABLE_OPS_2 were relying on default permissions on 
the hive warehouse directory which can vary based on the environment, improved 
the test to check what is set.
3. DB_OPS_18 error message is old, now we get a more specific message, updated 
to verify the new one.

NO PRECOMMIT TESTS

  was:
Here are the issues:
1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be 
testdist root otherwise the ant command fails to look for resource/default.res.
2. A few tests DB_OPS_5 and TABLE_OPS_2 were relying on default permissions on 
the hive warehouse directory which can vary based on the environment, improved 
the test to check what is set.
3. DB_OPS_18 error message is old, now we get a more specific message, updated 
to verify the new one.


> WebHCat e2e testsuite for hcat authorization tests needs some fixes
> ---
>
> Key: HIVE-5478
> URL: https://issues.apache.org/jira/browse/HIVE-5478
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5478.patch
>
>
> Here are the issues:
> 1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be 
> testdist root otherwise the ant command fails to look for 
> resource/default.res.
> 2. A few tests DB_OPS_5 and TABLE_OPS_2 were relying on default permissions 
> on the hive warehouse directory which can vary based on the environment, 
> improved the test to check what is set.
> 3. DB_OPS_18 error message is old, now we get a more specific message, 
> updated to verify the new one.
> NO PRECOMMIT TESTS





[jira] [Updated] (HIVE-5418) Integer overflow bug in ConditionalResolverCommonJoin.AliasFileSizePair

2013-10-07 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5418:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Steven!

> Integer overflow bug in ConditionalResolverCommonJoin.AliasFileSizePair
> ---
>
> Key: HIVE-5418
> URL: https://issues.apache.org/jira/browse/HIVE-5418
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0, 0.13.0
>Reporter: Steven Wong
>Assignee: Steven Wong
> Fix For: 0.13.0
>
> Attachments: HIVE-5418.0.patch, HIVE-5418.1.patch
>
>
> Sometimes, auto map join conversion unexpectedly fails to choose map join 
> over a common join, even if the auto map join conversion's size criterion is 
> satisfied.
> This is caused by an integer overflow bug in the method {{compareTo}} of the 
> class {{ConditionalResolverCommonJoin.AliasFileSizePair}}.
> The bug is triggered only if the big table size exceeds the small table size 
> by at least 2**31 bytes.
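The described overflow is the classic subtraction-based compareTo pitfall: casting a long difference to int discards the high bits, so a difference of 2**31 bytes flips the sign. A Python sketch simulating Java's 32-bit int truncation (function names are illustrative, not the actual Hive code):

```python
def to_int32(x):
    """Truncate to Java int semantics (32-bit two's complement)."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

def broken_compare(size_a, size_b):
    # mimics "return (int) (sizeA - sizeB)" -- overflows for large gaps
    return to_int32(size_a - size_b)

def safe_compare(size_a, size_b):
    # overflow-free three-way comparison
    return (size_a > size_b) - (size_a < size_b)

big, small = 10 + 2**31, 10
assert broken_compare(big, small) < 0  # wrong: big table sorts as "smaller"
assert safe_compare(big, small) > 0    # correct ordering
```

This is why the bug only surfaces when the size gap reaches 2**31 bytes: smaller differences fit in 32 bits and the truncation is harmless.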





[jira] [Commented] (HIVE-5488) some files are missing apache license headers

2013-10-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1372#comment-1372
 ] 

Thejas M Nair commented on HIVE-5488:
-

List of files that should have the AL header but are missing it:

hbase-handler/src/test/templates/TestHBaseNegativeCliDriver.vm
metastore/src/model/org/apache/hadoop/hive/metastore/model/MDelegationToken.java
metastore/src/model/org/apache/hadoop/hive/metastore/model/MMasterKey.java
ql/src/java/org/apache/hadoop/hive/ql/udf/GenericUDFDecode.java
ql/src/java/org/apache/hadoop/hive/ql/udf/GenericUDFEncode.java
ql/src/java/org/apache/hadoop/hive/ql/udf/UDFBase64.java
ql/src/java/org/apache/hadoop/hive/ql/udf/UDFUnbase64.java
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFUnixTimeStamp.java
ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto
ql/src/test/org/apache/hadoop/hive/ql/io/udf/Rot13OutputFormat.java
ql/src/test/org/apache/hadoop/hive/ql/udf/TestGenericUDFDecode.java
ql/src/test/org/apache/hadoop/hive/ql/udf/TestGenericUDFEncode.java
ql/src/test/org/apache/hadoop/hive/ql/udf/TestToInteger.java
ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFBase64.java
ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFHex.java
ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFUnbase64.java
ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFUnhex.java
ql/src/test/org/apache/hadoop/hive/serde2/CustomNonSettableStructObjectInspector1.java
ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe1.java
ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe2.java
ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe3.java
serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/primitive/TestPrimitiveObjectInspectorUtils.java
service/src/java/org/apache/hive/service/auth/TSetIpAddressProcessor.java
service/src/java/org/apache/hive/service/auth/TUGIContainingProcessor.java
shims/src/common-secure/java/org/apache/hadoop/hive/thrift/DBTokenStore.java
shims/src/common-secure/test/org/apache/hadoop/hive/thrift/TestDBTokenStore.java
shims/src/common/java/org/apache/hadoop/hive/shims/HiveEventCounter.java
testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/CleanupPhase.java
testutils/ptest2/src/test/java/org/apache/hive/ptest/execution/TestCleanupPhase.java

> some files are missing apache license headers
> -
>
> Key: HIVE-5488
> URL: https://issues.apache.org/jira/browse/HIVE-5488
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Blocker
>
> Around 29 files that should have Apache license headers are missing them.
> This needs to be fixed for the 0.12 release.





[jira] [Created] (HIVE-5488) some files are missing apache license headers

2013-10-07 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5488:
---

 Summary: some files are missing apache license headers
 Key: HIVE-5488
 URL: https://issues.apache.org/jira/browse/HIVE-5488
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Blocker


Around 29 files that should have Apache license headers are missing them.
This needs to be fixed for the 0.12 release.






[jira] [Updated] (HIVE-5487) custom LogicalIOProcessor - reduce record processor - multiple inputs

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5487:


Attachment: HIVE-5487.1.patch

> custom LogicalIOProcessor - reduce record processor - multiple inputs
> -
>
> Key: HIVE-5487
> URL: https://issues.apache.org/jira/browse/HIVE-5487
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: tez-branch
>
> Attachments: HIVE-5487.1.patch
>
>
> Changes to ReduceRecordProcessor to merge multiple inputs for a shuffle join.





[jira] [Commented] (HIVE-5270) Enable hash joins using tez

2013-10-07 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788878#comment-13788878
 ] 

Gunther Hagleitner commented on HIVE-5270:
--

Ugh. Tired. I meant Thanks Vikram!

> Enable hash joins using tez
> ---
>
> Key: HIVE-5270
> URL: https://issues.apache.org/jira/browse/HIVE-5270
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: BroadCastJoinsHiveOnTez.pdf, HIVE-5270.1.patch, 
> HIVE-5270.2.patch, HIVE-5270.4.patch
>
>
> Since hash join involves replicating a hash table to all the map tasks, an 
> equivalent operation needs to be performed in tez. In the tez world, such an 
> operation is done via a broadcast edge (TEZ-410). We need to rework the 
> planning and execution phases within hive for this.
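The pattern described above — build a hash table from the small side, replicate it to every map task, and stream the big side against it — can be sketched in plain Python (purely illustrative of the map-side hash-join shape, not Hive or Tez code):

```python
from collections import defaultdict

def build_hash_table(small_rows, key):
    """Build phase: hash the small table on the join key.

    In the Tez setup this table is what the broadcast edge ships
    to every downstream task."""
    table = defaultdict(list)
    for row in small_rows:
        table[row[key]].append(row)
    return table

def probe(big_rows, table, key):
    """Probe phase: each task streams its split of the big table."""
    for row in big_rows:
        for match in table.get(row[key], []):
            yield {**row, **match}

small = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
big = [{"id": 1, "v": 10}, {"id": 3, "v": 30}]
joined = list(probe(big, build_hash_table(small, "id"), "id"))
# only id=1 appears on both sides, so joined has one row
```

The point of the broadcast edge is that the build phase happens once per small table and its output is distributed to all consumers, instead of shuffling both sides.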





[jira] [Resolved] (HIVE-5270) Enable hash joins using tez

2013-10-07 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-5270.
--

Resolution: Fixed

Committed to branch. Thanks Gunther!

> Enable hash joins using tez
> ---
>
> Key: HIVE-5270
> URL: https://issues.apache.org/jira/browse/HIVE-5270
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: BroadCastJoinsHiveOnTez.pdf, HIVE-5270.1.patch, 
> HIVE-5270.2.patch, HIVE-5270.4.patch
>
>
> Since hash join involves replicating a hash table to all the map tasks, an 
> equivalent operation needs to be performed in tez. In the tez world, such an 
> operation is done via a broadcast edge (TEZ-410). We need to rework the 
> planning and execution phases within hive for this.





[jira] [Commented] (HIVE-5270) Enable hash joins using tez

2013-10-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788874#comment-13788874
 ] 

Thejas M Nair commented on HIVE-5270:
-

+1

> Enable hash joins using tez
> ---
>
> Key: HIVE-5270
> URL: https://issues.apache.org/jira/browse/HIVE-5270
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: BroadCastJoinsHiveOnTez.pdf, HIVE-5270.1.patch, 
> HIVE-5270.2.patch, HIVE-5270.4.patch
>
>
> Since hash join involves replicating a hash table to all the map tasks, an 
> equivalent operation needs to be performed in tez. In the tez world, such an 
> operation is done via a broadcast edge (TEZ-410). We need to rework the 
> planning and execution phases within hive for this.





[jira] [Created] (HIVE-5487) custom LogicalIOProcessor - reduce record processor - multiple inputs

2013-10-07 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5487:
---

 Summary: custom LogicalIOProcessor - reduce record processor - 
multiple inputs
 Key: HIVE-5487
 URL: https://issues.apache.org/jira/browse/HIVE-5487
 Project: Hive
  Issue Type: Bug
  Components: Tez
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: tez-branch


Changes to ReduceRecordProcessor to merge multiple inputs for a shuffle join.






[jira] [Updated] (HIVE-5270) Enable hash joins using tez

2013-10-07 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5270:
-

Attachment: HIVE-5270.4.patch

> Enable hash joins using tez
> ---
>
> Key: HIVE-5270
> URL: https://issues.apache.org/jira/browse/HIVE-5270
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: BroadCastJoinsHiveOnTez.pdf, HIVE-5270.1.patch, 
> HIVE-5270.2.patch, HIVE-5270.4.patch
>
>
> Since hash join involves replicating a hash table to all the map tasks, an 
> equivalent operation needs to be performed in tez. In the tez world, such an 
> operation is done via a broadcast edge (TEZ-410). We need to rework the 
> planning and execution phases within hive for this.





[jira] [Updated] (HIVE-5480) WebHCat e2e tests for doAs feature are failing

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5480:


   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk.
Thanks for the contribution Deepesh!


> WebHCat e2e tests for doAs feature are failing
> --
>
> Key: HIVE-5480
> URL: https://issues.apache.org/jira/browse/HIVE-5480
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Fix For: 0.13.0
>
> Attachments: HIVE-5480.patch
>
>
> The WebHCat testsuite has two tests failing:
> 1. doAsTests_6 - The test was assuming that the metadata can be read even if 
> reading data cannot be. As part of the setup we are using the 
> StorageBasedAuthorizationProvider which will not allow for this operation to 
> succeed. Updated the test to check for the failure and verify the error 
> message.
> 2. doAsTests_7 - Updated the error message to reflect the current error 
> message which looks correct.





[jira] [Updated] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5452:


Fix Version/s: 0.13.0

> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Fix For: 0.13.0
>
> Attachments: HIVE-5452.patch
>
>
> HCatalog e2e test Pig_HBase_1 tries to read data from a table it created 
> using the org.apache.hcatalog.hbase.HBaseHCatStorageHandler using the hcat 
> loader org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The Pig script should be using
> org.apache.hcatalog.pig.HCatLoader() instead.
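The root cause is that the two HCatalog namespaces define structurally identical but distinct classes, and a cast across them can never succeed at runtime. A tiny Python analogue of the type mismatch (class names are illustrative stand-ins only):

```python
class OldInputJobInfo:
    """Stands in for org.apache.hcatalog.mapreduce.InputJobInfo."""
    pass

class NewInputJobInfo:
    """Stands in for org.apache.hive.hcatalog.mapreduce.InputJobInfo."""
    pass

info = NewInputJobInfo()
# Same role and shape, but unrelated types as far as the runtime is concerned:
assert not isinstance(info, OldInputJobInfo)
```

Mixing the old-namespace storage handler with the new-namespace loader puts one package's object where the other package's class is expected, hence the ClassCastException.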





[jira] [Updated] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5452:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to trunk.
Thanks Deepesh for the patch, and Eugene for the review!


> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Fix For: 0.13.0
>
> Attachments: HIVE-5452.patch
>
>
> HCatalog e2e test Pig_HBase_1 tries to read data from a table it created 
> using the org.apache.hcatalog.hbase.HBaseHCatStorageHandler using the hcat 
> loader org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The Pig script should be using
> org.apache.hcatalog.pig.HCatLoader() instead.





[jira] [Commented] (HIVE-5478) WebHCat e2e testsuite for hcat authorization tests needs some fixes

2013-10-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788836#comment-13788836
 ] 

Thejas M Nair commented on HIVE-5478:
-

Looks like we can change "

> WebHCat e2e testsuite for hcat authorization tests needs some fixes
> ---
>
> Key: HIVE-5478
> URL: https://issues.apache.org/jira/browse/HIVE-5478
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5478.patch
>
>
> Here are the issues:
> 1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be 
> testdist root otherwise the ant command fails to look for 
> resource/default.res.
> 2. A few tests DB_OPS_5 and TABLE_OPS_2 were relying on default permissions 
> on the hive warehouse directory which can vary based on the environment, 
> improved the test to check what is set.
> 3. DB_OPS_18 error message is old, now we get a more specific message, 
> updated to verify the new one.





[jira] [Commented] (HIVE-5480) WebHCat e2e tests for doAs feature are failing

2013-10-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788838#comment-13788838
 ] 

Thejas M Nair commented on HIVE-5480:
-

+1

> WebHCat e2e tests for doAs feature are failing
> --
>
> Key: HIVE-5480
> URL: https://issues.apache.org/jira/browse/HIVE-5480
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5480.patch
>
>
> WebHCat testsuite have two tests failing:
> 1. doAsTests_6 - The test was assuming that the metadata can be read even if 
> reading data cannot be. As part of the setup we are using the 
> StorageBasedAuthorizationProvider which will not allow for this operation to 
> succeed. Updated the test to check for the failure and verify the error 
> message.
> 2. doAsTests_7 - Updated the error message to reflect the current error 
> message which looks correct.





[jira] [Commented] (HIVE-5478) WebHCat e2e testsuite for hcat authorization tests needs some fixes

2013-10-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788832#comment-13788832
 ] 

Hive QA commented on HIVE-5478:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12607248/HIVE-5478.patch

{color:green}SUCCESS:{color} +1 4060 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1067/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1067/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> WebHCat e2e testsuite for hcat authorization tests needs some fixes
> ---
>
> Key: HIVE-5478
> URL: https://issues.apache.org/jira/browse/HIVE-5478
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5478.patch
>
>
> Here are the issues:
> 1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be 
> testdist root otherwise the ant command fails to look for 
> resource/default.res.
> 2. A few tests DB_OPS_5 and TABLE_OPS_2 were relying on default permissions 
> on the hive warehouse directory which can vary based on the environment, 
> improved the test to check what is set.
> 3. DB_OPS_18 error message is old, now we get a more specific message, 
> updated to verify the new one.





[jira] [Commented] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788831#comment-13788831
 ] 

Thejas M Nair commented on HIVE-5452:
-

+1

> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5452.patch
>
>
> HCatalog e2e test Pig_HBase_1 tries to read data from a table it created 
> using the org.apache.hcatalog.hbase.HBaseHCatStorageHandler using the hcat 
> loader org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The pig script should be using 
> org.apache.hcatalog.pig.HCatLoader() instead.
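The root cause is that the two HCatalog package hierarchies (org.apache.hcatalog and org.apache.hive.hcatalog) define unrelated classes that merely share a simple name. A minimal, hypothetical Java sketch of why such a cast fails (the nested classes below stand in for the two real InputJobInfo classes; they are not the actual HCatalog types):

```java
// Hypothetical sketch: two classes that share a simple name but live in
// different namespaces (here: different enclosing classes, standing in for
// different packages) are unrelated types, so casting between them fails.
public class CastMismatchDemo {
    // Stand-in for org.apache.hcatalog.mapreduce.InputJobInfo (old hierarchy)
    static class OldApi { static class InputJobInfo {} }
    // Stand-in for org.apache.hive.hcatalog.mapreduce.InputJobInfo (new hierarchy)
    static class NewApi { static class InputJobInfo {} }

    // Returns true only if the object really is an OldApi.InputJobInfo.
    static boolean isOldApiInfo(Object o) {
        return o instanceof OldApi.InputJobInfo;
    }

    public static void main(String[] args) {
        Object fromNewLoader = new NewApi.InputJobInfo();
        // A direct cast (OldApi.InputJobInfo) fromNewLoader would throw a
        // ClassCastException, exactly like the stack trace above.
        System.out.println(isOldApiInfo(fromNewLoader));
    }
}
```

Using the loader from the same hierarchy as the storage handler that created the table avoids the mismatch, which is what the proposed fix does.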





[jira] [Commented] (HIVE-5385) StringUtils is not in commons codec 1.3

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788824#comment-13788824
 ] 

Hudson commented on HIVE-5385:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2386 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2386/])
HIVE-5385 : StringUtils is not in commons codec 1.3 (Kousuke Saruta via Yin 
Huai) (yhuai: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529830)
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/shims/ivy.xml


> StringUtils is not in commons codec 1.3
> ---
>
> Key: HIVE-5385
> URL: https://issues.apache.org/jira/browse/HIVE-5385
> Project: Hive
>  Issue Type: Bug
>Reporter: Yin Huai
>Assignee: Kousuke Saruta
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5385.1.patch, HIVE-5385.2.patch
>
>
> In ThriftHttpServlet introduced by HIVE-4763, StringUtils is imported which 
> was introduced by commons codec 1.4. But, our 0.20 shims depends on commons 
> codec 1.3. Our eclipse classpath template is also using libs of 0.20 shims. 
> So, we will get two errors in eclipse. 
> Compiling hive will not have a problem because we are loading codec 1.4 for 
> project service (1.4 is also used when "-Dhadoop.version=0.20.2 
> -Dhadoop.mr.rev=20").





[jira] [Updated] (HIVE-5486) HiveServer2 should create base scratch directories at startup

2013-10-07 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5486:
--

Attachment: HIVE-5486.2.patch

Reformatted 

> HiveServer2 should create base scratch directories at startup
> -
>
> Key: HIVE-5486
> URL: https://issues.apache.org/jira/browse/HIVE-5486
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5486.2.patch
>
>
> With impersonation enabled, the same base directory is used by all 
> sessions/queries. For a new deployment, this directory gets created on first 
> invocation by the user running that session. This would cause directory 
> permission conflicts for other users.
> HiveServer2 should create the base scratch dirs if they don't exist.





[jira] [Updated] (HIVE-5486) HiveServer2 should create base scratch directories at startup

2013-10-07 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5486:
--

Attachment: (was: HIVE-5486.1.patch)

> HiveServer2 should create base scratch directories at startup
> -
>
> Key: HIVE-5486
> URL: https://issues.apache.org/jira/browse/HIVE-5486
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5486.2.patch
>
>
> With impersonation enabled, the same base directory is used by all 
> sessions/queries. For a new deployment, this directory gets created on first 
> invocation by the user running that session. This would cause directory 
> permission conflicts for other users.
> HiveServer2 should create the base scratch dirs if they don't exist.





Re: Review Request 14523: HIVE-5486 HiveServer2 should create base scratch directories at startup

2013-10-07 Thread Prasad Mujumdar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14523/
---

(Updated Oct. 8, 2013, 1:34 a.m.)


Review request for hive.


Changes
---

Reformatted patch


Bugs: HIVE-5486
https://issues.apache.org/jira/browse/HIVE-5486


Repository: hive-git


Description
---

With impersonation enabled, the same base directory is used by all 
sessions/queries. For a new deployment, this directory gets created on first 
invocation by the user running that session. This would cause directory 
permission conflicts for other users.
The patch creates the base scratch dirs at startup if they don't exist.
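A minimal sketch of the idea, using java.nio.file on a local path as a stand-in for the HDFS FileSystem API that HiveServer2 actually uses (the class and method names here are illustrative, not the patch's real code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ScratchDirInit {
    // Create the shared base scratch directory once, at service startup,
    // so the first user's session does not end up owning it with
    // restrictive permissions.
    static Path ensureBaseScratchDir(Path base) throws IOException {
        if (!Files.exists(base)) {
            Files.createDirectories(base);
            // On HDFS the real code would also set permissive mode bits
            // (e.g. 0777) so every impersonated user can create
            // per-session subdirectories underneath.
        }
        return base;
    }

    public static void main(String[] args) throws IOException {
        Path base = Paths.get(System.getProperty("java.io.tmpdir"),
                "hive-scratch-demo");
        System.out.println(Files.isDirectory(ensureBaseScratchDir(base)));
    }
}
```

The call is idempotent, so running it at every startup is safe even when the directory already exists.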


Diffs (updated)
-

  service/src/java/org/apache/hive/service/cli/CLIService.java 1a7f338 
  
service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
 ae7bb6b 
  
service/src/test/org/apache/hive/service/cli/TestEmbeddedThriftBinaryCLIService.java
 da325da 
  service/src/test/org/apache/hive/service/cli/TestScratchDir.java PRE-CREATION 

Diff: https://reviews.apache.org/r/14523/diff/


Testing
---

Added new test


Thanks,

Prasad Mujumdar



[jira] [Updated] (HIVE-5486) HiveServer2 should create base scratch directories at startup

2013-10-07 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5486:
--

Attachment: HIVE-5486.1.patch

> HiveServer2 should create base scratch directories at startup
> -
>
> Key: HIVE-5486
> URL: https://issues.apache.org/jira/browse/HIVE-5486
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5486.1.patch
>
>
> With impersonation enabled, the same base directory is used by all 
> sessions/queries. For a new deployment, this directory gets created on first 
> invocation by the user running that session. This would cause directory 
> permission conflicts for other users.
> HiveServer2 should create the base scratch dirs if they don't exist.





[jira] [Updated] (HIVE-5486) HiveServer2 should create base scratch directories at startup

2013-10-07 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5486:
--

Status: Patch Available  (was: Open)

Review request on https://reviews.apache.org/r/14523/

> HiveServer2 should create base scratch directories at startup
> -
>
> Key: HIVE-5486
> URL: https://issues.apache.org/jira/browse/HIVE-5486
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5486.1.patch
>
>
> With impersonation enabled, the same base directory is used by all 
> sessions/queries. For a new deployment, this directory gets created on first 
> invocation by the user running that session. This would cause directory 
> permission conflicts for other users.
> HiveServer2 should create the base scratch dirs if they don't exist.





Review Request 14523: HIVE-5486 HiveServer2 should create base scratch directories at startup

2013-10-07 Thread Prasad Mujumdar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14523/
---

Review request for hive.


Bugs: HIVE-5486
https://issues.apache.org/jira/browse/HIVE-5486


Repository: hive-git


Description
---

With impersonation enabled, the same base directory is used by all 
sessions/queries. For a new deployment, this directory gets created on first 
invocation by the user running that session. This would cause directory 
permission conflicts for other users.
The patch creates the base scratch dirs at startup if they don't exist.


Diffs
-

  service/src/java/org/apache/hive/service/cli/CLIService.java 1a7f338 
  
service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
 ae7bb6b 
  
service/src/test/org/apache/hive/service/cli/TestEmbeddedThriftBinaryCLIService.java
 da325da 
  service/src/test/org/apache/hive/service/cli/TestScratchDir.java PRE-CREATION 

Diff: https://reviews.apache.org/r/14523/diff/


Testing
---

Added new test


Thanks,

Prasad Mujumdar



[jira] [Created] (HIVE-5486) HiveServer2 should create base scratch directories at startup

2013-10-07 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-5486:
-

 Summary: HiveServer2 should create base scratch directories at 
startup
 Key: HIVE-5486
 URL: https://issues.apache.org/jira/browse/HIVE-5486
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0, 0.12.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar


With impersonation enabled, the same base directory is used by all 
sessions/queries. For a new deployment, this directory gets created on first 
invocation by the user running that session. This would cause directory 
permission conflicts for other users.
HiveServer2 should create the base scratch dirs if they don't exist.





[jira] [Updated] (HIVE-5485) SBAP errors on null partition being passed into partition level authorization

2013-10-07 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5485:
---

Description: 
SBAP causes an NPE when null is passed in as a partition for partition-level or 
column-level authorization.

In my opinion, this is not a SBAP bug, but incorrect usage of 
AuthorizationProviders - one should not be calling the column-level authorize 
(given that column-level is more basic than partition-level) function and pass 
in a null as the partition value. However, that happens in code introduced by 
HIVE-1887, and unless we rewrite that (and possibly a whole bunch more, which 
will need evaluation), we have to accommodate that null and appropriately 
attempt to fall back to table-level authorization in that case.

The offending code section is in Driver.java:685

{code}
 678 // if we reach here, it means it needs to do a table authorization
 679 // check, and the table authorization may already happened because 
of other
 680 // partitions
 681 if (tbl != null && !tableAuthChecked.contains(tbl.getTableName()) 
&&
 682 !(tableUsePartLevelAuth.get(tbl.getTableName()) == 
Boolean.TRUE)) {
 683   List<FieldSchema> cols = tab2Cols.get(tbl);
 684   if (cols != null && cols.size() > 0) {
 685 ss.getAuthorizer().authorize(tbl, null, cols,
 686 op.getInputRequiredPrivileges(), null);
 687   } else {
 688 ss.getAuthorizer().authorize(tbl, 
op.getInputRequiredPrivileges(),
 689 null);
 690   }
 691   tableAuthChecked.add(tbl.getTableName());
 692 }
{code}


  was:
SBAP causes an NPE when null is passed in as a partition for partition-level 
authorization.

In my opinion, this is not a SBAP bug, but incorrect usage of 
AuthorizationProviders - one should not be calling the column-level authorize 
(given that column-level is more basic than partition-level) function and pass 
in a null as the partition value. However, that happens in code introduced by 
HIVE-1887, and unless we rewrite that (and possibly a whole bunch more, which 
will need evaluation), we have to accommodate that null and appropriately 
attempt to fall back to table-level authorization in that case.

The offending code section is in Driver.java:685

{code}
 678 // if we reach here, it means it needs to do a table authorization
 679 // check, and the table authorization may already happened because 
of other
 680 // partitions
 681 if (tbl != null && !tableAuthChecked.contains(tbl.getTableName()) 
&&
 682 !(tableUsePartLevelAuth.get(tbl.getTableName()) == 
Boolean.TRUE)) {
 683   List<FieldSchema> cols = tab2Cols.get(tbl);
 684   if (cols != null && cols.size() > 0) {
 685 ss.getAuthorizer().authorize(tbl, null, cols,
 686 op.getInputRequiredPrivileges(), null);
 687   } else {
 688 ss.getAuthorizer().authorize(tbl, 
op.getInputRequiredPrivileges(),
 689 null);
 690   }
 691   tableAuthChecked.add(tbl.getTableName());
 692 }
{code}



> SBAP errors on null partition being passed into partition level authorization
> -
>
> Key: HIVE-5485
> URL: https://issues.apache.org/jira/browse/HIVE-5485
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5485.patch
>
>
> SBAP causes an NPE when null is passed in as a partition for partition-level 
> or column-level authorization.
> In my opinion, this is not a SBAP bug, but incorrect usage of 
> AuthorizationProviders - one should not be calling the column-level authorize 
> (given that column-level is more basic than partition-level) function and 
> pass in a null as the partition value. However, that happens in code 
> introduced by HIVE-1887, and unless we rewrite that (and possibly a whole 
> bunch more, which will need evaluation), we have to accommodate that null and 
> appropriately attempt to fall back to table-level authorization in that case.
> The offending code section is in Driver.java:685
> {code}
>  678 // if we reach here, it means it needs to do a table 
> authorization
>  679 // check, and the table authorization may already happened 
> because of other
>  680 // partitions
>  681 if (tbl != null && 
> !tableAuthChecked.contains(tbl.getTableName()) &&
>  682 !(tableUsePartLevelAuth.get(tbl.getTableName()) == 
> Boolean.TRUE)) {
>  683   List<FieldSchema> cols = tab2Cols.get(tbl);
>  684   if (cols != null && cols.size() > 0) {
>  685 ss.getAuthorizer().authorize(tbl, null, cols,
>  686 op.getInputRequiredPrivileges(), null);
>  687   } else {
>  688 ss.getAuthorizer().authorize(tbl, 
> op.getInputRequiredPrivileges(),
>  689 null);
>  690   }
>  691   tableAuthChecked.add(tbl.getTableName());
>  692 }
> {code}

[jira] [Updated] (HIVE-4856) Upgrade HCat to 2.0.5-alpha

2013-10-07 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4856:
---

Fix Version/s: 0.13.0

> Upgrade HCat to 2.0.5-alpha
> ---
>
> Key: HIVE-4856
> URL: https://issues.apache.org/jira/browse/HIVE-4856
> Project: Hive
>  Issue Type: Task
>  Components: HCatalog
>Reporter: Brock Noland
> Fix For: 0.13.0
>
>
> In HIVE-4756 we upgraded Hive to 2.0.5-alpha. I see that HCat specifies its 
> deps differently. We should probably keep them on the same version of Hadoop.





[jira] [Resolved] (HIVE-4856) Upgrade HCat to 2.0.5-alpha

2013-10-07 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan resolved HIVE-4856.


Resolution: Implemented

> Upgrade HCat to 2.0.5-alpha
> ---
>
> Key: HIVE-4856
> URL: https://issues.apache.org/jira/browse/HIVE-4856
> Project: Hive
>  Issue Type: Task
>  Components: HCatalog
>Reporter: Brock Noland
>
> In HIVE-4756 we upgraded Hive to 2.0.5-alpha. I see that HCat specifies its 
> deps differently. We should probably keep them on the same version of Hadoop.





[jira] [Commented] (HIVE-4856) Upgrade HCat to 2.0.5-alpha

2013-10-07 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788800#comment-13788800
 ] 

Sushanth Sowmyan commented on HIVE-4856:


No, this can be closed as being old. I see hcat's pom.xml referring to 
2.1.0-beta as well. This change was done as of HIVE-5112. Closing the bug.

> Upgrade HCat to 2.0.5-alpha
> ---
>
> Key: HIVE-4856
> URL: https://issues.apache.org/jira/browse/HIVE-4856
> Project: Hive
>  Issue Type: Task
>  Components: HCatalog
>Reporter: Brock Noland
>
> In HIVE-4756 we upgraded Hive to 2.0.5-alpha. I see that HCat specifies its 
> deps differently. We should probably keep them on the same version of Hadoop.





[jira] [Updated] (HIVE-5485) SBAP errors on null partition being passed into partition level authorization

2013-10-07 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5485:
---

Attachment: HIVE-5485.patch

Attaching patch to make SBAP more robust in these cases.

> SBAP errors on null partition being passed into partition level authorization
> -
>
> Key: HIVE-5485
> URL: https://issues.apache.org/jira/browse/HIVE-5485
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5485.patch
>
>
> SBAP causes an NPE when null is passed in as a partition for partition-level 
> authorization.
> In my opinion, this is not a SBAP bug, but incorrect usage of 
> AuthorizationProviders - one should not be calling the column-level authorize 
> (given that column-level is more basic than partition-level) function and 
> pass in a null as the partition value. However, that happens in code 
> introduced by HIVE-1887, and unless we rewrite that (and possibly a whole 
> bunch more, which will need evaluation), we have to accommodate that null and 
> appropriately attempt to fall back to table-level authorization in that case.
> The offending code section is in Driver.java:685
> {code}
>  678 // if we reach here, it means it needs to do a table 
> authorization
>  679 // check, and the table authorization may already happened 
> because of other
>  680 // partitions
>  681 if (tbl != null && 
> !tableAuthChecked.contains(tbl.getTableName()) &&
>  682 !(tableUsePartLevelAuth.get(tbl.getTableName()) == 
> Boolean.TRUE)) {
>  683   List<FieldSchema> cols = tab2Cols.get(tbl);
>  684   if (cols != null && cols.size() > 0) {
>  685 ss.getAuthorizer().authorize(tbl, null, cols,
>  686 op.getInputRequiredPrivileges(), null);
>  687   } else {
>  688 ss.getAuthorizer().authorize(tbl, 
> op.getInputRequiredPrivileges(),
>  689 null);
>  690   }
>  691   tableAuthChecked.add(tbl.getTableName());
>  692 }
> {code}





[jira] [Created] (HIVE-5485) SBAP errors on null partition being passed into partition level authorization

2013-10-07 Thread Sushanth Sowmyan (JIRA)
Sushanth Sowmyan created HIVE-5485:
--

 Summary: SBAP errors on null partition being passed into partition 
level authorization
 Key: HIVE-5485
 URL: https://issues.apache.org/jira/browse/HIVE-5485
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan


SBAP causes an NPE when null is passed in as a partition for partition-level 
authorization.

In my opinion, this is not a SBAP bug, but incorrect usage of 
AuthorizationProviders - one should not be calling the column-level authorize 
(given that column-level is more basic than partition-level) function and pass 
in a null as the partition value. However, that happens in code introduced by 
HIVE-1887, and unless we rewrite that (and possibly a whole bunch more, which 
will need evaluation), we have to accommodate that null and appropriately 
attempt to fall back to table-level authorization in that case.

The offending code section is in Driver.java:685

{code}
 678 // if we reach here, it means it needs to do a table authorization
 679 // check, and the table authorization may already happened because 
of other
 680 // partitions
 681 if (tbl != null && !tableAuthChecked.contains(tbl.getTableName()) 
&&
 682 !(tableUsePartLevelAuth.get(tbl.getTableName()) == 
Boolean.TRUE)) {
 683   List<FieldSchema> cols = tab2Cols.get(tbl);
 684   if (cols != null && cols.size() > 0) {
 685 ss.getAuthorizer().authorize(tbl, null, cols,
 686 op.getInputRequiredPrivileges(), null);
 687   } else {
 688 ss.getAuthorizer().authorize(tbl, 
op.getInputRequiredPrivileges(),
 689 null);
 690   }
 691   tableAuthChecked.add(tbl.getTableName());
 692 }
{code}
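The accommodation described above can be sketched as follows. This is a hypothetical, simplified authorizer, not the actual StorageBasedAuthorizationProvider API; the method names are illustrative only:

```java
public class NullPartitionFallbackSketch {
    // Simplified stand-ins for Hive's authorization entry points.
    static String authorizeTable(String table) {
        return "table:" + table;
    }

    static String authorizePartition(String table, String partition) {
        return "partition:" + table + "/" + partition;
    }

    // Callers such as the Driver.java snippet above may pass a null
    // partition; fall back to table-level authorization instead of
    // dereferencing the null and throwing an NPE.
    static String authorize(String table, String partition) {
        if (partition == null) {
            return authorizeTable(table);
        }
        return authorizePartition(table, partition);
    }

    public static void main(String[] args) {
        System.out.println(authorize("srcpart", null));
        System.out.println(authorize("srcpart", "ds=2008-04-08"));
    }
}
```

The null check keeps the column-level entry point usable even when invoked with a table-level scope, which is what the attached patch aims for.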






[jira] [Commented] (HIVE-5484) TestSchemaTool failures when Hive version has more than 3 revision numbers

2013-10-07 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788786#comment-13788786
 ] 

Ashutosh Chauhan commented on HIVE-5484:


+1

> TestSchemaTool failures when Hive version has more than 3 revision numbers
> --
>
> Key: HIVE-5484
> URL: https://issues.apache.org/jira/browse/HIVE-5484
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-5484.1.patch
>
>
> If Hive is created with a version string with more than 3 numbers, we end up 
> with a couple of test failures in TestSchemaTool, because the metastore is 
> expecting a version with the format of 
> majorVersion.minorVersion.changeVersion. 
>  <failure 
> type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
>  Unknown version specified for initialization: 0.12.0.2.0.6.0-61
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:113)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:269)
>   at 
> org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaInit(TestSchemaTool.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> </failure>
>   </testcase>
>   <testcase name="testSchemaUpgrade" time="2.164">
>  <failure 
> type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
>  Found unexpected schema version 0.12.0
>   at 
> org.apache.hive.beeline.HiveSchemaTool.verifySchemaVersion(HiveSchemaTool.java:192)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:242)
>   at 
> org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaUpgrade(TestSchemaTool.java:128)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5484) TestSchemaTool failures when Hive version has more than 3 revision numbers

2013-10-07 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-5484:
-

Attachment: HIVE-5484.1.patch

patch v1

> TestSchemaTool failures when Hive version has more than 3 revision numbers
> --
>
> Key: HIVE-5484
> URL: https://issues.apache.org/jira/browse/HIVE-5484
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
> Attachments: HIVE-5484.1.patch
>
>
> If Hive is created with a version string with more than 3 numbers, we end up 
> with a couple of test failures in TestSchemaTool, because the metastore is 
> expecting a version with the format of 
> majorVersion.minorVersion.changeVersion. 
> <error type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
>  Unknown version specified for initialization: 0.12.0.2.0.6.0-61
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:113)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:269)
>   at 
> org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaInit(TestSchemaTool.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> 
>   
> <testcase name="testSchemaUpgrade" time="2.164">
> <error type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
>  Found unexpected schema version 0.12.0
>   at 
> org.apache.hive.beeline.HiveSchemaTool.verifySchemaVersion(HiveSchemaTool.java:192)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:242)
>   at 
> org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaUpgrade(TestSchemaTool.java:128)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5484) TestSchemaTool failures when Hive version has more than 3 revision numbers

2013-10-07 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-5484:
-

Assignee: Jason Dere
  Status: Patch Available  (was: Open)

> TestSchemaTool failures when Hive version has more than 3 revision numbers
> --
>
> Key: HIVE-5484
> URL: https://issues.apache.org/jira/browse/HIVE-5484
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-5484.1.patch
>
>
> If Hive is created with a version string with more than 3 numbers, we end up 
> with a couple of test failures in TestSchemaTool, because the metastore is 
> expecting a version with the format of 
> majorVersion.minorVersion.changeVersion. 
> <error type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
>  Unknown version specified for initialization: 0.12.0.2.0.6.0-61
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:113)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:269)
>   at 
> org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaInit(TestSchemaTool.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> 
>   
> <testcase name="testSchemaUpgrade" time="2.164">
> <error type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
>  Found unexpected schema version 0.12.0
>   at 
> org.apache.hive.beeline.HiveSchemaTool.verifySchemaVersion(HiveSchemaTool.java:192)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:242)
>   at 
> org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaUpgrade(TestSchemaTool.java:128)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5476) Authorization-provider tests fail in sequential run

2013-10-07 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5476:
---

Affects Version/s: 0.12.0

> Authorization-provider tests fail in sequential run
> ---
>
> Key: HIVE-5476
> URL: https://issues.apache.org/jira/browse/HIVE-5476
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5476.patch
>
>
> As seen in 0.12 build with hadoop1 - 
> https://builds.apache.org/job/Hive-branch-0.12-hadoop1/lastCompletedBuild/testReport/
> Following tests fail - 
> >>> org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >>> org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >>> org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5476) Authorization-provider tests fail in sequential run

2013-10-07 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5476:
---

Attachment: HIVE-5476.patch

Patch attached.

> Authorization-provider tests fail in sequential run
> ---
>
> Key: HIVE-5476
> URL: https://issues.apache.org/jira/browse/HIVE-5476
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5476.patch
>
>
> As seen in 0.12 build with hadoop1 - 
> https://builds.apache.org/job/Hive-branch-0.12-hadoop1/lastCompletedBuild/testReport/
> Following tests fail - 
> >>> org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >>> org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >>> org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5476) Authorization-provider tests fail in sequential run

2013-10-07 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5476:
---

Status: Patch Available  (was: Open)

> Authorization-provider tests fail in sequential run
> ---
>
> Key: HIVE-5476
> URL: https://issues.apache.org/jira/browse/HIVE-5476
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5476.patch
>
>
> As seen in 0.12 build with hadoop1 - 
> https://builds.apache.org/job/Hive-branch-0.12-hadoop1/lastCompletedBuild/testReport/
> Following tests fail - 
> >>> org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >>> org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >>> org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges (12 sec, 1)
> >



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4945) Make RLIKE/REGEXP run end-to-end by updating VectorizationContext

2013-10-07 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788755#comment-13788755
 ] 

Ashutosh Chauhan commented on HIVE-4945:


+1

> Make RLIKE/REGEXP run end-to-end by updating VectorizationContext
> -
>
> Key: HIVE-4945
> URL: https://issues.apache.org/jira/browse/HIVE-4945
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: vectorization-branch
>Reporter: Eric Hanson
>Assignee: Teddy Choi
> Attachments: HIVE-4945.1.patch.txt, HIVE-4945.2.patch.txt, 
> HIVE-4945.3.patch.txt, HIVE-4945.4.patch.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5365) Boolean constants in the query are not handled correctly.

2013-10-07 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788750#comment-13788750
 ] 

Ashutosh Chauhan commented on HIVE-5365:


+1

> Boolean constants in the query are not handled correctly.
> -
>
> Key: HIVE-5365
> URL: https://issues.apache.org/jira/browse/HIVE-5365
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5365.1.patch, HIVE-5365.2.patch, HIVE-5365.3.patch
>
>
> Boolean constants in the query are not handled correctly.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5407) show create table creating unusable DDL when some reserved keywords exist

2013-10-07 Thread Zhichun Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhichun Wu updated HIVE-5407:
-

Status: Patch Available  (was: Open)

Re-submitting the patch for the preCommit test.

> show create table creating unusable DDL when some reserved keywords  exist
> --
>
> Key: HIVE-5407
> URL: https://issues.apache.org/jira/browse/HIVE-5407
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
> Environment: hive 0.11
>Reporter: Zhichun Wu
>Priority: Minor
> Attachments: D13191.1.patch, HIVE-5407.1.patch
>
>
> HIVE-701 already makes most reserved keywords available for 
> table/column/partition names and 'show create table' produces usable DDLs.
> However, I think it's better if we quote table/column/partition names in the 
> output of 'show create table', which is how MySQL works and seems more robust.
> For example, using select as a column name will produce unusable DDL:
> {code}
> create table table_select(`select` string);
> show create table table_select;
> {code}
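A minimal sketch of the quoting being proposed, in the spirit of the fix (hypothetical helper, not Hive's actual implementation): wrap every identifier in backticks when emitting DDL, doubling any embedded backtick so the name round-trips:

```java
// Hypothetical helper, not Hive's actual code: quote an identifier
// for generated DDL so reserved keywords such as "select" stay usable.
public class QuoteIdent {
    static String quote(String ident) {
        // Double any embedded backticks, then wrap the whole name.
        return "`" + ident.replace("`", "``") + "`";
    }

    public static void main(String[] args) {
        // A column named "select" now survives SHOW CREATE TABLE round-tripping:
        System.out.println("create table table_select(" + quote("select") + " string);");
    }
}
```

With every name quoted this way, the regenerated DDL parses again even when a table or column name is a reserved keyword.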



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5407) show create table creating unusable DDL when some reserved keywords exist

2013-10-07 Thread Zhichun Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhichun Wu updated HIVE-5407:
-

Attachment: HIVE-5407.1.patch

> show create table creating unusable DDL when some reserved keywords  exist
> --
>
> Key: HIVE-5407
> URL: https://issues.apache.org/jira/browse/HIVE-5407
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
> Environment: hive 0.11
>Reporter: Zhichun Wu
>Priority: Minor
> Attachments: D13191.1.patch, HIVE-5407.1.patch
>
>
> HIVE-701 already makes most reserved keywords available for 
> table/column/partition names and 'show create table' produces usable DDLs.
> However, I think it's better if we quote table/column/partition names in the 
> output of 'show create table', which is how MySQL works and seems more robust.
> For example, using select as a column name will produce unusable DDL:
> {code}
> create table table_select(`select` string);
> show create table table_select;
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5270) Enable hash joins using tez

2013-10-07 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5270:
-

Attachment: HIVE-5270.2.patch

> Enable hash joins using tez
> ---
>
> Key: HIVE-5270
> URL: https://issues.apache.org/jira/browse/HIVE-5270
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: BroadCastJoinsHiveOnTez.pdf, HIVE-5270.1.patch, 
> HIVE-5270.2.patch
>
>
> Since hash join involves replicating a hash table to all the map tasks, an 
> equivalent operation needs to be performed in tez. In the tez world, such an 
> operation is done via a broadcast edge (TEZ-410). We need to rework the 
> planning and execution phases within hive for this.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5134) add tests to partition filter JDO pushdown for like and make sure it works, or remove it

2013-10-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5134:
---

Issue Type: Bug  (was: Improvement)

> add tests to partition filter JDO pushdown for like and make sure it works, 
> or remove it
> 
>
> Key: HIVE-5134
> URL: https://issues.apache.org/jira/browse/HIVE-5134
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-5134-does-not-work.patch
>
>
> There's a mailing list thread. Partition filtering w/JDO pushdown using LIKE 
> is not used by Hive due to client check (in PartitionPruner); after enabling 
> it seems to be broken. We need to fix and enable it, or remove it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: [VOTE] Apache Hive 0.12.0 Release Candidate 0

2013-10-07 Thread Thejas Nair
Yes, that is the correct tag. Thanks for pointing it out.
I also updated the tag, as it was a little behind what is in the RC
(I found some issues with maven-publish).

I have also updated the release vote email template on the Hive
HowToRelease wiki page to include a note about the tag.

Thanks,
Thejas



On Mon, Oct 7, 2013 at 4:26 PM, Brock Noland  wrote:
> Hi Thejas,
>
> Thank you very much for the hard work!  I believe the vote email should
> contain a link to the tag we are voting on. I assume the tag is:
> release-0.12.0-rc0 (
> http://svn.apache.org/viewvc/hive/tags/release-0.12.0-rc0/). Is that
> correct?
>
> Brock
>
>
> On Mon, Oct 7, 2013 at 6:02 PM, Thejas Nair  wrote:
>
>> Apache Hive 0.12.0 Release Candidate 0 is available here:
>> http://people.apache.org/~thejas/hive-0.12.0-rc0/
>>
>> Maven artifacts are available here:
>> https://repository.apache.org/content/repositories/orgapachehive-138/
>>
>> This release has 406 issues fixed.
>> This includes several new features such as data types date and
>> varchar, optimizer improvements, ORC format improvements and many bug
>> fixes. Hcatalog packages have now moved to org.apache.hive.hcatalog
>> (from org.apache.hcatalog), and the maven packages are published under
>> org.apache.hive.hcatalog.
>>
>> Voting will conclude in 72 hours.
>>
>> Hive PMC Members: Please test and vote.
>>
>> Thanks,
>> Thejas
>>
>> --
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity to
>> which it is addressed and may contain information that is confidential,
>> privileged and exempt from disclosure under applicable law. If the reader
>> of this message is not the intended recipient, you are hereby notified that
>> any printing, copying, dissemination, distribution, disclosure or
>> forwarding of this communication is strictly prohibited. If you have
>> received this communication in error, please contact the sender immediately
>> and delete it from your system. Thank You.
>>
>
>
>
> --
> Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org



[jira] [Commented] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788710#comment-13788710
 ] 

Hive QA commented on HIVE-5452:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12607241/HIVE-5452.patch

{color:green}SUCCESS:{color} +1 4060 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1066/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1066/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5452.patch
>
>
> HCatalog e2e test Pig_HBase_1 tries to read data from a table it created 
> using the org.apache.hcatalog.hbase.HBaseHCatStorageHandler using the hcat 
> loader org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatI
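The root cause above is two classes with the same simple name living in different packages (org.apache.hcatalog vs. org.apache.hive.hcatalog after the package move); the JVM treats them as unrelated types. A minimal illustration with assumed names, not Hive code:

```java
// Two structurally identical classes are still distinct JVM types,
// so a cross-cast fails at runtime -- the same failure mode as
// org.apache.hive.hcatalog.mapreduce.InputJobInfo vs.
// org.apache.hcatalog.mapreduce.InputJobInfo. Names are illustrative.
public class CastSketch {
    static class OldInputJobInfo { String tableName; }
    static class NewInputJobInfo { String tableName; }

    public static void main(String[] args) {
        Object info = new NewInputJobInfo();
        try {
            // Compiles (downcast from Object), but throws at runtime.
            OldInputJobInfo legacy = (OldInputJobInfo) info;
            System.out.println(legacy);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the HBase handler");
        }
    }
}
```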

Re: [VOTE] Apache Hive 0.12.0 Release Candidate 0

2013-10-07 Thread Brock Noland
Hi Thejas,

Thank you very much for the hard work!  I believe the vote email should
contain a link to the tag we are voting on. I assume the tag is:
release-0.12.0-rc0 (
http://svn.apache.org/viewvc/hive/tags/release-0.12.0-rc0/). Is that
correct?

Brock


On Mon, Oct 7, 2013 at 6:02 PM, Thejas Nair  wrote:

> Apache Hive 0.12.0 Release Candidate 0 is available here:
> http://people.apache.org/~thejas/hive-0.12.0-rc0/
>
> Maven artifacts are available here:
> https://repository.apache.org/content/repositories/orgapachehive-138/
>
> This release has 406 issues fixed.
> This includes several new features such as data types date and
> varchar, optimizer improvements, ORC format improvements and many bug
> fixes. Hcatalog packages have now moved to org.apache.hive.hcatalog
> (from org.apache.hcatalog), and the maven packages are published under
> org.apache.hive.hcatalog.
>
> Voting will conclude in 72 hours.
>
> Hive PMC Members: Please test and vote.
>
> Thanks,
> Thejas
>
>



-- 
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Commented] (HIVE-5484) TestSchemaTool failures when Hive version has more than 3 revision numbers

2013-10-07 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788688#comment-13788688
 ] 

Jason Dere commented on HIVE-5484:
--

If we change the HiveVersionAnnotation to just show the major/minor/change 
numbers, it looks like things work ok.
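One way to make the check robust (a sketch under the assumption that truncating the build version is acceptable; this is not the code in the attached patch): extract only the leading major.minor.change triple before comparing schema versions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch only -- class and method names are illustrative, not taken
// from HIVE-5484.1.patch. Truncates a build version string such as
// "0.12.0.2.0.6.0-61" to the "major.minor.change" triple that the
// metastore schema check expects.
public class SchemaVersionSketch {
    private static final Pattern TRIPLE = Pattern.compile("^(\\d+\\.\\d+\\.\\d+)");

    static String toSchemaVersion(String hiveVersion) {
        Matcher m = TRIPLE.matcher(hiveVersion);
        if (!m.find()) {
            throw new IllegalArgumentException("Unknown version: " + hiveVersion);
        }
        return m.group(1);
    }

    public static void main(String[] args) {
        System.out.println(toSchemaVersion("0.12.0.2.0.6.0-61")); // prints 0.12.0
    }
}
```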

> TestSchemaTool failures when Hive version has more than 3 revision numbers
> --
>
> Key: HIVE-5484
> URL: https://issues.apache.org/jira/browse/HIVE-5484
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>
> If Hive is created with a version string with more than 3 numbers, we end up 
> with a couple of test failures in TestSchemaTool, because the metastore is 
> expecting a version with the format of 
> majorVersion.minorVersion.changeVersion. 
> <error type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
>  Unknown version specified for initialization: 0.12.0.2.0.6.0-61
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:113)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:269)
>   at 
> org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaInit(TestSchemaTool.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> 
>   
> <testcase name="testSchemaUpgrade" time="2.164">
> <error type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
>  Found unexpected schema version 0.12.0
>   at 
> org.apache.hive.beeline.HiveSchemaTool.verifySchemaVersion(HiveSchemaTool.java:192)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:242)
>   at 
> org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaUpgrade(TestSchemaTool.java:128)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
>   at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5484) TestSchemaTool failures when Hive version has more than 3 revision numbers

2013-10-07 Thread Jason Dere (JIRA)
Jason Dere created HIVE-5484:


 Summary: TestSchemaTool failures when Hive version has more than 3 
revision numbers
 Key: HIVE-5484
 URL: https://issues.apache.org/jira/browse/HIVE-5484
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere


If Hive is created with a version string with more than 3 numbers, we end up 
with a couple of test failures in TestSchemaTool, because the metastore is 
expecting a version with the format of majorVersion.minorVersion.changeVersion. 


<error type="org.apache.hadoop.hive.metastore.HiveMetaException">org.apache.hadoop.hive.metastore.HiveMetaException:
 Unknown version specified for initialization: 0.12.0.2.0.6.0-61
at 
org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:113)
at 
org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:269)
at 
org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaInit(TestSchemaTool.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)

  
  
org.apache.hadoop.hive.metastore.HiveMetaException:
 Found unexpected schema version 0.12.0
at 
org.apache.hive.beeline.HiveSchemaTool.verifySchemaVersion(HiveSchemaTool.java:192)
at 
org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:242)
at 
org.apache.hive.beeline.src.test.TestSchemaTool.testSchemaUpgrade(TestSchemaTool.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
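
The failure comes down to version normalization: the metastore only needs the leading majorVersion.minorVersion.changeVersion triple, and extra build/revision components trip it up. A minimal, hypothetical sketch (not Hive's actual MetaStoreSchemaInfo code) of extracting that triple:

```java
// Hypothetical sketch, not Hive's actual MetaStoreSchemaInfo code.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SchemaVersionSketch {
    // Keep only the leading major.minor.change triple; ignore any extra
    // build/revision components such as ".2.0.6.0-61".
    private static final Pattern VERSION = Pattern.compile("^(\\d+)\\.(\\d+)\\.(\\d+)");

    /** Returns "major.minor.change", or null when no such prefix exists. */
    public static String equivalentVersion(String hiveVersion) {
        Matcher m = VERSION.matcher(hiveVersion);
        return m.find() ? m.group(1) + "." + m.group(2) + "." + m.group(3) : null;
    }

    public static void main(String[] args) {
        System.out.println(equivalentVersion("0.12.0.2.0.6.0-61")); // prints 0.12.0
    }
}
```

Under this scheme the build version "0.12.0.2.0.6.0-61" would normalize to "0.12.0", matching the schema version the tool later finds.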





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5479) SBAP restricts hcat -e 'show databases'

2013-10-07 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788677#comment-13788677
 ] 

Sushanth Sowmyan commented on HIVE-5479:


Available workaround: if the problem is observed on the hcat command line, then 
instead of running hcat -e 'show databases;', run hive -e 'show databases;'.

If using WebHCat in secure mode, there is no way to get around it using hcat; 
in that case, the suggested workaround is to disable client-side authorization 
(or SBAP on the client side) to get around this problem. Metastore-side 
authorization can still be used.


> SBAP restricts hcat -e 'show databases'
> ---
>
> Key: HIVE-5479
> URL: https://issues.apache.org/jira/browse/HIVE-5479
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, HCatalog
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5479.patch
>
>
> During testing for 0.12, it was found that if someone tries to use the SBAP 
> as a client-side authorization provider, and runs hcat -e "show databases;", 
> SBAP denies permission to the user.
> Looking at SBAP code, why it does so is self-evident from this section:
> {code}
>   @Override
>   public void authorize(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)
>       throws HiveException, AuthorizationException {
>     // Currently not used in hive code-base, but intended to authorize actions
>     // that are directly user-level. As there's no storage based aspect to this,
>     // we can follow one of two routes:
>     // a) We can allow by default - that way, this call stays out of the way
>     // b) We can deny by default - that way, no privileges are authorized that
>     // is not understood and explicitly allowed.
>     // Both approaches have merit, but given that things like grants and revokes
>     // that are user-level do not make sense from the context of storage-permission
>     // based auth, denying seems to be more canonical here.
>     throw new AuthorizationException(StorageBasedAuthorizationProvider.class.getName() +
>         " does not allow user-level authorization");
>   }
> {code}
> Thus, this deny-by-default behaviour affects the "show databases" call from 
> hcat cli, which uses user-level privileges to determine if a user can perform 
> that.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[VOTE] Apache Hive 0.12.0 Release Candidate 0

2013-10-07 Thread Thejas Nair
Apache Hive 0.12.0 Release Candidate 0 is available here:
http://people.apache.org/~thejas/hive-0.12.0-rc0/

Maven artifacts are available here:
https://repository.apache.org/content/repositories/orgapachehive-138/

This release has 406 issues fixed.
This includes several new features such as data types date and
varchar, optimizer improvements, ORC format improvements and many bug
fixes. Hcatalog packages have now moved to org.apache.hive.hcatalog
(from org.apache.hcatalog), and the maven packages are published under
org.apache.hive.hcatalog.

Voting will conclude in 72 hours.

Hive PMC Members: Please test and vote.

Thanks,
Thejas



[jira] [Created] (HIVE-5483) use metastore statistics to optimize max/min/etc. queries

2013-10-07 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-5483:
--

 Summary: use metastore statistics to optimize max/min/etc. queries
 Key: HIVE-5483
 URL: https://issues.apache.org/jira/browse/HIVE-5483
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin


We have discussed this a little bit.
Hive can answer queries such as select max(c1) from t purely from the metastore 
using partition statistics, provided that we know the statistics are up to date.
All data changes (e.g. adding new partitions) currently go through the metastore, 
so we can track up-to-date-ness. If the statistics are not up to date, the queries 
will have to read data (at least for the outdated partitions) until someone runs 
analyze table. We can also analyze new partitions right after they are added, if 
that is configured/specified in the command.
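
The idea can be sketched as follows; the class and field names are illustrative stand-ins, not Hive's actual metastore statistics API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.OptionalLong;

public class StatsMaxSketch {
    /** Hypothetical per-partition column statistics, not Hive's metastore API. */
    public static class PartitionStats {
        final long maxValue;
        final boolean upToDate;
        public PartitionStats(long maxValue, boolean upToDate) {
            this.maxValue = maxValue;
            this.upToDate = upToDate;
        }
    }

    /**
     * Answers max(c1) purely from partition statistics, but only when every
     * partition's stats are up to date; otherwise returns empty, meaning the
     * query must fall back to reading data for (at least) the stale partitions.
     */
    public static OptionalLong maxFromStats(List<PartitionStats> parts) {
        long max = Long.MIN_VALUE;
        for (PartitionStats p : parts) {
            if (!p.upToDate) {
                return OptionalLong.empty(); // stale stats: answer cannot be trusted
            }
            max = Math.max(max, p.maxValue);
        }
        return parts.isEmpty() ? OptionalLong.empty() : OptionalLong.of(max);
    }

    public static void main(String[] args) {
        List<PartitionStats> fresh = Arrays.asList(
            new PartitionStats(7, true), new PartitionStats(42, true));
        System.out.println(maxFromStats(fresh).getAsLong()); // prints 42
    }
}
```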



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5483) use metastore statistics to optimize max/min/etc. queries

2013-10-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788652#comment-13788652
 ] 

Sergey Shelukhin commented on HIVE-5483:


[~ashutoshc] [~prasanth_j] [~acmurthy] fyi

> use metastore statistics to optimize max/min/etc. queries
> -
>
> Key: HIVE-5483
> URL: https://issues.apache.org/jira/browse/HIVE-5483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>
> We have discussed this a little bit.
> Hive can answer queries such as select max(c1) from t purely from the metastore 
> using partition statistics, provided that we know the statistics are up to 
> date.
> All data changes (e.g. adding new partitions) currently go through the metastore, 
> so we can track up-to-date-ness. If the statistics are not up to date, the 
> queries will have to read data (at least for the outdated partitions) until 
> someone runs analyze table. We can also analyze new partitions right after they 
> are added, if that is configured/specified in the command.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5482) JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be consistent with other modules

2013-10-07 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-5482:
--

 Summary: JDBC should depend on httpclient.version and 
httpcore.version 4.1.3 to be consistent with other modules
 Key: HIVE-5482
 URL: https://issues.apache.org/jira/browse/HIVE-5482
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0


Currently depends on 4.2.4 and 4.2.5, which conflicts with thrift-0.9, which 
depends on 4.1.3.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5481) WebHCat e2e test: TestStreaming -ve tests should also check for job completion success

2013-10-07 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5481:
---

Description: Since TempletonController will anyway succeed for the -ve 
tests as well. However, the exit value should be non-zero.  (was: Since 
TempletonController will anyway succeed in the -ve tests as well. However, the 
exit value should be non-zero.)

> WebHCat e2e test: TestStreaming -ve tests should also check for job 
> completion success
> --
>
> Key: HIVE-5481
> URL: https://issues.apache.org/jira/browse/HIVE-5481
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
>
> Since TempletonController will anyway succeed for the -ve tests as well. 
> However, the exit value should be non-zero.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5481) WebHCat e2e test: TestStreaming -ve tests should also check for job completion success

2013-10-07 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5481:
---

Fix Version/s: 0.13.0

> WebHCat e2e test: TestStreaming -ve tests should also check for job 
> completion success
> --
>
> Key: HIVE-5481
> URL: https://issues.apache.org/jira/browse/HIVE-5481
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
>
> Since TempletonController will anyway succeed in the -ve tests as well. 
> However, the exit value should be non-zero.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5481) WebHCat e2e test: TestStreaming -ve tests should also check for job completion success

2013-10-07 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-5481:
--

 Summary: WebHCat e2e test: TestStreaming -ve tests should also 
check for job completion success
 Key: HIVE-5481
 URL: https://issues.apache.org/jira/browse/HIVE-5481
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta


Since TempletonController will anyway succeed for the -ve tests as well, the 
tests should also check that the exit value is non-zero.
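
The intended check can be sketched as a predicate; the names here are illustrative, not the e2e harness's actual API:

```java
public class NegativeTestCheckSketch {
    /**
     * A negative (-ve) test passes only when the controller job itself ran to
     * completion AND the wrapped command reported failure via a non-zero exit
     * value; checking job completion alone is not enough.
     */
    public static boolean negativeTestPassed(boolean controllerCompleted, int exitValue) {
        return controllerCompleted && exitValue != 0;
    }

    public static void main(String[] args) {
        System.out.println(negativeTestPassed(true, 1));  // prints true
        System.out.println(negativeTestPassed(true, 0));  // prints false
    }
}
```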



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5473) sqrt of -ve value returns null instead of throwing an error

2013-10-07 Thread N Campbell (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788638#comment-13788638
 ] 

N Campbell commented on HIVE-5473:
--

Other cases are:

select 0 / 0 from tversion
select sqrt ( -4 ) from tversion
select power( 0, -1) from tversion

Note that per ISO SQL these should raise exceptions. Instead, the first case 
returns a garbage value and the others return infinity; so even if this is 
design intent in Hive, the docs make no statement about it.
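
These results follow from IEEE 754 double arithmetic, which Hive's math UDFs inherit from Java (with the NaN from sqrt then surfacing as NULL, per this report); a small illustration:

```java
public class FloatSemanticsSketch {
    public static void main(String[] args) {
        // IEEE 754 double semantics, inherited from Java:
        System.out.println(Math.sqrt(-4.0));     // NaN
        System.out.println(0.0 / 0.0);           // NaN
        System.out.println(Math.pow(0.0, -1.0)); // Infinity
    }
}
```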

> sqrt of -ve value returns null instead of throwing an error
> ---
>
> Key: HIVE-5473
> URL: https://issues.apache.org/jira/browse/HIVE-5473
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: N Campbell
>Priority: Minor
>
> select sqrt( -4 ) from t
> will return a null instead of throwing an  exception. 
> no discussion on web page that this would be by design to not throw an error.
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4898) make vectorized math functions work end-to-end (update VectorizationContext.java)

2013-10-07 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-4898:
---

Status: Patch Available  (was: Open)

> make vectorized math functions work end-to-end (update 
> VectorizationContext.java)
> -
>
> Key: HIVE-4898
> URL: https://issues.apache.org/jira/browse/HIVE-4898
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: vectorization-branch
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Attachments: HIVE-4898.3.patch, HIVE-4898.3.patch
>
>
> The vectorized math function VectorExpression classes were added in 
> HIVE-4822. This JIRA is to allow those to actually be used in a SQL query 
> end-to-end. This requires updating VectorizationContext to use the new 
> classes in vectorized expression creation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5480) WebHCat e2e tests for doAs feature are failing

2013-10-07 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5480:
-

Status: Patch Available  (was: Open)

> WebHCat e2e tests for doAs feature are failing
> --
>
> Key: HIVE-5480
> URL: https://issues.apache.org/jira/browse/HIVE-5480
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5480.patch
>
>
> WebHCat testsuite has two failing tests:
> 1. doAsTests_6 - The test assumed that the metadata can be read even if the 
> data cannot be. As part of the setup we are using the 
> StorageBasedAuthorizationProvider, which will not allow this operation to 
> succeed. Updated the test to expect the failure and verify the error 
> message.
> 2. doAsTests_7 - Updated the expected error message to the current one, 
> which looks correct.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5480) WebHCat e2e tests for doAs feature are failing

2013-10-07 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5480:
-

Attachment: HIVE-5480.patch

Attaching the patch.

> WebHCat e2e tests for doAs feature are failing
> --
>
> Key: HIVE-5480
> URL: https://issues.apache.org/jira/browse/HIVE-5480
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5480.patch
>
>
> WebHCat testsuite has two failing tests:
> 1. doAsTests_6 - The test assumed that the metadata can be read even if the 
> data cannot be. As part of the setup we are using the 
> StorageBasedAuthorizationProvider, which will not allow this operation to 
> succeed. Updated the test to expect the failure and verify the error 
> message.
> 2. doAsTests_7 - Updated the expected error message to the current one, 
> which looks correct.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5480) WebHCat e2e tests for doAs feature are failing

2013-10-07 Thread Deepesh Khandelwal (JIRA)
Deepesh Khandelwal created HIVE-5480:


 Summary: WebHCat e2e tests for doAs feature are failing
 Key: HIVE-5480
 URL: https://issues.apache.org/jira/browse/HIVE-5480
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.12.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal


WebHCat testsuite has two failing tests:
1. doAsTests_6 - The test assumed that the metadata can be read even if the data 
cannot be. As part of the setup we are using the StorageBasedAuthorizationProvider, 
which will not allow this operation to succeed. Updated the test to expect the 
failure and verify the error message.
2. doAsTests_7 - Updated the expected error message to the current one, which 
looks correct.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5479) SBAP restricts hcat -e 'show databases'

2013-10-07 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5479:
---

Attachment: HIVE-5479.patch

Attaching a patch to make SBAP mimic the old HdfsAuthorizationProvider for 
user-level authorization.

> SBAP restricts hcat -e 'show databases'
> ---
>
> Key: HIVE-5479
> URL: https://issues.apache.org/jira/browse/HIVE-5479
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, HCatalog
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5479.patch
>
>
> During testing for 0.12, it was found that if someone tries to use the SBAP 
> as a client-side authorization provider, and runs hcat -e "show databases;", 
> SBAP denies permission to the user.
> Looking at SBAP code, why it does so is self-evident from this section:
> {code}
>   @Override
>   public void authorize(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)
>       throws HiveException, AuthorizationException {
>     // Currently not used in hive code-base, but intended to authorize actions
>     // that are directly user-level. As there's no storage based aspect to this,
>     // we can follow one of two routes:
>     // a) We can allow by default - that way, this call stays out of the way
>     // b) We can deny by default - that way, no privileges are authorized that
>     // is not understood and explicitly allowed.
>     // Both approaches have merit, but given that things like grants and revokes
>     // that are user-level do not make sense from the context of storage-permission
>     // based auth, denying seems to be more canonical here.
>     throw new AuthorizationException(StorageBasedAuthorizationProvider.class.getName() +
>         " does not allow user-level authorization");
>   }
> {code}
> Thus, this deny-by-default behaviour affects the "show databases" call from 
> hcat cli, which uses user-level privileges to determine if a user can perform 
> that.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5479) SBAP restricts hcat -e 'show databases'

2013-10-07 Thread Sushanth Sowmyan (JIRA)
Sushanth Sowmyan created HIVE-5479:
--

 Summary: SBAP restricts hcat -e 'show databases'
 Key: HIVE-5479
 URL: https://issues.apache.org/jira/browse/HIVE-5479
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan


During testing for 0.12, it was found that if someone tries to use the SBAP as 
a client-side authorization provider, and runs hcat -e "show databases;", SBAP 
denies permission to the user.

Looking at SBAP code, why it does so is self-evident from this section:

{code}
  @Override
  public void authorize(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)
      throws HiveException, AuthorizationException {
    // Currently not used in hive code-base, but intended to authorize actions
    // that are directly user-level. As there's no storage based aspect to this,
    // we can follow one of two routes:
    // a) We can allow by default - that way, this call stays out of the way
    // b) We can deny by default - that way, no privileges are authorized that
    // is not understood and explicitly allowed.
    // Both approaches have merit, but given that things like grants and revokes
    // that are user-level do not make sense from the context of storage-permission
    // based auth, denying seems to be more canonical here.

    throw new AuthorizationException(StorageBasedAuthorizationProvider.class.getName() +
        " does not allow user-level authorization");
  }
{code}

Thus, this deny-by-default behaviour affects the "show databases" call from the 
hcat cli, which uses user-level privileges to determine whether a user can 
perform that operation.
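
The fix direction described in the comment above (making SBAP mimic the old HdfsAuthorizationProvider for user-level authorization) can be sketched as an allow-by-default authorize(); everything here beyond that idea is an assumption, with stand-in types so the sketch is self-contained:

```java
// Hypothetical sketch of the fix direction; all names except the shape of
// authorize() are assumptions, with stand-in types for a runnable example.
public class PermissiveUserLevelAuthorizer {
    /** Stand-in for Hive's Privilege type. */
    public static class Privilege { }

    /**
     * User-level authorization has no storage-based aspect to check, so allow
     * by default (as the old HdfsAuthorizationProvider effectively did) rather
     * than throwing; this keeps calls like "show databases" working.
     */
    public void authorize(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv) {
        // Intentionally a no-op: nothing storage-based to verify here.
    }

    public static void main(String[] args) {
        new PermissiveUserLevelAuthorizer().authorize(null, null);
        System.out.println("user-level authorize: allowed");
    }
}
```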



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-5477) maven-publish fails because it can't find hive-metastore-0.12.0.pom

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair resolved HIVE-5477.
-

Resolution: Fixed

Patch committed to trunk and 0.12 branch.


> maven-publish fails because it can't find hive-metastore-0.12.0.pom
> ---
>
> Key: HIVE-5477
> URL: https://issues.apache.org/jira/browse/HIVE-5477
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: HIVE-5477.1.patch
>
>
> The maven-sign target is looking for 
> build/maven/jars/hive-metastore-0.12.0.pom (note the "jars" dir).
> The correct location is build/maven/poms/hive-metastore-0.12.0.pom
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5477) maven-publish fails because it can't find hive-metastore-0.12.0.pom

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5477:


Fix Version/s: 0.12.0

> maven-publish fails because it can't find hive-metastore-0.12.0.pom
> ---
>
> Key: HIVE-5477
> URL: https://issues.apache.org/jira/browse/HIVE-5477
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: HIVE-5477.1.patch
>
>
> The maven-sign target is looking for 
> build/maven/jars/hive-metastore-0.12.0.pom (note the "jars" dir).
> The correct location is build/maven/poms/hive-metastore-0.12.0.pom
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5455) Add build/ql/gen/vector to source folder in eclipse template

2013-10-07 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5455:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Yin!

> Add build/ql/gen/vector to source folder in eclipse template
> 
>
> Key: HIVE-5455
> URL: https://issues.apache.org/jira/browse/HIVE-5455
> Project: Hive
>  Issue Type: Bug
>Reporter: Yin Huai
>Assignee: Yin Huai
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5455.1.patch
>
>
> https://issues.apache.org/jira/browse/HIVE-5385?focusedCommentId=13786412&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13786412
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5478) WebHCat e2e testsuite for hcat authorization tests needs some fixes

2013-10-07 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5478:
-

Attachment: HIVE-5478.patch

Attaching the patch.

> WebHCat e2e testsuite for hcat authorization tests needs some fixes
> ---
>
> Key: HIVE-5478
> URL: https://issues.apache.org/jira/browse/HIVE-5478
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5478.patch
>
>
> Here are the issues:
> 1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be the 
> testdist root, otherwise the ant command fails to find 
> resource/default.res.
> 2. A few tests, DB_OPS_5 and TABLE_OPS_2, were relying on default permissions 
> on the hive warehouse directory, which can vary by environment; improved 
> the tests to check whatever is set.
> 3. The DB_OPS_18 error message is outdated; we now get a more specific 
> message, so updated the test to verify the new one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5478) WebHCat e2e testsuite for hcat authorization tests needs some fixes

2013-10-07 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5478:
-

Status: Patch Available  (was: Open)

> WebHCat e2e testsuite for hcat authorization tests needs some fixes
> ---
>
> Key: HIVE-5478
> URL: https://issues.apache.org/jira/browse/HIVE-5478
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5478.patch
>
>
> Here are the issues:
> 1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be the 
> testdist root, otherwise the ant command fails to find 
> resource/default.res.
> 2. A few tests, DB_OPS_5 and TABLE_OPS_2, were relying on default permissions 
> on the hive warehouse directory, which can vary by environment; improved 
> the tests to check whatever is set.
> 3. The DB_OPS_18 error message is outdated; we now get a more specific 
> message, so updated the test to verify the new one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5478) WebHCat e2e testsuite for hcat authorization tests needs some fixes

2013-10-07 Thread Deepesh Khandelwal (JIRA)
Deepesh Khandelwal created HIVE-5478:


 Summary: WebHCat e2e testsuite for hcat authorization tests needs 
some fixes
 Key: HIVE-5478
 URL: https://issues.apache.org/jira/browse/HIVE-5478
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.12.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Attachments: HIVE-5478.patch

Here are the issues:
1. The HARNESS_ROOT in the test-hcat-authorization testsuite needs to be the 
testdist root, otherwise the ant command fails to find resource/default.res.
2. A few tests, DB_OPS_5 and TABLE_OPS_2, were relying on default permissions on 
the hive warehouse directory, which can vary by environment; improved the tests 
to check whatever is set.
3. The DB_OPS_18 error message is outdated; we now get a more specific message, 
so updated the test to verify the new one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5477) maven-publish fails because it can't find hive-metastore-0.12.0.pom

2013-10-07 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788606#comment-13788606
 ] 

Ashutosh Chauhan commented on HIVE-5477:


+1 LGTM

> maven-publish fails because it can't find hive-metastore-0.12.0.pom
> ---
>
> Key: HIVE-5477
> URL: https://issues.apache.org/jira/browse/HIVE-5477
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Blocker
> Attachments: HIVE-5477.1.patch
>
>
> The maven-sign target is looking for 
> build/maven/jars/hive-metastore-0.12.0.pom (note the "jars" dir).
> The correct location is build/maven/poms/hive-metastore-0.12.0.pom
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5477) maven-publish fails because it can't find hive-metastore-0.12.0.pom

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5477:


Attachment: HIVE-5477.1.patch

HIVE-5477.1.patch - Looks for the metastore pom in the poms dir.
Can we forgo the 24-hour wait for this one, since it is essentially a minor typo 
fix? The 0.12 release is blocked by this.


> maven-publish fails because it can't find hive-metastore-0.12.0.pom
> ---
>
> Key: HIVE-5477
> URL: https://issues.apache.org/jira/browse/HIVE-5477
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Blocker
> Attachments: HIVE-5477.1.patch
>
>
> The maven-sign target is looking for 
> build/maven/jars/hive-metastore-0.12.0.pom (note the "jars" dir).
> The correct location is build/maven/poms/hive-metastore-0.12.0.pom
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5477) maven-publish fails because it can't find hive-metastore-0.12.0.pom

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5477:


Priority: Blocker  (was: Major)

> maven-publish fails because it can't find hive-metastore-0.12.0.pom
> ---
>
> Key: HIVE-5477
> URL: https://issues.apache.org/jira/browse/HIVE-5477
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Blocker
>
> The maven-sign target is looking for 
> build/maven/jars/hive-metastore-0.12.0.pom (note the "jars" dir).
> The correct location is build/maven/poms/hive-metastore-0.12.0.pom



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5477) maven-publish fails because it can't find hive-metastore-0.12.0.pom

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5477:


Description: 
The maven-sign target is looking for build/maven/jars/hive-metastore-0.12.0.pom 
(note the "jars" dir).

The correct location is build/maven/poms/hive-metastore-0.12.0.pom

NO PRECOMMIT TESTS



  was:
The maven-sign target is looking for build/maven/jars/hive-metastore-0.12.0.pom 
(note the "jars" dir).

The correct location is build/maven/poms/hive-metastore-0.12.0.pom



> maven-publish fails because it can't find hive-metastore-0.12.0.pom
> ---
>
> Key: HIVE-5477
> URL: https://issues.apache.org/jira/browse/HIVE-5477
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Blocker
>
> The maven-sign target is looking for 
> build/maven/jars/hive-metastore-0.12.0.pom (note the "jars" dir).
> The correct location is build/maven/poms/hive-metastore-0.12.0.pom
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5477) maven-publish fails because it can't find hive-metastore-0.12.0.pom

2013-10-07 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5477:
---

 Summary: maven-publish fails because it can't find 
hive-metastore-0.12.0.pom
 Key: HIVE-5477
 URL: https://issues.apache.org/jira/browse/HIVE-5477
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair


The maven-sign target is looking for build/maven/jars/hive-metastore-0.12.0.pom 
(note the "jars" dir).

The correct location is build/maven/poms/hive-metastore-0.12.0.pom




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-07 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5452:
-

Attachment: (was: BUG-5452.patch)

> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5452.patch
>
>
> HCatalog e2e test Pig_HBase_1 tries to read data from a table it created 
> using the org.apache.hcatalog.hbase.HBaseHCatStorageHandler using the hcat 
> loader org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The pig script should be using org.apache.hcatalog.pig.HCatLoader() instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-07 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5452:
-

Attachment: HIVE-5452.patch

Renaming the patch file to HIVE-5452.patch.

> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: BUG-5452.patch, HIVE-5452.patch
>
>
> HCatalog e2e test Pig_HBase_1 tries to read data from a table it created 
> using the org.apache.hcatalog.hbase.HBaseHCatStorageHandler using the hcat 
> loader org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The pig script should be using org.apache.hcatalog.pig.HCatLoader() instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5270) Enable hash joins using tez

2013-10-07 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788559#comment-13788559
 ] 

Gunther Hagleitner commented on HIVE-5270:
--

https://reviews.facebook.net/D13323

> Enable hash joins using tez
> ---
>
> Key: HIVE-5270
> URL: https://issues.apache.org/jira/browse/HIVE-5270
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: BroadCastJoinsHiveOnTez.pdf, HIVE-5270.1.patch
>
>
> Since hash join involves replicating a hash table to all the map tasks, an 
> equivalent operation needs to be performed in tez. In the tez world, such an 
> operation is done via a broadcast edge (TEZ-410). We need to rework the 
> planning and execution phases within hive for this.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5270) Enable hash joins using tez

2013-10-07 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5270:
-

Attachment: HIVE-5270.1.patch

> Enable hash joins using tez
> ---
>
> Key: HIVE-5270
> URL: https://issues.apache.org/jira/browse/HIVE-5270
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: BroadCastJoinsHiveOnTez.pdf, HIVE-5270.1.patch
>
>
> Since hash join involves replicating a hash table to all the map tasks, an 
> equivalent operation needs to be performed in tez. In the tez world, such an 
> operation is done via a broadcast edge (TEZ-410). We need to rework the 
> planning and execution phases within hive for this.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4871) Apache builds fail with Target "make-pom" does not exist in the project "hcatalog".

2013-10-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788539#comment-13788539
 ] 

Thejas M Nair commented on HIVE-4871:
-

Committed this to the 0.12 branch to be able to run the maven-build target.

> Apache builds fail with Target "make-pom" does not exist in the project 
> "hcatalog".
> ---
>
> Key: HIVE-4871
> URL: https://issues.apache.org/jira/browse/HIVE-4871
> Project: Hive
>  Issue Type: Sub-task
>  Components: HCatalog
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-4871.patch
>
>   Original Estimate: 24h
>  Time Spent: 24.4h
>  Remaining Estimate: 0h
>
> For example,
> https://builds.apache.org/job/Hive-trunk-h0.21/2192/console.
> All unit tests pass, but deployment of build artifacts fails.
> HIVE-4387 provided a bandaid for 0.11.  Need to figure out long term fix for 
> this for 0.12.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4871) Apache builds fail with Target "make-pom" does not exist in the project "hcatalog".

2013-10-07 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4871:


Fix Version/s: (was: 0.13.0)
   0.12.0

> Apache builds fail with Target "make-pom" does not exist in the project 
> "hcatalog".
> ---
>
> Key: HIVE-4871
> URL: https://issues.apache.org/jira/browse/HIVE-4871
> Project: Hive
>  Issue Type: Sub-task
>  Components: HCatalog
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-4871.patch
>
>   Original Estimate: 24h
>  Time Spent: 24.4h
>  Remaining Estimate: 0h
>
> For example,
> https://builds.apache.org/job/Hive-trunk-h0.21/2192/console.
> All unit tests pass, but deployment of build artifacts fails.
> HIVE-4387 provided a bandaid for 0.11.  Need to figure out long term fix for 
> this for 0.12.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5385) StringUtils is not in commons codec 1.3

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788477#comment-13788477
 ] 

Hudson commented on HIVE-5385:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #194 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/194/])
HIVE-5385 : StringUtils is not in commons codec 1.3 (Kousuke Saruta via Yin 
Huai) (yhuai: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529830)
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/shims/ivy.xml


> StringUtils is not in commons codec 1.3
> ---
>
> Key: HIVE-5385
> URL: https://issues.apache.org/jira/browse/HIVE-5385
> Project: Hive
>  Issue Type: Bug
>Reporter: Yin Huai
>Assignee: Kousuke Saruta
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5385.1.patch, HIVE-5385.2.patch
>
>
> In ThriftHttpServlet, introduced by HIVE-4763, StringUtils is imported, which 
> was introduced in commons codec 1.4. But our 0.20 shims depend on commons 
> codec 1.3, and our eclipse classpath template is also using the libs of the 0.20 
> shims, so we get two errors in eclipse. 
> Compiling hive will not have a problem because we are loading codec 1.4 for 
> project service (1.4 is also used when "-Dhadoop.version=0.20.2 
> -Dhadoop.mr.rev=20").



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5253) Create component to compile and jar dynamic code

2013-10-07 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788481#comment-13788481
 ] 

Brock Noland commented on HIVE-5253:


Hi Edward, yes since there was some concern earlier I thought I'd give people a 
chance to speak up.  I am +1.

> Create component to compile and jar dynamic code
> 
>
> Key: HIVE-5253
> URL: https://issues.apache.org/jira/browse/HIVE-5253
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Attachments: HIVE-5253.10.patch.txt, HIVE-5253.11.patch.txt, 
> HIVE-5253.1.patch.txt, HIVE-5253.3.patch.txt, HIVE-5253.3.patch.txt, 
> HIVE-5253.3.patch.txt, HIVE-5253.8.patch.txt, HIVE-5253.9.patch.txt, 
> HIVE-5253.patch.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4898) make vectorized math functions work end-to-end (update VectorizationContext.java)

2013-10-07 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-4898:
--

Attachment: HIVE-4898.3.patch

Uploading the same patch again to trigger automated tests.

> make vectorized math functions work end-to-end (update 
> VectorizationContext.java)
> -
>
> Key: HIVE-4898
> URL: https://issues.apache.org/jira/browse/HIVE-4898
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: vectorization-branch
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Attachments: HIVE-4898.3.patch, HIVE-4898.3.patch
>
>
> The vectorized math function VectorExpression classes were added in 
> HIVE-4822. This JIRA is to allow those to actually be used in a SQL query 
> end-to-end. This requires updating VectorizationContext to use the new 
> classes in vectorized expression creation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4898) make vectorized math functions work end-to-end (update VectorizationContext.java)

2013-10-07 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-4898:
--

Affects Version/s: vectorization-branch

> make vectorized math functions work end-to-end (update 
> VectorizationContext.java)
> -
>
> Key: HIVE-4898
> URL: https://issues.apache.org/jira/browse/HIVE-4898
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: vectorization-branch
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Attachments: HIVE-4898.3.patch, HIVE-4898.3.patch
>
>
> The vectorized math function VectorExpression classes were added in 
> HIVE-4822. This JIRA is to allow those to actually be used in a SQL query 
> end-to-end. This requires updating VectorizationContext to use the new 
> classes in vectorized expression creation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4898) make vectorized math functions work end-to-end (update VectorizationContext.java)

2013-10-07 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-4898:
--

Affects Version/s: (was: vectorization-branch)

> make vectorized math functions work end-to-end (update 
> VectorizationContext.java)
> -
>
> Key: HIVE-4898
> URL: https://issues.apache.org/jira/browse/HIVE-4898
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: vectorization-branch
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Attachments: HIVE-4898.3.patch, HIVE-4898.3.patch
>
>
> The vectorized math function VectorExpression classes were added in 
> HIVE-4822. This JIRA is to allow those to actually be used in a SQL query 
> end-to-end. This requires updating VectorizationContext to use the new 
> classes in vectorized expression creation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4898) make vectorized math functions work end-to-end (update VectorizationContext.java)

2013-10-07 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-4898:
--

Status: Open  (was: Patch Available)

I'll create a new, equivalent patch, to get the automated tests to run on trunk.

> make vectorized math functions work end-to-end (update 
> VectorizationContext.java)
> -
>
> Key: HIVE-4898
> URL: https://issues.apache.org/jira/browse/HIVE-4898
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: vectorization-branch
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Attachments: HIVE-4898.3.patch
>
>
> The vectorized math function VectorExpression classes were added in 
> HIVE-4822. This JIRA is to allow those to actually be used in a SQL query 
> end-to-end. This requires updating VectorizationContext to use the new 
> classes in vectorized expression creation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Bug in map join optimization causing "OutOfMemory" error

2013-10-07 Thread Mehant Baid

Hey Folks,

We are using hive-0.11 and are hitting java.lang.OutOfMemoryError. The 
problem seems to be in CommonJoinResolver.java (processCurrentTask()): 
in this function we try to convert a map-reduce join to a map join if 
'n-1' of the tables involved in an 'n'-way join have a size below a 
certain threshold.


If the tables are maintained by hive then we have accurate sizes for each 
table and can apply this optimization, but if the tables are created 
using storage handlers, HBaseStorageHandler in our case, then the size is 
set to zero. Because of this we assume that we can apply the optimization 
and convert the map-reduce join to a map join. We then build an in-memory 
hash table for all the keys; since our table created using the storage 
handler is large, it does not fit in memory and we hit the error.


Should I open a JIRA for this? One way to fix this is to set the size of 
the table (created using a storage handler) to be equal to the map join 
threshold. That way the table would be selected as the big table, and we 
can proceed with the optimization if the other tables in the join have 
sizes below the threshold. If we have multiple big tables then the 
optimization would be turned off.


Thanks
Mehant


[jira] [Updated] (HIVE-5441) Async query execution doesn't return resultset status

2013-10-07 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5441:
--

Attachment: HIVE-5441.3.patch

> Async query execution doesn't return resultset status
> -
>
> Key: HIVE-5441
> URL: https://issues.apache.org/jira/browse/HIVE-5441
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5441.1.patch, HIVE-5441.3.patch
>
>
> For synchronous statement execution (SQL as well as metadata and other), the 
> operation handle includes a boolean flag indicating whether the statement 
> returns a resultset. In case of async execution, that's always set to false.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Review Request 14486: HIVE-5441: Async query execution doesn't return resultset status

2013-10-07 Thread Prasad Mujumdar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14486/
---

(Updated Oct. 7, 2013, 6:51 p.m.)


Review request for hive.


Changes
---

Updated dependency on 14484 (HIVE-5440)


Bugs: HIVE-5441
https://issues.apache.org/jira/browse/HIVE-5441


Repository: hive-git


Description
---

Separate out the query compilation and execute that part synchronously.


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 5308e2c 
  service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java 
bb0f711 
  service/src/test/org/apache/hive/service/cli/CLIServiceTest.java 794ede8 

Diff: https://reviews.apache.org/r/14486/diff/


Testing
---

Added test cases


Thanks,

Prasad Mujumdar



Re: Review Request 14486: HIVE-5441: Async query execution doesn't return resultset status

2013-10-07 Thread Prasad Mujumdar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14486/
---

(Updated Oct. 7, 2013, 6:50 p.m.)


Review request for hive.


Changes
---

Updated patch


Bugs: HIVE-5441
https://issues.apache.org/jira/browse/HIVE-5441


Repository: hive-git


Description
---

Separate out the query compilation and execute that part synchronously.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 5308e2c 
  service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java 
bb0f711 
  service/src/test/org/apache/hive/service/cli/CLIServiceTest.java 794ede8 

Diff: https://reviews.apache.org/r/14486/diff/


Testing
---

Added test cases


Thanks,

Prasad Mujumdar



[jira] [Commented] (HIVE-4945) Make RLIKE/REGEXP run end-to-end by updating VectorizationContext

2013-10-07 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788375#comment-13788375
 ] 

Jitendra Nath Pandey commented on HIVE-4945:


+1.

> Make RLIKE/REGEXP run end-to-end by updating VectorizationContext
> -
>
> Key: HIVE-4945
> URL: https://issues.apache.org/jira/browse/HIVE-4945
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: vectorization-branch
>Reporter: Eric Hanson
>Assignee: Teddy Choi
> Attachments: HIVE-4945.1.patch.txt, HIVE-4945.2.patch.txt, 
> HIVE-4945.3.patch.txt, HIVE-4945.4.patch.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5377) the error handling in serialize/deserializeExpression is insufficient, some test may pass in error

2013-10-07 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788369#comment-13788369
 ] 

Ashutosh Chauhan commented on HIVE-5377:


Nope, this doesn't exist with Kryo serialization, and it doesn't repro with javaXML 
if you have HIVE-5411 applied.

> the error handling in serialize/deserializeExpression is insufficient, some 
> test may pass in error
> --
>
> Key: HIVE-5377
> URL: https://issues.apache.org/jira/browse/HIVE-5377
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Owen O'Malley
>
> TestSearchArgumentImpl output has stuff like this:
> {code}
> Continuing ...
> java.lang.NoSuchMethodException: 
> =GenericUDFBridge.setUdfClass(Class);
> Continuing ...
> java.lang.NoSuchMethodException: 
> =GenericUDFBridge.setUdfClass(Class);
> {code}
> XMLDecoder used in deserializeExpression by default would swallow some 
> exceptions, such as the ones above; setExceptionListener can be used to 
> receive those.
> When I set the listener to inline class that would rethrow them wrapped in 
> RuntimeException, the test failed.
> Discovered in HIVE-4914.
> It may be a test-specific issue, or some general Expr serialization issue 
> that may affect the real case.
> Also Kryo can now be used for serializing stuff.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5385) StringUtils is not in commons codec 1.3

2013-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788358#comment-13788358
 ] 

Hudson commented on HIVE-5385:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #129 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/129/])
HIVE-5385 : StringUtils is not in commons codec 1.3 (Kousuke Saruta via Yin 
Huai) (yhuai: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529830)
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/shims/ivy.xml


> StringUtils is not in commons codec 1.3
> ---
>
> Key: HIVE-5385
> URL: https://issues.apache.org/jira/browse/HIVE-5385
> Project: Hive
>  Issue Type: Bug
>Reporter: Yin Huai
>Assignee: Kousuke Saruta
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5385.1.patch, HIVE-5385.2.patch
>
>
> In ThriftHttpServlet, introduced by HIVE-4763, StringUtils is imported, which 
> was introduced in commons codec 1.4. But our 0.20 shims depend on commons 
> codec 1.3, and our eclipse classpath template is also using the libs of the 0.20 
> shims, so we get two errors in eclipse. 
> Compiling hive will not have a problem because we are loading codec 1.4 for 
> project service (1.4 is also used when "-Dhadoop.version=0.20.2 
> -Dhadoop.mr.rev=20").



--
This message was sent by Atlassian JIRA
(v6.1#6144)

