[jira] [Commented] (HIVE-2599) Support Composit/Compound Keys with HBaseStorageHandler

2013-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722112#comment-13722112
 ] 

Hive QA commented on HIVE-2599:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12594616/HIVE-2599.2.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 2737 tests executed
*Failed tests:*
{noformat}
org.apache.hcatalog.pig.TestHCatStorer.testMultiPartColsInData
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/219/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/219/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

> Support Composit/Compound Keys with HBaseStorageHandler
> ---
>
> Key: HIVE-2599
> URL: https://issues.apache.org/jira/browse/HIVE-2599
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 0.8.0
>Reporter: Hans Uhlig
>Assignee: Swarnim Kulkarni
> Attachments: HIVE-2599.1.patch.txt, HIVE-2599.2.patch.txt
>
>
> It would be really nice for Hive to be able to understand composite keys from 
> an underlying HBase schema. Currently we have to store key fields twice to be 
> able to both key on them and make the data available. I noticed John Sichi 
> mentioned in HIVE-1228 that this would be a separate issue, but I can't find 
> any follow-up. How feasible is this in the HBaseStorageHandler?
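
As a hedged illustration of the workaround described above (all table, column 
family, and field names here are hypothetical, not taken from the issue), the 
composite key today has to be flattened into the row key and duplicated into 
regular columns so it can be queried:

{code}
-- Hypothetical current workaround: the composite key is serialized into the
-- row key, and its parts are stored a second time as regular columns so that
-- Hive can query them.
CREATE EXTERNAL TABLE orders_hbase (
  rowkey STRING,        -- e.g. 'customer_id|order_id', duplicated below
  customer_id STRING,   -- copy of the first key part
  order_id STRING,      -- copy of the second key part
  amount DOUBLE
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,d:customer_id,d:order_id,d:amount")
TBLPROPERTIES ("hbase.table.name" = "orders");
{code}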

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2599) Support Composit/Compound Keys with HBaseStorageHandler

2013-07-28 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722091#comment-13722091
 ] 

Swarnim Kulkarni commented on HIVE-2599:


Review request: https://reviews.apache.org/r/13007/

> Support Composit/Compound Keys with HBaseStorageHandler
> ---
>
> Key: HIVE-2599
> URL: https://issues.apache.org/jira/browse/HIVE-2599
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 0.8.0
>Reporter: Hans Uhlig
>Assignee: Swarnim Kulkarni
> Attachments: HIVE-2599.1.patch.txt, HIVE-2599.2.patch.txt
>
>
> It would be really nice for Hive to be able to understand composite keys from 
> an underlying HBase schema. Currently we have to store key fields twice to be 
> able to both key on them and make the data available. I noticed John Sichi 
> mentioned in HIVE-1228 that this would be a separate issue, but I can't find 
> any follow-up. How feasible is this in the HBaseStorageHandler?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Review Request 13007: Add support to query composite/compound keys stored in HBase

2013-07-28 Thread Swarnim Kulkarni

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/13007/
---

Review request for hive.


Bugs: HIVE-2599
https://issues.apache.org/jira/browse/HIVE-2599


Repository: hive-git


Description
---

Added support to query composite keys stored in HBase.
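
As a hedged sketch of the kind of usage this aims at (table and field names are 
hypothetical, and the exact mapping and serialization details are whatever the 
patch under review defines), the row key would be exposed as a struct so its 
parts can be addressed directly instead of duplicating them into separate 
columns:

{code}
-- Hypothetical target usage once the handler understands composite keys.
CREATE EXTERNAL TABLE orders_hbase (
  key STRUCT<customer_id:STRING, order_id:STRING>,
  amount DOUBLE
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,d:amount");

-- Key parts are then queried like ordinary struct fields.
SELECT key.customer_id, SUM(amount)
FROM orders_hbase
GROUP BY key.customer_id;
{code}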


Diffs
-

  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 4900a41 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java b254b0d 
  
hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java 
PRE-CREATION 
  hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseSerDe.java 
d25c731 
  
serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazySimpleStructObjectInspector.java
 08400f1 

Diff: https://reviews.apache.org/r/13007/diff/


Testing
---

Added additional unit tests to verify the functionality, and ensured that all 
existing unit tests pass.


Thanks,

Swarnim Kulkarni



[jira] [Updated] (HIVE-2599) Support Composit/Compound Keys with HBaseStorageHandler

2013-07-28 Thread Swarnim Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swarnim Kulkarni updated HIVE-2599:
---

Attachment: HIVE-2599.2.patch.txt

Rebased against master to get a clean patch. If a committer gets a chance to 
review this, that would be awesome! Thanks!

> Support Composit/Compound Keys with HBaseStorageHandler
> ---
>
> Key: HIVE-2599
> URL: https://issues.apache.org/jira/browse/HIVE-2599
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 0.8.0
>Reporter: Hans Uhlig
>Assignee: Swarnim Kulkarni
> Attachments: HIVE-2599.1.patch.txt, HIVE-2599.2.patch.txt
>
>
> It would be really nice for Hive to be able to understand composite keys from 
> an underlying HBase schema. Currently we have to store key fields twice to be 
> able to both key on them and make the data available. I noticed John Sichi 
> mentioned in HIVE-1228 that this would be a separate issue, but I can't find 
> any follow-up. How feasible is this in the HBaseStorageHandler?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4928) Date literals do not work properly in partition spec clause

2013-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722084#comment-13722084
 ] 

Hive QA commented on HIVE-4928:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12594613/HIVE-4928.D11871.1.patch

{color:green}SUCCESS:{color} +1 2736 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/218/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/218/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Date literals do not work properly in partition spec clause
> ---
>
> Key: HIVE-4928
> URL: https://issues.apache.org/jira/browse/HIVE-4928
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-4928.1.patch.txt, HIVE-4928.D11871.1.patch
>
>
> The partition spec parsing doesn't do any real evaluation of the values in 
> the partition spec; it just takes the text value of the ASTNode representing 
> the partition value. This works fine for string/numeric literals (expression 
> tree below):
> (TOK_PARTVAL region 99)
> But not for date literals, which are of the form DATE 'yyyy-mm-dd' (expression 
> tree below):
> (TOK_DATELITERAL '1999-12-31')
> In this case the parser/analyzer uses "TOK_DATELITERAL" as the partition 
> column value, when it should really use the value of the child of the 
> DATELITERAL token.
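
To make the clause concrete, a hedged HiveQL sketch (table and column names are 
hypothetical): the string form of the partition spec works, while the DATE 
literal form hits the bug described above.

{code}
-- Hypothetical table with a date partition column.
CREATE TABLE sales (item STRING) PARTITIONED BY (sale_date DATE);

-- Plain string value in the partition spec: the literal text is used as the
-- partition value, so this works.
ALTER TABLE sales ADD PARTITION (sale_date = '1999-12-31');

-- DATE literal in the partition spec: per the description, the analyzer picks
-- up "TOK_DATELITERAL" instead of '1999-12-31' as the partition value.
ALTER TABLE sales ADD PARTITION (sale_date = DATE '1999-12-31');
{code}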

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4928) Date literals do not work properly in partition spec clause

2013-07-28 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722076#comment-13722076
 ] 

Jason Dere commented on HIVE-4928:
--

Review at https://reviews.facebook.net/D11871

These changes use a Java action in the ANTLR grammar to create a DATELITERAL 
token containing the text of the date literal string. There didn't seem to be 
any other way to do that.

I ran the unit tests last night on a Mac; 4 failed, but they all passed when I 
ran them on a Linux VM.

> Date literals do not work properly in partition spec clause
> ---
>
> Key: HIVE-4928
> URL: https://issues.apache.org/jira/browse/HIVE-4928
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-4928.1.patch.txt, HIVE-4928.D11871.1.patch
>
>
> The partition spec parsing doesn't do any real evaluation of the values in 
> the partition spec; it just takes the text value of the ASTNode representing 
> the partition value. This works fine for string/numeric literals (expression 
> tree below):
> (TOK_PARTVAL region 99)
> But not for date literals, which are of the form DATE 'yyyy-mm-dd' (expression 
> tree below):
> (TOK_DATELITERAL '1999-12-31')
> In this case the parser/analyzer uses "TOK_DATELITERAL" as the partition 
> column value, when it should really use the value of the child of the 
> DATELITERAL token.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4928) Date literals do not work properly in partition spec clause

2013-07-28 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4928:
--

Attachment: HIVE-4928.D11871.1.patch

jdere requested code review of "HIVE-4928 [jira] Date literals do not work 
properly in partition spec clause".

Reviewers: JIRA

HIVE-4928 fix date literal parsing to work in partition_spec clause

The partition spec parsing doesn't do any real evaluation of the values in the 
partition spec; it just takes the text value of the ASTNode representing the 
partition value. This works fine for string/numeric literals (expression tree 
below):

(TOK_PARTVAL region 99)

But not for date literals, which are of the form DATE 'yyyy-mm-dd' (expression 
tree below):

(TOK_DATELITERAL '1999-12-31')

In this case the parser/analyzer uses "TOK_DATELITERAL" as the partition column 
value, when it should really use the value of the child of the DATELITERAL token.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D11871

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
  ql/src/test/queries/clientpositive/partition_date2.q
  ql/src/test/results/clientpositive/partition_date2.q.out

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/28221/

To: JIRA, jdere


> Date literals do not work properly in partition spec clause
> ---
>
> Key: HIVE-4928
> URL: https://issues.apache.org/jira/browse/HIVE-4928
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-4928.1.patch.txt, HIVE-4928.D11871.1.patch
>
>
> The partition spec parsing doesn't do any real evaluation of the values in 
> the partition spec; it just takes the text value of the ASTNode representing 
> the partition value. This works fine for string/numeric literals (expression 
> tree below):
> (TOK_PARTVAL region 99)
> But not for date literals, which are of the form DATE 'yyyy-mm-dd' (expression 
> tree below):
> (TOK_DATELITERAL '1999-12-31')
> In this case the parser/analyzer uses "TOK_DATELITERAL" as the partition 
> column value, when it should really use the value of the child of the 
> DATELITERAL token.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4928) Date literals do not work properly in partition spec clause

2013-07-28 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-4928:
-

Description: 
The partition spec parsing doesn't do any real evaluation of the values in the 
partition spec; it just takes the text value of the ASTNode representing the 
partition value. This works fine for string/numeric literals (expression tree 
below):

(TOK_PARTVAL region 99)

But not for date literals, which are of the form DATE 'yyyy-mm-dd' (expression 
tree below):

(TOK_DATELITERAL '1999-12-31')

In this case the parser/analyzer uses "TOK_DATELITERAL" as the partition column 
value, when it should really use the value of the child of the DATELITERAL token.



  was:
The partition spec parsing doesn't do any real evaluation of the values in the 
partition spec; it just takes the text value of the ASTNode representing the 
partition value. This works fine for string/numeric literals (expression tree 
below):

(TOK_PARTVAL region 99)

But not for date literals, which are of the form DATE 'yyyy-mm-dd' (expression 
tree below):

(TOK_DATELITERAL '1999-12-31')

In this case the parser/analyzer uses "TOK_DATELITERAL" as the partition column 
value, when it should really use the value of the child of the DATELITERAL token.


NO PRECOMMIT TESTS


> Date literals do not work properly in partition spec clause
> ---
>
> Key: HIVE-4928
> URL: https://issues.apache.org/jira/browse/HIVE-4928
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-4928.1.patch.txt
>
>
> The partition spec parsing doesn't do any real evaluation of the values in 
> the partition spec; it just takes the text value of the ASTNode representing 
> the partition value. This works fine for string/numeric literals (expression 
> tree below):
> (TOK_PARTVAL region 99)
> But not for date literals, which are of the form DATE 'yyyy-mm-dd' (expression 
> tree below):
> (TOK_DATELITERAL '1999-12-31')
> In this case the parser/analyzer uses "TOK_DATELITERAL" as the partition 
> column value, when it should really use the value of the child of the 
> DATELITERAL token.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4683) fix coverage org.apache.hadoop.hive.cli

2013-07-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4683:
---

Affects Version/s: (was: 0.11.1)
   (was: 0.12.0)
   (was: 0.10.1)
   0.10.0
   0.11.0
   Status: Open  (was: Patch Available)

Left couple of comments on ReviewBoard.

> fix coverage org.apache.hadoop.hive.cli
> ---
>
> Key: HIVE-4683
> URL: https://issues.apache.org/jira/browse/HIVE-4683
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0, 0.10.0
>Reporter: Aleksey Gorshkov
>Assignee: Aleksey Gorshkov
> Attachments: HIVE-4683-branch-0.10.patch, 
> HIVE-4683-branch-0.10-v1.patch, HIVE-4683-branch-0.11-v1.patch, 
> HIVE-4683-trunk.patch, HIVE-4683-trunk-v1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4520) java.lang.NegativeArraySizeException when query on hive-0.11.0, hbase-0.94.6.1

2013-07-28 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722016#comment-13722016
 ] 

Swarnim Kulkarni commented on HIVE-4520:


I updated the affected versions and environment on HIVE-4515 to reflect that. 
Marking this as a duplicate.

> java.lang.NegativeArraySizeException when query on hive-0.11.0, hbase-0.94.6.1
> --
>
> Key: HIVE-4520
> URL: https://issues.apache.org/jira/browse/HIVE-4520
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.11.0
> Environment: hive-0.11.0
> hbase-0.94.6.1
> zookeeper-3.4.3
> hadoop-1.0.4
> centos-5.7
>Reporter: Yanhui Ma
>Priority: Critical
>
> After integrating hive-0.11.0 with hbase-0.94.6.1, these commands could be 
> executed successfully:
> create table
> insert overwrite table
> select * from table
> However, executing "select count(*) from table" throws an exception:
> hive> select count(*) from test; 
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201305061042_0028, Tracking URL = 
> http://master0:50030/jobdetails.jsp?jobid=job_201305061042_0028
> Kill Command = /opt/modules/hadoop/hadoop-1.0.4/libexec/../bin/hadoop job  
> -kill job_201305061042_0028
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2013-05-07 18:41:42,649 Stage-1 map = 0%,  reduce = 0%
> 2013-05-07 18:42:14,789 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201305061042_0028 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: 
> http://master0:50030/jobdetails.jsp?jobid=job_201305061042_0028
> Examining task ID: task_201305061042_0028_m_02 (and more) from job 
> job_201305061042_0028
> Task with the most failures(4): 
> -
> Task ID:
>   task_201305061042_0028_m_00
> URL:
>   
> http://master0:50030/taskdetails.jsp?jobid=job_201305061042_0028&tipid=task_201305061042_0028_m_00
> -
> Diagnostic Messages for this Task:
> java.lang.NegativeArraySizeException: -1
>   at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:148)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableSplit.readFields(TableSplit.java:133)
>   at 
> org.apache.hadoop.hive.hbase.HBaseSplit.readFields(HBaseSplit.java:53)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:150)
>   at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>   at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>   at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:396)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:412)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>   at org.apache.hadoop.mapred.Child.main(Child.java:249)
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.MapRedTask
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> ==
> The log of tasktracker:
> stderr logs
> 13/05/07 18:43:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> 13/05/07 18:43:20 INFO mapred.TaskRunner: Creating symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/distcache/107328478296390_-1298160740_2123690974/master0/tmp/hive-hadoop/hive_2013-05-07_18-41-30_290_832140779606816147/-mr-10003/fd22448b-e923-498c-bc00-2164ca68447d
>  <- 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/attempt_201305061042_0028_m_00_0/work/HIVE_PLANfd22448b-e923-498c-bc00-2164ca68447d
> 13/05/07 18:43:20 INFO filecache.TrackerDistributedCacheManager: Creating 
> symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/jars/javolution
>  <- 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/attempt_201305061042_0028_m_00_0/w

[jira] [Resolved] (HIVE-4520) java.lang.NegativeArraySizeException when query on hive-0.11.0, hbase-0.94.6.1

2013-07-28 Thread Swarnim Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swarnim Kulkarni resolved HIVE-4520.


Resolution: Duplicate

> java.lang.NegativeArraySizeException when query on hive-0.11.0, hbase-0.94.6.1
> --
>
> Key: HIVE-4520
> URL: https://issues.apache.org/jira/browse/HIVE-4520
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.11.0
> Environment: hive-0.11.0
> hbase-0.94.6.1
> zookeeper-3.4.3
> hadoop-1.0.4
> centos-5.7
>Reporter: Yanhui Ma
>Priority: Critical
>
> After integrating hive-0.11.0 with hbase-0.94.6.1, these commands could be 
> executed successfully:
> create table
> insert overwrite table
> select * from table
> However, executing "select count(*) from table" throws an exception:
> hive> select count(*) from test; 
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201305061042_0028, Tracking URL = 
> http://master0:50030/jobdetails.jsp?jobid=job_201305061042_0028
> Kill Command = /opt/modules/hadoop/hadoop-1.0.4/libexec/../bin/hadoop job  
> -kill job_201305061042_0028
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2013-05-07 18:41:42,649 Stage-1 map = 0%,  reduce = 0%
> 2013-05-07 18:42:14,789 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201305061042_0028 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: 
> http://master0:50030/jobdetails.jsp?jobid=job_201305061042_0028
> Examining task ID: task_201305061042_0028_m_02 (and more) from job 
> job_201305061042_0028
> Task with the most failures(4): 
> -
> Task ID:
>   task_201305061042_0028_m_00
> URL:
>   
> http://master0:50030/taskdetails.jsp?jobid=job_201305061042_0028&tipid=task_201305061042_0028_m_00
> -
> Diagnostic Messages for this Task:
> java.lang.NegativeArraySizeException: -1
>   at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:148)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableSplit.readFields(TableSplit.java:133)
>   at 
> org.apache.hadoop.hive.hbase.HBaseSplit.readFields(HBaseSplit.java:53)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:150)
>   at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>   at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>   at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:396)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:412)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>   at org.apache.hadoop.mapred.Child.main(Child.java:249)
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.MapRedTask
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> ==
> The log of tasktracker:
> stderr logs
> 13/05/07 18:43:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> 13/05/07 18:43:20 INFO mapred.TaskRunner: Creating symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/distcache/107328478296390_-1298160740_2123690974/master0/tmp/hive-hadoop/hive_2013-05-07_18-41-30_290_832140779606816147/-mr-10003/fd22448b-e923-498c-bc00-2164ca68447d
>  <- 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/attempt_201305061042_0028_m_00_0/work/HIVE_PLANfd22448b-e923-498c-bc00-2164ca68447d
> 13/05/07 18:43:20 INFO filecache.TrackerDistributedCacheManager: Creating 
> symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/jars/javolution
>  <- 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/attempt_201305061042_0028_m_00_0/work/javolution
> 13/05/07 18:43:20 INFO filecache.TrackerDistributedCacheManager: Creating 
> symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTrac

[jira] [Updated] (HIVE-4515) "select count(*) from table" query on hive-0.10.0, hbase-0.94.7 integration throws exceptions

2013-07-28 Thread Swarnim Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swarnim Kulkarni updated HIVE-4515:
---

Affects Version/s: 0.11.0

> "select count(*) from table" query on hive-0.10.0, hbase-0.94.7 integration 
> throws exceptions
> -
>
> Key: HIVE-4515
> URL: https://issues.apache.org/jira/browse/HIVE-4515
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.10.0, 0.11.0
> Environment: hive-0.10.0, hive-0.11.0
> hbase-0.94.7, hbase-0.94.6.1
> zookeeper-3.4.3
> hadoop-1.0.4
> centos-5.7
>Reporter: Yanhui Ma
>Priority: Critical
>
> After integrating hive-0.10.0 with hbase-0.94.7, these commands could be 
> executed successfully:
> create table
> insert overwrite table
> select * from table
> However, executing "select count(*) from table" throws an exception:
> hive> select count(*) from test; 
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201305061042_0028, Tracking URL = 
> http://master0:50030/jobdetails.jsp?jobid=job_201305061042_0028
> Kill Command = /opt/modules/hadoop/hadoop-1.0.4/libexec/../bin/hadoop job  
> -kill job_201305061042_0028
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2013-05-07 18:41:42,649 Stage-1 map = 0%,  reduce = 0%
> 2013-05-07 18:42:14,789 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201305061042_0028 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: 
> http://master0:50030/jobdetails.jsp?jobid=job_201305061042_0028
> Examining task ID: task_201305061042_0028_m_02 (and more) from job 
> job_201305061042_0028
> Task with the most failures(4): 
> -
> Task ID:
>   task_201305061042_0028_m_00
> URL:
>   
> http://master0:50030/taskdetails.jsp?jobid=job_201305061042_0028&tipid=task_201305061042_0028_m_00
> -
> Diagnostic Messages for this Task:
> java.lang.NegativeArraySizeException: -1
>   at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:148)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableSplit.readFields(TableSplit.java:133)
>   at 
> org.apache.hadoop.hive.hbase.HBaseSplit.readFields(HBaseSplit.java:53)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:150)
>   at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>   at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>   at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:396)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:412)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>   at org.apache.hadoop.mapred.Child.main(Child.java:249)
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.MapRedTask
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> ==
> The log of tasktracker:
> stderr logs
> 13/05/07 18:43:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> 13/05/07 18:43:20 INFO mapred.TaskRunner: Creating symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/distcache/107328478296390_-1298160740_2123690974/master0/tmp/hive-hadoop/hive_2013-05-07_18-41-30_290_832140779606816147/-mr-10003/fd22448b-e923-498c-bc00-2164ca68447d
>  <- 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/attempt_201305061042_0028_m_00_0/work/HIVE_PLANfd22448b-e923-498c-bc00-2164ca68447d
> 13/05/07 18:43:20 INFO filecache.TrackerDistributedCacheManager: Creating 
> symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/jars/javolution
>  <- 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/attempt_201305061042_0028_m_00_0/work/javolution
> 13/05/07 18:43:20 INFO filecache.TrackerDistributedCacheManag

[jira] [Updated] (HIVE-4515) "select count(*) from table" query on hive-0.10.0, hbase-0.94.7 integration throws exceptions

2013-07-28 Thread Swarnim Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swarnim Kulkarni updated HIVE-4515:
---

Environment: 
hive-0.10.0, hive-0.11.0
hbase-0.94.7, hbase-0.94.6.1
zookeeper-3.4.3
hadoop-1.0.4

centos-5.7

  was:
hive-0.10.0
hbase-0.94.7
zookeeper-3.4.3
hadoop-1.0.4

centos-5.7


> "select count(*) from table" query on hive-0.10.0, hbase-0.94.7 integration 
> throws exceptions
> -
>
> Key: HIVE-4515
> URL: https://issues.apache.org/jira/browse/HIVE-4515
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.10.0
> Environment: hive-0.10.0, hive-0.11.0
> hbase-0.94.7, hbase-0.94.6.1
> zookeeper-3.4.3
> hadoop-1.0.4
> centos-5.7
>Reporter: Yanhui Ma
>Priority: Critical
>
> After integrating hive-0.10.0 with hbase-0.94.7, these commands could be 
> executed successfully:
> create table
> insert overwrite table
> select * from table
> However, executing "select count(*) from table" throws an exception:
> hive> select count(*) from test; 
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201305061042_0028, Tracking URL = 
> http://master0:50030/jobdetails.jsp?jobid=job_201305061042_0028
> Kill Command = /opt/modules/hadoop/hadoop-1.0.4/libexec/../bin/hadoop job  
> -kill job_201305061042_0028
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2013-05-07 18:41:42,649 Stage-1 map = 0%,  reduce = 0%
> 2013-05-07 18:42:14,789 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201305061042_0028 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: 
> http://master0:50030/jobdetails.jsp?jobid=job_201305061042_0028
> Examining task ID: task_201305061042_0028_m_02 (and more) from job 
> job_201305061042_0028
> Task with the most failures(4): 
> -
> Task ID:
>   task_201305061042_0028_m_00
> URL:
>   
> http://master0:50030/taskdetails.jsp?jobid=job_201305061042_0028&tipid=task_201305061042_0028_m_00
> -
> Diagnostic Messages for this Task:
> java.lang.NegativeArraySizeException: -1
>   at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:148)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableSplit.readFields(TableSplit.java:133)
>   at 
> org.apache.hadoop.hive.hbase.HBaseSplit.readFields(HBaseSplit.java:53)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:150)
>   at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>   at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>   at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:396)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:412)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>   at org.apache.hadoop.mapred.Child.main(Child.java:249)
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.MapRedTask
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> ==
> The log of tasktracker:
> stderr logs
> 13/05/07 18:43:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> 13/05/07 18:43:20 INFO mapred.TaskRunner: Creating symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/distcache/107328478296390_-1298160740_2123690974/master0/tmp/hive-hadoop/hive_2013-05-07_18-41-30_290_832140779606816147/-mr-10003/fd22448b-e923-498c-bc00-2164ca68447d
>  <- 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/attempt_201305061042_0028_m_00_0/work/HIVE_PLANfd22448b-e923-498c-bc00-2164ca68447d
> 13/05/07 18:43:20 INFO filecache.TrackerDistributedCacheManager: Creating 
> symlink: 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201305061042_0028/jars/javolution
>  <- 
> /tmp/hadoop-hadoop/mapred/local/taskTracker/hado

[jira] [Updated] (HIVE-2991) Integrate Clover with Hive

2013-07-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2991:
---

Affects Version/s: 0.10.0
   0.11.0
   Status: Open  (was: Patch Available)

Canceling the patch for now. Breaking it down into a series of patches for 
different issues (instead of clubbing everything together) is a good idea.

> Integrate Clover with Hive
> --
>
> Key: HIVE-2991
> URL: https://issues.apache.org/jira/browse/HIVE-2991
> Project: Hive
>  Issue Type: Test
>  Components: Testing Infrastructure
>Affects Versions: 0.11.0, 0.10.0, 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Ivan A. Veselovsky
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2991.D2985.1.patch, 
> hive.2991.1.branch-0.10.patch, hive.2991.1.branch-0.9.patch, 
> hive.2991.1.trunk.patch, hive.2991.2.branch-0.10.patch, 
> hive.2991.2.branch-0.9.patch, hive.2991.2.trunk.patch, 
> hive.2991.3.branch-0.10.patch, hive.2991.3.branch-0.9.patch, 
> hive.2991.3.trunk.patch, hive.2991.4.branch-0.10.patch, 
> hive.2991.4.branch-0.9.patch, hive.2991.4.trunk.patch, 
> HIVE-clover-branch-0.10--N1.patch, HIVE-clover-branch-0.11--N1.patch, 
> HIVE-clover-trunk--N1.patch, hive-trunk-clover-html-report.zip
>
>
> Atlassian has donated a license for their code coverage tool Clover to the 
> ASF. Let's make use of it to generate a code coverage report and figure out 
> which areas of Hive are well tested and which ones are not. More information 
> about the license can be found in the Hadoop JIRA HADOOP-1718.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2702) listPartitionsByFilter only supports string partitions for equals

2013-07-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722009#comment-13722009
 ] 

Ashutosh Chauhan commented on HIVE-2702:


I ran the tests with the current patch on the latest trunk. All tests passed 
except alter_partition_coltype.q, for which you just need to update the .q.out 
file. Can you also include that update in your next refresh of the patch?

> listPartitionsByFilter only supports string partitions for equals
> -
>
> Key: HIVE-2702
> URL: https://issues.apache.org/jira/browse/HIVE-2702
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Aniket Mokashi
>Assignee: Sergey Shelukhin
> Fix For: 0.12.0
>
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2702.D2043.1.patch, 
> HIVE-2702.1.patch, HIVE-2702.D11715.1.patch, HIVE-2702.D11715.2.patch, 
> HIVE-2702.D11715.3.patch, HIVE-2702.D11847.1.patch, HIVE-2702.patch, 
> HIVE-2702-v0.patch
>
>
> listPartitionsByFilter supports only string partition keys. This is because 
> it's explicitly specified in generateJDOFilterOverPartitions in 
> ExpressionTree.java:
> //Can only support partitions whose types are string
>   if( ! table.getPartitionKeys().get(partitionColumnIndex).
>       getType().equals(org.apache.hadoop.hive.serde.Constants.STRING_TYPE_NAME) ) {
>     throw new MetaException
>       ("Filtering is supported only on partition keys of type string");
>   }
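
To make the limitation concrete, a hedged HiveQL sketch (table and column names 
are hypothetical): per the quoted check, listPartitionsByFilter accepts a filter 
only when the partition key is of type string.

{code}
-- String partition key: an equality filter such as region = '99' can be
-- evaluated by the metastore through listPartitionsByFilter.
CREATE TABLE events_str (msg STRING) PARTITIONED BY (region STRING);

-- Non-string partition key: per the quoted check, pushing the same kind of
-- filter down raises "Filtering is supported only on partition keys of type
-- string".
CREATE TABLE events_int (msg STRING) PARTITIONED BY (region INT);
{code}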

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4879) Window functions that imply order can only be registered at compile time

2013-07-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722002#comment-13722002
 ] 

Brock Noland commented on HIVE-4879:


Hey, 

Yeah, it takes a few more steps, that's for sure. You can see which tests 
executed/failed in the [Jenkins test 
report|https://builds.apache.org/job/PreCommit-HIVE-Build/215/testReport/], but 
that doesn't show much of value for .q file tests. If you open the 
[console|https://builds.apache.org/job/PreCommit-HIVE-Build/215/console], at the 
bottom there is a link:

{noformat}
Logs are located: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-215
{noformat}

If you follow that link and look in the "failed" directory, you will see the 
test batch that included the failed test. That directory contains all the logs 
for the test.

> Window functions that imply order can only be registered at compile time
> 
>
> Key: HIVE-4879
> URL: https://issues.apache.org/jira/browse/HIVE-4879
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.11.0
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 0.12.0
>
> Attachments: HIVE-4879.1.patch.txt, HIVE-4879.2.patch.txt
>
>
> Adding an annotation for impliesOrder

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4879) Window functions that imply order can only be registered at compile time

2013-07-28 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721995#comment-13721995
 ] 

Edward Capriolo commented on HIVE-4879:
---

[~brocknoland] I cannot see any files that show that this test ran. Am I 
looking in the wrong place?

> Window functions that imply order can only be registered at compile time
> 
>
> Key: HIVE-4879
> URL: https://issues.apache.org/jira/browse/HIVE-4879
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.11.0
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 0.12.0
>
> Attachments: HIVE-4879.1.patch.txt, HIVE-4879.2.patch.txt
>
>
> Adding an annotation for impliesOrder

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4949) ant test -Dtestcase cannot execute some tests

2013-07-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-4949:
--

 Summary: ant test -Dtestcase cannot execute some tests
 Key: HIVE-4949
 URL: https://issues.apache.org/jira/browse/HIVE-4949
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland


The following command
{noformat}
$ ant test -Dtestcase=TestHadoop20SAuthBridge
{noformat}
does not execute any test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4920) PTest2 handle Spot Price increases gracefully and improve rsync paralllelsim

2013-07-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-4920:
---

Attachment: HIVE-4920.patch

Minor update: we now handle skipped tests (via @Ignore) properly. New unit 
tests were added to cover this.

> PTest2 handle Spot Price increases gracefully and improve rsync paralllelsim
> 
>
> Key: HIVE-4920
> URL: https://issues.apache.org/jira/browse/HIVE-4920
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Critical
> Attachments: HIVE-4920.patch, HIVE-4920.patch, Screen Shot 2013-07-23 
> at 3.35.00 PM.png
>
>
> We should handle spot price increases more gracefully and parallelize rsync 
> to slaves better
> NO PRECOMMIT TESTS

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4941) PTest2 Investigate Ignores

2013-07-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721984#comment-13721984
 ] 

Brock Noland commented on HIVE-4941:


Actually, item 4 above should be that some tests cannot be executed via ant 
test -Dtestcase, for example TestHadoop20SAuthBridge.

> PTest2 Investigate Ignores
> --
>
> Key: HIVE-4941
> URL: https://issues.apache.org/jira/browse/HIVE-4941
> Project: Hive
>  Issue Type: Task
>Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Minor
>
> Currently we are excluding the following tests:
> unitTests.exclude = TestHiveMetaStore TestSerDe TestBeeLineDriver 
> TestHiveServer2Concurrency TestJdbcDriver2 TestHiveServer2Concurrency 
> TestBeeLineDriver
> Some of them we got from the build files, but I am not sure about 
> TestJdbcDriver2, for example. We should investigate why these are excluded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4948) WriteLockTest and ZNodeNameTest do not follow test naming pattern

2013-07-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-4948:
--

 Summary: WriteLockTest and ZNodeNameTest do not follow test naming 
pattern
 Key: HIVE-4948
 URL: https://issues.apache.org/jira/browse/HIVE-4948
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Priority: Minor


These tests should be renamed TestWriteLock and TestZNodeName

org.apache.hcatalog.hbase.snapshot.lock.WriteLockTest
org.apache.hcatalog.hbase.snapshot.lock.ZNodeNameTest

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4941) PTest2 Investigate Ignores

2013-07-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721967#comment-13721967
 ] 

Brock Noland commented on HIVE-4941:


OK, the first item is that PTest2 wasn't reporting on all the tests that it 
ran. This was fixed in HIVE-4892. Now, as it did 
[here|https://issues.apache.org/jira/browse/HIVE-4299?focusedCommentId=13721929&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13721929]
 and 
[here|https://issues.apache.org/jira/browse/HIVE-3926?focusedCommentId=13721861&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13721861],
 PTest2 should be reporting ~2729 tests.

I counted the number of tests run by ant test and by ptest2 and got the 
following, using grep.

> PTest2 Investigate Ignores
> --
>
> Key: HIVE-4941
> URL: https://issues.apache.org/jira/browse/HIVE-4941
> Project: Hive
>  Issue Type: Task
>Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Minor
>
> Currently we are excluding the following tests:
> unitTests.exclude = TestHiveMetaStore TestSerDe TestBeeLineDriver 
> TestHiveServer2Concurrency TestJdbcDriver2 TestHiveServer2Concurrency 
> TestBeeLineDriver
> Some of them we got from the build files, but I am not sure about 
> TestJdbcDriver2, for example. We should investigate why these are excluded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4299) exported metadata by HIVE-3068 cannot be imported because of wrong file name

2013-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721929#comment-13721929
 ] 

Hive QA commented on HIVE-4299:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12594564/HIVE-4299.5.patch.txt

{color:green}SUCCESS:{color} +1 2729 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/216/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/216/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> exported metadata by HIVE-3068 cannot be imported because of wrong file name
> 
>
> Key: HIVE-4299
> URL: https://issues.apache.org/jira/browse/HIVE-4299
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Sho Shimauchi
>Assignee: Edward Capriolo
> Attachments: HIVE-4299.1.patch.txt, HIVE-4299.4.patch.txt, 
> HIVE-4299.5.patch.txt, HIVE-4299.patch
>
>
> h2. Symptom
> When a table is dropped with DROP TABLE, metadata for the table is exported 
> so that the dropped table can be imported again.
> However, the exported metadata file is named '.metadata'.
> Since ImportSemanticAnalyzer allows only '_metadata' as the metadata filename, 
> the user has to rename the metadata file to import the table.
> h2. How to reproduce
> Add the following setting to hive-site.xml:
> {code}
>   <property>
>     <name>hive.metastore.pre.event.listeners</name>
>     <value>org.apache.hadoop.hive.ql.parse.MetaDataExportListener</value>
>   </property>
> {code}
> Then run the following queries:
> {code}
> > CREATE TABLE test_table (id INT, name STRING);
> > DROP TABLE test_table;
> > IMPORT TABLE test_table_imported FROM '/path/to/metadata/file';
> FAILED: SemanticException [Error 10027]: Invalid path
> {code}
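
Until the filename mismatch is fixed, the rename workaround implied above could 
look like the following hedged sketch from the Hive CLI ('/path/to/metadata/file' 
is the same placeholder used in the reproduction; substitute the directory where 
MetaDataExportListener actually wrote the metadata):

{code}
-- Rename the exported file so ImportSemanticAnalyzer accepts it, then import.
dfs -mv /path/to/metadata/file/.metadata /path/to/metadata/file/_metadata;
IMPORT TABLE test_table_imported FROM '/path/to/metadata/file';
{code}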

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4879) Window functions that imply order can only be registered at compile time

2013-07-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721921#comment-13721921
 ] 

Hive QA commented on HIVE-4879:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12594577/HIVE-4879.2.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 2730 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_windowing
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/215/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/215/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

> Window functions that imply order can only be registered at compile time
> 
>
> Key: HIVE-4879
> URL: https://issues.apache.org/jira/browse/HIVE-4879
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.11.0
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 0.12.0
>
> Attachments: HIVE-4879.1.patch.txt, HIVE-4879.2.patch.txt
>
>
> Adding an annotation for impliesOrder

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira